This is the API documentation for version 1.0 of the QEngine. The QEngine is a C++14 library for performing simulations and optimal control of quantum systems.
Instructions for installing the QEngine can be found in README.md. If you use the QEngine in your research or software, please support us by citing,
The principles for this API are as follows:
All functionality described in this API is supported and it will not change in any minor versions except for extremely serious bugs. However, some functionality may become deprecated in future versions. Functionality not discussed in this document is not considered part of the API and it may change even in minor versions.
The QEngine depends heavily on templates, and class names are generally not part of the API. The exceptions to this rule are the basic mathematics types and the DataContainer. Objects are constructed from factory-functions starting with 'make' (e.g. makeOperator or makeState), which are either free functions or members. These factory-functions are part of the API, as is the use of the created objects.
QEngine has many debugging checks that may be numerically expensive. To disable them, set DISABLE_DEBUG_CHECKS to true in your settings.txt file:
set(DISABLE_DEBUG_CHECKS true)
Some of the checks happen in headers, so you will also have to build your own code with the macro QENGINE_DISABLE_DEBUG_CHECKS set to true. How you set this macro depends on the build-system for your code, but if you use cmake, this will work:
add_definitions(-DQENGINE_DISABLE_DEBUG_CHECKS)
All the API is in the namespace qengine, with a few sub-namespaces used mostly to distinguish the makeHilbertSpace-functions for different physics. For this reason all the examples assume,
using namespace qengine;
The code contains public functions and members that are not documented here. These are considered internal. To help distinguish them, they are generally kept in other namespaces, but not always. Member-functions that start with an underscore are always internal.
This section of the API discusses the fundamental classes for storing numbers.
For scalars, several type-aliases exist to provide abstractions useful for physicists rather than programmers,
real is double.
count_t is unsigned long long.
complex is std::complex<double>.
Note that the imaginary unit can be created using the C++ literal from the standard library,
using namespace std::literals::complex_literals;
auto z = 2.0+3.0i;
The vector-type in QEngine can contain either real (RVec) or complex (CVec) scalars. Both these types have the same constructors,
Vec(); // empty vector of size 0.
Vec(count_t size); // vector with 'size' elements, all set to 0.
Vec(count_t size, real fill); // vector with 'size' elements, all set to fill.
Vec(count_t size, real* data); // vector with 'size' elements from pointer.
Vec(arma::vec); // vector from armadillo-vector.
Vec{real, real, real...}; // vector with the elements passed in initializer list.
Vec(std::vector<real> data); // vector from standard-vector.
The complex CVec has one additional constructor,
CVec(RVec real, RVec imag);
This creates a complex vector with real and imaginary parts taken from two real vectors.
.at(count_t index) -> real& // returns reference to value at 'index'.
.front() -> real& // returns reference to first element.
.back() -> real& // returns reference to last element.
.size() -> count_t // number of elements.
.dim() -> count_t // same as size().
.norm() -> real // L2 norm of the vector.
.absSquare() -> RVec // vector of absolute square of each element.
.hasNan() -> bool // checks if vector contains a NaN.
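As a brief illustration, the constructors and accessors above can be combined as follows (a minimal sketch using only the calls documented in this section),
auto v = RVec{1.0, 2.0, 3.0};   // initializer-list constructor.
v.at(0) = 4.0;                  // .at returns a reference, so elements can be assigned.
auto n = v.norm();              // L2 norm, here sqrt(4*4 + 2*2 + 3*3).
auto c = CVec(v, v);            // complex vector with real and imaginary parts from v.
auto ok = !c.hasNan();          // sanity-check the data.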
Vectors with the same element-type (RVec or CVec) can be merged into one vector,
.append(vec) -> vec&
append(vec, vec) -> vec
Another option is to glue a vector to another vector in-place, or glue them out-of-place:
.glue(vec) -> vec&
glue(vec, vec) -> vec
When gluing, the first element of the second vector is ignored. As an example,
auto v = RVec{1,2,3};
auto v2 = glue(v,invert(v)); // results in vector {1,2,3,2,1}.
dot(vec, vec) -> scalar // L2 inner product
cdot(vec, vec) -> complex // same as dot with the first argument complex conjugated.
conj(vec) -> vec // element-wise complex conjugation.
sum(vec) -> scalar // sums elements.
trapz(vec) -> scalar // trapezoidal integral of vector.
mean(vec) -> scalar // mean value of vector.
max(vec) -> scalar // element with largest value.
min(vec) -> scalar // element with smallest value.
re(cvec) -> rvec // real part of elements.
im(cvec) -> rvec // imaginary part of elements.
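A small sketch combining a few of these free functions (the values in the comments follow directly from the definitions above),
auto v1 = RVec{1.0, 2.0, 3.0};
auto v2 = RVec{4.0, 5.0, 6.0};
auto d  = dot(v1, v2);      // 1*4 + 2*5 + 3*6 = 32.
auto s  = sum(v1);          // 6.
auto m  = mean(v1);         // 2.
auto w  = append(v1, v2);   // {1, 2, 3, 4, 5, 6}.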
Similar to vectors the matrix-type in QEngine can contain either real (RMat) or complex (CMat) scalars. Both these types have the same constructors,
Mat(); // empty matrix of size 0 x 0.
Mat(count_t n_cols, count_t n_rows); // n_cols x n_rows matrix filled with zeros.
Mat(count_t n_cols, count_t n_rows, real/complex fill); // n_cols x n_rows matrix filled with fill.
Mat(count_t n_cols, count_t n_rows, real* data); // n_cols x n_rows matrix from pointer.
Mat(std::vector<vec> data); // matrix with list of vecs as columns.
// matrix from armadillo-matrix requires armadillo is explicitly included.
Mat(arma::mat<real/complex>);
// matrix with the elements in the curly bracket. Each sublist is a row in the matrix.
Mat{{scalar, scalar, ...}, {scalar, scalar, ...}, {...}};
CMat has the additional constructor,
CMat(RMat real, RMat imag);
This creates a complex matrix from matrices containing the real and imaginary parts.
.at(count_t row, count_t col) -> real& // returns reference to value at (row, col).
.col(count_t col_index) -> column-pointer object
.n_elem() -> count_t // number of elements.
.n_rows() -> count_t // number of rows.
.n_cols() -> count_t // number of columns.
.hasNan() -> bool // checks if matrix contains a NaN.
kron(mat, mat) -> mat
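A minimal sketch of matrix construction and access, using only the constructors and functions above,
auto A = RMat{{1.0, 0.0}, {0.0, 2.0}};  // 2x2 matrix; each sublist is a row.
auto a = A.at(0, 1);                    // element at row 0, column 1.
auto B = RMat(2, 2, 1.0);               // 2x2 matrix filled with ones.
auto K = kron(A, B);                    // 4x4 Kronecker product.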
A tuple is a collection that can hold a list of different types. The QEngine tuple is based on the tuple in the C++ standard library. In practice, it is used mostly to contain parameters for function-types (potential function, Hamiltonian function, operator function) and for time-stepping, where it is created implicitly from initializer-lists. For cases where it cannot be created implicitly, it can be created using the function makeTuple,
makeTuple(params...);
The standard-library tuple cannot be constructed implicitly, so this is just a small wrapper to add implicit constructors. The underlying std::tuple is a public member called data,
auto tup = makeTuple(...);
std::get<0>(tup.data);
The normalize-enum is used to indicate to functions whether their output should be normalized or not. It is either NORMALIZE or NORMALIZE_NOT.
This section of the API discusses the fundamental concepts used in simulating quantum dynamics.
The foundation of any simulation is the Hilbert space, as this is where states and operators exist. A state or an operator is assigned to a Hilbert space upon creation, and an operator can only be applied to a state in the same space.
For each physics there exists a type that represents states $|\psi\rangle$. For spatial physics, this is the generic f(x), while second quantized physics have a dedicated State-type.
The operator-interface covers a range of classes that can transform a state into another state $\hat{O}\cdot\left|\psi\right\rangle \rightarrow \left|\chi\right\rangle$. Operators exist for both spatial and second quantized physics, and in both cases they allow the following operations,
Operator*State -> State
expectationValue(Operator, State) -> real
cexpectationValue(Operator, State) -> complex
variance(Operator, State) -> real
cvariance(Operator, State) -> complex
standardDeviation(Operator, State) -> real
Where the expectation value $\langle \hat{O} \rangle = \langle \psi | \hat{O} | \psi \rangle$ and variance $\sigma^2 = \langle \hat{O}^2 \rangle-\langle \hat{O} \rangle^2$ can be either real (for Hermitian operators/observables) or complex (all operators). The standard deviation $\sigma = \sqrt{\sigma^2}$ only works for operators that have real and positive variance (always fulfilled for Hermitian operators), and will otherwise throw an exception.
All of the above functions, except for the multiplication, assume the state to be normalized. If debug-checks are enabled, they will throw an error if the norm of the state is more than 1e-10 different from 1.
Most operators can be diagonalized to find eigenvalues and eigenvectors. This is done with the member-function makeSpectrum, which takes different arguments depending on the operator since the diagonalization uses different algorithms, and returns either a spatial spectrum or a second quantized spectrum. In all cases, the user must specify the index of the highest desired eigenvalue/eigenstate, as it is generally cheaper to find only the lowest N eigenstates. For instance,
.makeSpectrum(count_t largestEigenstate) -> spectrum
This returns a spectrum with the lowest-energy states up to largestEigenstate. With an input 0, it will contain only the ground state and with 1 it will contain the ground state and the first excited state.
Generating the spectrum is useful when both eigenstates and -values or multiple eigenstates are needed. However, for generating linear combinations of eigenstates, a simpler syntax exists using operator[index] on operators. The resulting object only makes sense when used inline in a makeState or makeWavefunction call, where additional parameters to the diagonalization may be passed in. The general syntax is,
auto state = makeState(LinearCombinationOfEigenstates,
shouldNormalize = NORMALIZE_NOT,
diagonalization-parameters...);
An example is,
auto O = makeOperator(...);
auto psi = makeState(O[0]+2*O[1]+3i*O[2], NORMALIZE); // diagonalization is performed.
In this example, the call to makeState infers that only the three lowest eigenstates are required. Hence this code is equivalent to,
auto O = makeOperator(...);
auto spec = O.makeSpectrum(2);
auto psi = normalize(spec.makeLinearCombination({1, 2, 3i}));
Time evolution is done with time-steppers that take the role of the time evolution operator
$$
U(t+\Delta t; t) \left| \Psi(t) \right\rangle = \left| \Psi(t+\Delta t) \right\rangle,
$$
which is found by solving the Schrödinger equation
$$
\frac{dU}{dt} = -\frac{i }{\hbar} H U.
$$
This equation is solved by assuming the Hamiltonian is piecewise constant over time-slices of $\Delta t$ where the Schrödinger equation has the solution $U(t+\Delta t;t) = \exp(\frac{-i H \Delta t}{\hbar})$. This exponential can be approximated over small time-steps using different algorithms.
Time-steppers are objects that apply the time evolution operator over a small time step. To this end they have a step member function, which has different parameters depending on the context.
The general interface for time-steppers is,
makeTimeStepper(...); // create timestepper with varying dt, different interfaces.
makeFixedTimeStepper(...); // create timestepper with fixed dt, different interfaces.
.step(...); // time-evolve over small time step, different interfaces.
.state(); // access the current value of the time-evolved state.
.reset(...); // reset the algorithm, different interfaces.
In general, there are four different cases for the use of the time-stepper, varying vs. fixed $\Delta t$, and constant vs. dynamic Hamiltonians. Fixed $\Delta t$ allows for optimizations and it should be used whenever possible. In a similar manner constant Hamiltonians allow for additional optimizations. For dynamic Hamiltonians, we generally approximate the general solution with the expression $U(t+\Delta t;t) \approx \exp(-\frac{i}{\hbar} \int_t^{t+ \Delta t} H(t') dt' )$, which we solve with some form of midpoint-rule. This gives four make functions,
makeFixedTimeStepper(Alg, psi0); // constant Hamiltonian, fixed stepper.
makeTimeStepper(Alg, psi0); // constant Hamiltonian, non-fixed stepper.
makeFixedTimeStepper(Alg, psi, initialParams); // dynamic Hamiltonian, fixed timestepper.
makeTimeStepper(Alg, psi, initialParams); // dynamic Hamiltonian, non-fixed timestepper.
All these functions initialize the starting wave function and the initial parameters for dynamic Hamiltonians. Note that $\Delta t$ for fixed timesteppers is initialized at the creation of the Alg-object representing the algorithm. This object may have additional parameters, depending on the specific algorithm to be used. The algorithms are specific to particular physics and are explained below. These make functions have overloads using default algorithms for different kinds of physics (see default algorithms for spatial and second quantized physics).
Each kind of time-stepper also has its own step interface,
.step(); // constant Hamiltonian, fixed stepper.
.step(dt); // constant Hamiltonian, non-fixed stepper.
.step(to); // dynamic Hamiltonian, fixed timestepper.
.step(dt, to); // dynamic Hamiltonian, non-fixed timestepper.
These functions time-evolve the state over a time-interval of length $\Delta t$, updating the dynamic Hamiltonian from the current parameters to the to-parameters. The to-parameters and initialParams are Tuples of whatever the dynamic Hamiltonian is callable with. Dynamic Hamiltonians are represented by either spatial Hamiltonian functions or second quantized operator functions.
In addition, there is a cstep-function, which is useful for stepping dynamic systems without updating the Hamiltonian,
.cstep(); // fixed dt.
.cstep(dt); // non-fixed dt.
For steppers with static Hamiltonians, cstep is identical to step.
Finally, there are two different interfaces for the reset-function, depending on whether the Hamiltonian is static or dynamic,
.reset(psi0); // static Hamiltonian: resets the state to psi0.
.reset(psi0, initialParams); // dynamic Hamiltonian: resets to psi0 and H(initialParams).
This section of the API discusses the fundamental concepts used in simulating spatial quantum dynamics. This relates to problems with a Schrödinger equation defined on a continuous spatial dimension. To allow numerical simulations, we discretize the spatial dimension.
The starting point for any one-particle simulation is setting up the associated Hilbert space,
one_particle::makeHilbertSpace(real xLower, real xUpper,
count_t dimension, real kinematicFactor = 0.5);
This creates a numerical grid from xLower to xUpper with dimension grid points. As a user you must specify a sufficiently large and finely spaced grid to ensure a proper simulation. The kinematicFactor defines the unit system as discussed below.
The Hilbert space contains the spatial dimension, which is the starting point for any one-particle simulation,
x() -> f_R(x) // returns the numerical grid as a function of x.
spatialDimension() -> f_R(x) // same as x().
The finite-difference approximation to the kinetic part of the Hamiltonian can also be accessed,
T() const -> H_1p // two different names for the same function.
makeKinetic() const -> H_1p
The one-particle Hilbert space also has a number of additional accessors,
xLower() -> real // minimal grid value.
xUpper() -> real // maximal grid value.
dim() -> count_t // number of grid points.
kinematicFactor() -> real // kinematicFactor.
dx() -> real // grid-spacing.
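For example, a harmonic-oscillator Hamiltonian can be set up directly from these accessors (a minimal sketch; the grid parameters are illustrative),
auto hs = one_particle::makeHilbertSpace(-10.0, 10.0, 256);
auto x  = hs.x();            // the spatial grid as a function of x.
auto H  = hs.T() + 0.5*x*x;  // kinetic term plus harmonic potential.
auto dx = hs.dx();           // grid spacing.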
The starting point for any simulation using the one-dimensional Gross-Pitaevskii equation (GPE) is setting up the associated Hilbert space,
gpe::makeHilbertSpace(real xLower, real xUpper,
count_t dimension, real kinematicFactor = 0.5);
This creates a numerical grid from xLower to xUpper with dimension grid points. As a user you must specify a sufficiently large and finely spaced grid to ensure a proper simulation. The kinematicFactor defines the unit system as discussed below.
The Hilbert space contains the spatial dimension, which is the starting point for any GPE simulation,
x() -> f_R(x) // returns the numerical grid as a function of x.
spatialDimension() -> f_R(x) // same as x().
The finite-difference approximation to the kinetic part of the Hamiltonian can also be accessed,
T() const -> H_1p // two different names for the same function.
makeKinetic() const -> H_1p
The GPE Hilbert space also has a number of additional accessors,
xLower() -> real // minimal grid value.
xUpper() -> real // maximal grid value.
dim() -> count_t // number of grid points.
kinematicFactor() -> real // kinematicFactor.
dx() -> real // grid-spacing.
Hilbert space for spatial two-particle physics in one dimension. This Hilbert space is created by calling,
two_particle::makeHilbertSpace(real xLower, real xUpper,
count_t dimension, real kinematicFactor = 0.5);
The two-particle Hilbert space is the product of two one-particle Hilbert spaces: $H_{12} = H_1 \otimes H_2 $, but because these two Hilbert spaces are in fact identical it is possible to perform a number of optimizations. The kinematicFactor defines the unit system as discussed below.
The x-dimensions of the subspaces,
x1() -> f_R(x1) // numerical grid with the first subspace as a function of x.
spatialDimension1() -> f_R(x1) // same as above.
x2() -> f_R(x2) // numerical grid with second subspace as a function of x.
spatialDimension2() -> f_R(x2) // same as above.
The kinetic Hamiltonians in the subspaces,
T1() const -> H_1p(x1)
makeKinetic1() const -> H_1p(x1)
T2() const -> H_1p(x2)
makeKinetic2() const -> H_1p(x2)
In general, the Schrödinger equation for two particles contains a term for the external potential of the form $V_1(x_1) + V_2(x_2)$. Because we only consider particles on the same spatial dimension with the same potential, we can do some optimizations by instead defining the potential on a "common" x-axis, $V(x_c)$. The x-dimension of this common x-space can be accessed by,
xPotential() -> f_R(xCommon)
spatialDimensionPotential() -> f_R(xCommon)
This dimension is also used to define interactions between the particles.
The kinetic Hamiltonian in the combined Hilbert space $(T = T_1 \otimes I + I \otimes T_2)$,
T() const -> H_2p
makeKinetic() const -> H_2p
The two-particle Hilbert space also has a number of additional accessors,
xLower() -> real // minimal grid value.
xUpper() -> real // maximal grid value.
dim() -> count_t // number of grid points.
kinematicFactor() -> real // kinematicFactor.
dx() -> real // grid-spacing.
The kinematicFactor defines the unit system used in the simulation. This section is taken from the QEngine article. The regular form of the time-dependent Schrödinger equation is $$ i \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V(x)\psi. $$ In order to ease the simulation this equation must be reformulated in rescaled units. This is achieved using a process known as nondimensionalization where quantities in SI-units are written in product form e.g. $a$ becomes $a=\alpha \tilde{a}$ where $\alpha$ carries both the dimension of $a$ and a magnitude while $\tilde a$ is a non-dimensional scaling value. Introducing nondimensionalized units, $$ x = \chi \tilde x, \qquad t=\tau \tilde t,\qquad V=\epsilon \tilde V, \qquad \psi = \xi \tilde\psi. $$ For instance $\chi$ may be on the order of $\mu \text{m}$. The kinematicFactor is then defined as $\kappa = \tau \hbar/2 m \chi^2$. If these choices are substituted into the time-dependent Schrödinger equation one obtains the dimensionless form, $$ i \frac{\partial \tilde \psi}{\partial \tilde t} = -\kappa \frac{\partial^2 \tilde \psi}{\partial \tilde x^2} + V(\tilde x)\tilde \psi. $$ The default choice is $\kappa=0.5$, which is equivalent to $\hbar=m=1$.
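For concreteness, here is a small standalone calculation of $\kappa$ for hypothetical scales (a Rb-87 atom with $\chi$ = 1 micrometre and $\tau$ = 1 millisecond); the numbers are illustrative only and not part of the API,
// Hypothetical scales: chi = 1 micrometre, tau = 1 millisecond, m = mass of Rb-87.
const double hbar = 1.0545718e-34;              // J*s.
const double m    = 1.4447e-25;                 // kg (87 atomic mass units).
const double chi  = 1e-6;                       // m.
const double tau  = 1e-3;                       // s.
const double kappa = tau*hbar/(2.0*m*chi*chi);  // roughly 0.365 for these scales.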
A central type for spatial simulations in the QEngine is discretized functions of x $(f(x))$ belonging to a particular Hilbert space. Fundamentally, this is a vector containing the value of the function for each point in the x-grid. It can describe both real and complex functions, but cannot change between the two. Real and complex f(x) are used to describe wavefunctions, while real f(x) are used to describe potentials. Any operations only work with entities living on the same Hilbert space.
Basic Pointwise Arithmetic (element by element): +, -, *, /. This works with functions of x and with scalars.
Norm: the norm is the integral over all space,
.norm() -> real // integrates the function of x over the entire Hilbert space.
norm(f) -> real // same as above.
.normalize() // makes f.norm() == 1.
normalize(f) // same as .normalize.
Note that these functions are always defined on a grid, so the normalization condition is $$ \sum_{x=a}^b |f(x)|^2 \Delta x = 1, $$ where $a$ and $b$ are the minimal and maximal grid values. This carries a factor of $\Delta x$, so f.norm() != f.vec().norm().
Overlap: $\langle f_1 | f_2 \rangle= \int f_1^{*} f_2 dx$,
overlap(f1,f2) -> complex
Fidelity: $|\langle f_1|f_2 \rangle|^2$,
fidelity(f1,f2) -> real
Infidelity: $1-|\langle f_1|f_2 \rangle|^2$,
infidelity(f1, f2) -> real
Element-Wise Functions
Additional functions:
integrate(f) -> real/complex // integration.
min(f) -> real/complex // entry with smallest norm.
max(f) -> real/complex // entry with largest norm.
mean(f) -> real/complex // mean value.
.absSquare() -> RVec // absolute square.
absSquare(f) // same as above.
.re() -> RVec // real part of elements.
.im() -> RVec // imaginary part of elements.
It is also possible to get the underlying vector directly,
.vec() -> Vec & // returns reference to underlying vector.
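As an example of these functions, the overlap of two displaced Gaussians can be computed as follows (a minimal sketch; it assumes an element-wise exp for functions of x, cf. the element-wise functions above),
auto hs = one_particle::makeHilbertSpace(-10.0, 10.0, 256);
auto x  = hs.x();
auto f1 = normalize(exp(-0.5*x*x));              // Gaussian centred at 0.
auto f2 = normalize(exp(-0.5*(x-1.0)*(x-1.0)));  // Gaussian centred at 1.
auto ov  = overlap(f1, f2);                      // complex overlap integral.
auto fid = fidelity(f1, f2);                     // |<f1|f2>|^2, here < 1.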
There is also a number of specialized functions for creating functions of x, which are useful for defining potentials.
step(x, real stepPos);
box(x, real left, real right);
Well function: $\text{Well}(x,x_l,x_r)=\Theta(x_l-x)+\Theta(x-x_r)$.
well(x, real left, real right);
sstep(x, real left, real hardness);
It is also possible to instantiate a function of x directly from vectors
makeComplexFunctionOfX(Hilbertspace, RVec/CVec) -> f_C(x)
makeRealFunctionOfX(Hilbertspace, RVec) -> f_R(x)
Another option is to load from a DataContainer.
makeComplexFunctionOfX(Hilbertspace, DataContainer["varname"]) -> f_C(x)
makeRealFunctionOfX(Hilbertspace, DataContainer["varname"]) -> f_R(x)
Potential functions represent time-dependent potentials. They are created by,
makePotentialFunction(Callable(InitialParams...) -> potential, InitialParams...);
The first argument must be a callable object that returns a potential and takes InitialParams as arguments. It is convenient to create callables using C++11 lambda expressions. An example is,
auto V_func = [&x](real omega, real x0)
{
auto x_displaced = x-x0;
return 0.5*omega*omega*(x_displaced*x_displaced);
};
auto V = makePotentialFunction(V_func,5.0,0.0);
The initial values are necessary to deduce which types the function can be called with, so it is important to explicitly use floating-point numbers (5.0 instead of 5). To be safe, one can be explicit and write,
auto V = makePotentialFunction(V_func,real(5),real(0));
Potential functions are callable with the arguments they were created with, and also with Tuples of them. If all arguments are reals, the function is also callable with an RVec,
auto V = makePotentialFunction(V_func, real(5.0), real(0.0));
V(5.0, 0.0);
V(makeTuple(5.0,0.0));
V(RVec{5.0,0.0}); // must have correct length.
This type of object represents a Hamiltonian $H = T + V$. It acts as an operator on f(x)'s within the same Hilbert space. A one-particle Hamiltonian is not created from make-functions or constructors but only through members of the one-particle Hilbert space and arithmetic.
auto hilbertspace = one_particle::makeHilbertSpace(-1,1,128);
auto T = hilbertspace.T();
auto x = hilbertspace.x();
auto V = x*x;
auto H = T+V;
Parameter-dependent potentials can be constructed using makePotentialFunction, resulting in Hamiltonian functions.
Diagonalization
A one-particle Hamiltonian is a one-particle operator, so the eigenfunctions can be found using a spectrum (see operator for details):
makeSpectrum(count_t largestEigenvalue) -> spatial spectrum
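For instance, the two lowest eigenpairs of the harmonic oscillator can be obtained as follows (a minimal sketch),
auto hs   = one_particle::makeHilbertSpace(-10.0, 10.0, 256);
auto x    = hs.x();
auto H    = hs.T() + 0.5*x*x;
auto spec = H.makeSpectrum(1);          // ground state and first excited state.
auto E0   = spec.eigenvalue(0);         // lowest eigenvalue.
auto psi0 = spec.eigenFunction(0);      // corresponding eigenfunction.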
This is the non-linear Hamiltonian from the Gross-Pitaevskii equation, $H = T + V + \beta |\psi|^2$. It also acts as an operator on f(x)'s from the same Hilbert space. This type of Hamiltonian must be constructed using the GPE term that represents the nonlinearity $\beta |\psi|^2$,
auto gpeTerm = GpeTerm{real beta};
As for the other Hamiltonians, the GPE Hamiltonian must be created from the Hilbert space combined with arithmetic. As an example,
auto hilbertspace = gpe::makeHilbertSpace(-1,1,128);
auto T = hilbertspace.T();
auto x = hilbertspace.x();
auto V = x*x;
auto beta = 0.1;
auto H = T+V+GpeTerm{beta};
Parameter-dependent potentials can be constructed using makePotentialFunction, resulting in Hamiltonian functions.
Diagonalizing non-linear operators is quite challenging and requires more specialized algorithms with a number of fudge-factors. The QEngine implements an optimal damping algorithm, which has the interface,
makeSpectrum(count_t largestEigenvalue,
GROUNDSTATE_ALGORITHM groundstate_algorithm =
GROUNDSTATE_ALGORITHM::OPTIMAL_DAMPING,
real convergenceCriterion = 1e-8,
count_t maxIterationsPerEigenstate = 12000,
std::vector<real> mixingValues = { 1e-2, 1e-3, 1e-3, ... , 0.5*1e-7 })
-> spatial spectrum
The default values for the fudge-factors are somewhat arbitrary; they are chosen mostly because they work for the use-cases in the example-programs. We highly recommend checking that your states are true eigenfunctions by verifying that they have a low energy variance, as sketched below.
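A minimal sketch of that check, assuming H is the GPE Hamiltonian built in the example above: a true eigenstate has (nearly) zero energy variance,
auto spec = H.makeSpectrum(0);          // ground state only, default fudge-factors.
auto psi0 = spec.eigenFunction(0);
auto var  = variance(H, psi0);          // should be close to zero for a true eigenstate.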
This object represents the Hamiltonian $$ H = T_1 \otimes I + I \otimes T_2 + V(x_1) \otimes I + I \otimes V(x_2) + \delta(x_1-x_2) \cdot V_i(x), $$ which is a little verbose but accurate. There is both a common shared potential and a point-interaction with space-dependent strength. The shared potential is explained in the section on the Two-particle Hilbert space. The point-interaction is a function of x on a special x-space for point-interactions. It is created through,
makePointInteraction(f(xPotential))
It is not compatible with other spaces, although it has all the same functionality as the other functions of x. So while this is possible,
auto x_common = hilbertspace.xPotential();
auto V = x_common*x_common;
auto V_interaction = makePointInteraction(0.1*V);
auto V_interaction2 = 2*V_interaction*V_interaction;
this is not:
auto x_common = hilbertspace.xPotential();
auto V = x_common*x_common;
auto V_interaction = makePointInteraction(0.1*V);
// error: trying to multiply functionsOfX from different hilbertspace.
auto V_interaction2 = 2*V_interaction*V;
Note that potentials and point-interactions can be summed, but the resulting object can only be added to a Hamiltonian or additional potential terms. This is useful in the case of potential functions,
auto V_func = [xCommon](real a)
{
auto V_ext = a*xCommon*xCommon;
auto V_int = makePointInteraction(0.1*xCommon*xCommon);
return V_ext + V_int;
};
As for the other Hamiltonians, a Two-Particle Hamiltonian must be created from the Hilbert space together with arithmetic. An example is,
auto hilbertspace = two_particle::makeHilbertSpace(-1,1,128);
auto T = hilbertspace.T();
auto x = hilbertspace.xPotential();
auto V = x*x;
auto V_int = two_particle::makePointInteraction(-0.1*V);
auto H = T+V+V_int;
Here, the two particles are confined in a harmonic trap, and they also have a space-dependent point-interaction.
A two-particle Hamiltonian can be diagonalized resulting in a spectrum,
makeSpectrum(count_t largestEigenvalue,
count_t nExtraStates = 0,
real tolerance = 0)
-> spatial spectrum
The two-particle Hamiltonian is represented by a general sparse matrix, which uses a different diagonalization algorithm than the single-particle Hamiltonian. Often, it is necessary to calculate more eigenvalues/eigenstates than are actually needed to get a correct result; the value nExtraStates allows for this. It can be considered a fudge-factor and will not affect the number of eigenstates in the generated spectrum. You can always verify whether your state is a true eigenstate by calculating the energy variance. Raising the tolerance may give faster results but with less accuracy; it is normally not recommended.
The entities belonging to the subspaces of the two-particle Hilbert space can be used individually to perform regular one-particle physics. For instance,
auto hilbertspace = two_particle::makeHilbertSpace(-1,1,128);
auto T1 = hilbertspace.T1();
auto x1 = hilbertspace.x1();
auto V1 = 0.5*x1*x1;
auto H1 = T1+V1;
auto psi1 = makeWavefunction(H1[0]+H1[1], NORMALIZE);
They can also be combined to create two-particle wave functions, where tensor-products are done implicitly,
auto hilbertspace = two_particle::makeHilbertSpace(-1,1,128);
...
auto psi1 = makeWavefunction(H1[0]+H1[1], NORMALIZE);
auto psi2 = makeWavefunction(H2[0]+H2[1], NORMALIZE);
auto psi = psi1*psi2;
auto psi_a = psi2+psi1;
In both cases we create new functions such that respectively $\psi(x_1, x_2) = \psi_1(x_1)\cdot \psi_2(x_2)$ and $\psi_a(x_1, x_2) = \psi_1(x_1)+\psi_2(x_2)$. Note, that the order of elements is irrelevant, it only matters which Hilbert space they come from. The resulting objects are still functions of x just defined on the combined Hilbert space, meaning that all functions, such as overlap, still work.
It is even possible to create two-dimensional Hamiltonians by summing two single-particle Hamiltonians,
auto hilbertspace = two_particle::makeHilbertSpace(-1,1,128);
...
auto H_2D = H1 + H2;
The resulting Hamiltonian can be diagonalized and applied to two-dimensional wave functions, but it cannot currently be used in time evolution. Results of the diagonalization can be used in two-particle simulation, though.
Hamiltonian functions are constructed by adding a potential function to a static Hamiltonian. They are callable in the same way as potential functions but return Hamiltonians, and they take their initial values from the potential functions. This allows for creating time-dependent Hamiltonians. As an example,
auto hilbertspace = two_particle::makeHilbertSpace(-1,1,128);
auto T = hilbertspace.T();
auto x = hilbertspace.xPotential();
auto omega = 1.0;
auto V_func = [x, omega](real t){
auto x_displaced = x-sin(t);
auto V_ext = 0.5*omega*omega*(x_displaced*x_displaced);
auto V_int = makePointInteraction(0.5*t*t*x*x);
return V_ext + V_int;
};
auto V = makePotentialFunction(V_func,real(0.0));
auto H = T+V;
The example creates a Hamiltonian with an oscillating external harmonic oscillator and an interaction that is stronger the further the particles are from the trap center.
This type is returned from the makeSpectrum functions for spatial operators. It is a collection of the N lowest eigenvalues and eigenfunctions of the operator. Here, we use the terminology eigenfunction to make it clear that these eigenstates are functions of x and not vectors. It has the following member functions,
.eigenvalue(count_t i) -> complex // get the i'th eigenvalue.
.eigenFunction(count_t index) -> f_C(x) // get the i'th eigenfunction.
// index of the highest calculated eigenstate and -function (N-1).
.largestEigenstate() -> count_t
It is also possible to build linear combinations of eigenfunctions $f(x) = \sum_{n=0}^{N} c_n f_n(x)$,
.makeLinearCombination(CVec c) -> f_C(x)
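For example, an equal superposition of the ground and first excited states (a minimal sketch, assuming a Hamiltonian H as in the examples above),
auto spec = H.makeSpectrum(1);                                   // ground + first excited state.
auto f = normalize(spec.makeLinearCombination(CVec{1.0, 1.0}));  // (f_0 + f_1)/sqrt(2).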
The general algorithm for time evolving spatial systems is the splitstep Fourier algorithm. Similarly to timesteppers there are four different interfaces for creation,
splitstep(H); // constant hamiltonian, non-fixed dt.
splitstep(H, dt); // constant hamiltonian, fixed dt.
splitstep(H, initialParams); // dynamic hamiltonian, non-fixed dt.
splitstep(H, dt, initialParams); // dynamic hamiltonian, fixed dt.
These functions work with all three spatial Hamiltonians (one-particle, two-particle, and GPE), and can be passed into the corresponding makeTimeStepper/makeFixedTimeStepper-function.
The splitstep-algorithm can also be modified in three ways. The first, normalizing, renormalizes the state to unit norm after each step,
normalizing(splitstep(...));
The second, renormalizing, is similar but normalizes to the norm-value from before the step rather than to unit normalization,
renormalizing(splitstep(...));
The third modification adds imaginary boundaries to the simulation. Practically, the wave function is multiplied with $\exp(V |dt|)$, which creates a decay in the regions where the boundary-potential is non-zero and negative. This can be used to create absorbing boundary conditions instead of the default circular boundaries of the split-step Fourier algorithm.
addImagBounds(splitstep(...), potential);
addImagBounds(splitstep(...), potential, dt); // only fixed timesteppers.
The modifiers can also be chained,
normalizing(addImagBounds(splitstep(...), potential));
which will normalize after applying the imaginary potential. An example for a full non-default timestepper is,
auto hs = one_particle::makeHilbertSpace(-1,1,128);
auto V = makePotentialFunction([x = hs.x()](real x0)
{
return 0.5*(x-x0)*(x-x0); // driven harmonic oscillator.
}, 0.0);
auto H = hs.T() + V;
auto H0 = H(0.0);
auto psi0 = makeWavefunction(H0[0], NORMALIZE);
auto dt = 0.001;
auto initParams = makeTuple(0.0);
auto x = hs.x();
// must be negative to induce decay. Cannot be imaginary or complex despite the name.
auto eatingBounds = -5*pow(x,8);
auto stepper = makeFixedTimeStepper(
normalizing(addImagBounds(splitstep(H, dt, initParams), eatingBounds, dt)),
psi0,
initParams);
for(auto i = 1u; i<=1000; ++i)
{
// shake the harmonic oscillator back and forth sinusoidally.
auto x0_to = std::sin(i*dt);
// tuple is created implicitly from initializer list.
stepper.step({x0_to});
}
The default algorithm for all three spatial physics is,
addImagBounds(splitstep(...), defaultImagPot);
The default boundary-potential is zero in the center 3/4 of the x-space, then gets stronger as $x^2$. It is continuous and once differentiable. This gives the following overloads for timeSteppers using spatial physics,
// dynamic Hamiltonian
makeFixedTimeStepper(H, psi0, dt, Tuple initialParams);
makeTimeStepper(H, psi0, Tuple initialParams);
// dynamic Hamiltonian, only if parameters are all real numbers
makeFixedTimeStepper(H, psi0, dt, RVec initialParams);
makeTimeStepper(H, psi0, RVec initialParams);
// constant or dynamic Hamiltonian.
// Initial parameters for dynamic Hamiltonian taken from the Hamiltonian function's initial parameters.
makeFixedTimeStepper(H, psi0, dt);
makeTimeStepper(H, psi0);
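A minimal sketch of default time evolution with a constant Hamiltonian, using the overloads above (the grid and step parameters are illustrative),
auto hs   = one_particle::makeHilbertSpace(-10.0, 10.0, 256);
auto x    = hs.x();
auto H    = hs.T() + 0.5*x*x;
auto psi0 = makeWavefunction(H[0] + H[1], NORMALIZE);  // superposition of two eigenstates.
auto dt   = 0.001;
auto stepper = makeFixedTimeStepper(H, psi0, dt);      // default split-step algorithm.
for (auto i = 0u; i < 1000; ++i)
{
stepper.step();                                        // fixed dt, constant Hamiltonian.
}
auto fid = fidelity(psi0, stepper.state());            // overlap with the initial state.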
Hilbert space for second quantized or simply discretized n-level physics.
n_level::makeHilbertSpace(count_t nLevels);
The dimension of the Hilbert space can be accessed by,
.nLevels() -> count_t
This is a specialized Hilbert space for Bose-Hubbard type physics,
bosehubbard::makeHilbertSpace(count_t nParticles, count_t nSites);
Site-hopping operator in the Bose-Hubbard Hamiltonian, $\sum_i a_i^\dagger a_{i+1} + a_i a_{i+1}^\dagger$,
.makeHoppingOperator(bool usePeriodicBoundaryConditions = true) -> Operator;
// Same using a dense matrix. Only useful for very small simulations (<5 particles and sites).
.makeDenseHoppingOperator(bool usePeriodicBoundaryConditions = true);
On-site interaction operator in Bose-Hubbard Hamiltonian, $\sum_i n_i(n_i-1)$, where $n_i$ is the number-operator for site $i$,
.makeOnSiteOperator() -> Operator
Transform a vector describing the on-site potential so that it can be used in a Bose-Hubbard simulation,
.transformPotential(RVec potentialAtSites) -> Operator
Here the vector must have length nSites, where each entry describes the value of the potential at this site.
Make fundamental states of the Bose-Hubbard simulation
.makeMott() -> State
.makeSuperFluid() -> State
Build single-particle density matrix for a state,
.singleParticleDensityMatrix(State) -> CMat
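Combining these factory-functions gives the usual Bose-Hubbard Hamiltonian $H = -J\sum_i (a_i^\dagger a_{i+1} + \mathrm{h.c.}) + \frac{U}{2}\sum_i n_i(n_i-1) + \sum_i V_i n_i$. A minimal sketch; J, U, and the site potential are illustrative values, and the usual operator arithmetic is assumed,
auto hs      = bosehubbard::makeHilbertSpace(3, 3);        // 3 particles on 3 sites.
auto J       = 1.0;                                        // hopping strength.
auto U       = 0.5;                                        // on-site interaction strength.
auto V       = hs.transformPotential(RVec{0.0, 0.1, 0.0}); // small bias on the middle site.
auto H       = -J*hs.makeHoppingOperator() + 0.5*U*hs.makeOnSiteOperator() + V;
auto mott    = hs.makeMott();                              // reference states.
auto superfl = hs.makeSuperFluid();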
Unlike spatial simulations, second quantized states are inherently discrete. This makes the second quantized state more or less a wrapper around a vector.
A second quantized state can be initialized directly from a vector or an initializer list. It is also possible to use a Fock State,
makeState(HilbertSpace, Vector, shouldNormalize = NORMALIZE_NOT);
makeState(HilbertSpace, {reals/complexs...}, shouldNormalize = NORMALIZE_NOT);
makeState(HilbertSpace, FockState, shouldNormalize = NORMALIZE_NOT);
.norm() -> real // L2 norm of the state, no dx factor here.
norm(f) -> real // same as above.
.normalize(); // makes f.norm() == 1.
normalize(f); // same as above.
.at() -> complex & // can be used to both get and set data.
.vec() -> CVec & // underlying vector.
.sum() -> complex // sum over a state.
sum(State) -> complex
.conj() -> State // complex conjugate.
conj(State) -> State
.absSquare() -> RVec // norm square.
absSquare(f) -> RVec
.re()/.im() -> RVec // real and imaginary parts.
re(f)/im(f) -> RVec
overlap(f1,f2) -> complex
fidelity(f1,f2) -> real
infidelity(f1, f2) -> real
Unlike spatial Hamiltonians, second quantized operators do not have a fixed form, which gives larger freedom for their definition.
Creation generally happens through the makeOperator-function, but the possibilities are slightly different for the two different types of physics.
The n-level simulation is designed for Hilbert spaces with a low dimension. Operators are expected to be constructed directly from matrices.
makeOperator(hilbertspace, RMat/CMat); // using matrix.
makeOperator(hilbertspace, {{...},{...},...}); // using initializer-list for matrix.
// using generalized Fock-operator applied to the Hilbert space.
makeOperator(hilbertspace, FockOperator);
In addition, there are four factory-functions for the Pauli-operators, which are only useful for two-level systems,
makePauliI(); // 2x2 identity.
makePauliX(); // {{0, 1}{1, 0}}.
makePauliY(); // {{0, -i}{i, 0}}.
makePauliZ(); // {{1, 0}{0, -1}}.
The simplicity of these types allows their Hilbert space to be inferred, which is an n-level Hilbert space with $n=2$.
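For example, a driven two-level (Rabi) Hamiltonian can be assembled from the Pauli factories (a minimal sketch; delta and omega are illustrative parameter names, not part of the API, and the usual operator arithmetic is assumed),
auto hs    = n_level::makeHilbertSpace(2);
auto delta = 1.0;                                   // detuning (illustrative).
auto omega = 0.5;                                   // Rabi frequency (illustrative).
auto H     = 0.5*delta*makePauliZ() + 0.5*omega*makePauliX();
auto psi0  = makeState(hs, {1.0, 0.0});             // start in the first basis state.
auto z     = expectationValue(makePauliZ(), psi0);  // +1 for this state.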
The Bose-Hubbard Hilbert space is a special case, since the states are ordered for a more efficient representation. Due to this specialization the only operators provided are those for building the Bose-Hubbard Hamiltonian.
It is possible to create additional operators, but due to the large size of Bose-Hubbard Hilbert spaces it does not make sense to create them directly from matrices. Creation from Fock operators is possible but not very efficient,
makeOperator(hilbertspace, FockOperator);
makeDenseOperator(hilbertspace, FockOperator);
Bose-Hubbard operators are generally represented by sparse matrices. For small systems, dense operators are more efficiently created by the makeDenseOperator function.
Due to the sparsity, makeSpectrum has two extra parameters,
.makeSpectrum(count_t nEigenStates, count_t extraStates = 0, real tol = 0.0);
Generally, if the diagonalization fails, extraStates should be increased, typically by several states. This will not increase the number of states in the resulting spectrum.
The same parameters are also added to the makeState-function,
makeState(LinearCombinationOfEigenstates, normalize = NORMALIZE_NOT,
extraStates = 0, tol = 0.0);
makeOperatorFunction(Callable, initParams...);
This creates a callable object returning an operator, which is initialized with initParams.
This type is returned from the makeSpectrum functions for second quantized operators. It is a collection of the N lowest eigenvalues and eigenstates of the operator. It has the following member functions,
.eigenvalue(count_t i) -> complex // get the i'th eigenvalue.
.eigenState(count_t index) -> State // get the i'th eigenstate.
// index of the highest calculated eigenvalue and eigenstate (N-1).
.largestEigenstate() -> count_t
It is also possible to build linear combinations of eigenstates, $|\psi\rangle = \sum_{n=0}^{N} c_n |\psi_n\rangle$,
.makeLinearCombination(CVec c) -> State
There are two time evolution algorithms for second quantized states: Direct Exponentiation and the Krylov-Lanczos method.
Direct exponentiation is the simplest form of time evolution since it directly computes $\exp(-\frac{i}{2\hbar}(H(\text{from})+H(\text{to})) dt)$. The interface is,
directExponentiation(H); // operator Hamiltonian non-fixed dt.
directExponentiation(H, dt); // operator Hamiltonian fixed dt.
directExponentiation(H, Tuple initialParams); // operatorFunction Hamiltonian, non-fixed dt.
directExponentiation(H, dt, Tuple initialParams); // operatorFunction Hamiltonian, fixed dt.
As exponentiating a matrix is an expensive operation, this is only recommended for small systems.
The second algorithm is the Krylov algorithm, which uses a Krylov subspace method to approximate the matrix exponential. The interface is:
krylov(H, count_t krylovOrder); // operator as Hamiltonian, non-fixed dt.
krylov(H, count_t krylovOrder, dt); // operator as Hamiltonian, fixed dt.
krylov(H, count_t krylovOrder, Tuple initParams ); // operatorFunction as Hamiltonian, non-fixed dt.
krylov(H, count_t krylovOrder, dt, Tuple initParams); // operatorFunction as Hamiltonian, fixed dt.
The krylovOrder defines how many basis-states are used in the Krylov subspace method. It is a problem-dependent parameter, but five is usually a good starting point.
For n-level the default is direct exponentiation,
// dynamic Hamiltonian
makeFixedTimeStepper(H, psi0, dt, Tuple initialParams);
makeTimeStepper(H, psi0, Tuple initialParams);
// dynamic Hamiltonian, only if parameters are all real numbers
makeFixedTimeStepper(H, psi0, dt, RVec initialParams);
makeTimeStepper(H, psi0, RVec initialParams);
// constant or dynamic Hamiltonian
// Initial parameters for dynamic Hamiltonian taken from the Hamiltonian function's initial parameters
makeFixedTimeStepper(H, psi0, dt);
makeTimeStepper(H, psi0);
For Bose-Hubbard the default is Krylov. As Krylov requires an additional parameter, the default call to makeTimeStepper is slightly different for Bose-Hubbard systems,
// dynamic Hamiltonian
makeFixedTimeStepper(H, psi0, count_t krylovOrder, dt, Tuple initialParams);
makeTimeStepper(H, psi0, count_t krylovOrder, Tuple initialParams);
// dynamic Hamiltonian, only if parameters are all real numbers
makeFixedTimeStepper(H, psi0, count_t krylovOrder, dt, RVec initialParams);
makeTimeStepper(H, psi0, count_t krylovOrder, RVec initialParams);
// constant or dynamic Hamiltonian.
// Initial parameters for dynamic Hamiltonian taken from the Hamiltonian function's initial parameters
makeFixedTimeStepper(H, psi0, count_t krylovOrder, dt);
makeTimeStepper(H, psi0, count_t krylovOrder);
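A minimal sketch of Bose-Hubbard time evolution with the defaulted Krylov stepper, assuming hs and H from the Bose-Hubbard Hamiltonian example above,
auto psi0        = hs.makeSuperFluid();
auto dt          = 0.01;
auto krylovOrder = 5u;                              // five is usually a good starting point.
auto stepper     = makeFixedTimeStepper(H, psi0, krylovOrder, dt);
for (auto i = 0u; i < 100; ++i)
{
stepper.step();                                     // constant Hamiltonian, fixed dt.
}
auto rho = hs.singleParticleDensityMatrix(stepper.state());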
For second quantized problems it often makes sense to consider abstract, Hilbert space-free states and operators in an abstract fock-basis. These can be created and manipulated before being applied to an actual physical context. The fock-state can be created from:
fock::state{count_t... quanta};
So e.g. the state $\left| 0, 3, 1 \right\rangle$ can be created from
auto state = fock::state{0,3,1};
These fock-states allow addition and subtraction, and multiplication with scalars. This allows us to create linear combinations of fock-states:
auto state = fock::state{1,1,1} + 1.5*fock::state{0,1,3}+3.2i*fock::state{1,2,1};
The fock-states can be used with the makeState-function to create states for second quantized Hilbert spaces:
auto fockstate = fock::state{1,1,1};
auto state = makeState(hilbertspace, fockstate);
Finally, such states can be compared using direct comparison operators (== and !=) and using the overlap-function.
In addition to the states, we have operators that act on them:
auto create = fock::c(i);
auto state = fock::c(1)*fock::state{0,0,0}; // same as fock::state{0,1,0}
auto annihilate = fock::a(i);
auto state = fock::a(1)*fock::state{0,1,0}; // same as fock::state{0,0,0}
auto count = fock::n(i);
auto state = fock::n(1)*fock::state{0,2,0}; // same as 2*fock::state{0,2,0}
auto zero = fock::zero();
auto state = fock::zero()*fock::state{0,1,3}; // same as fock::state{}
auto identity = fock::I();
auto state = identity*fock::state{0,1,3}; // same as fock::state{0,1,3}
These operators can be added, subtracted, and multiplied (though not divided) with each other and scalars. This allows construction of complex operators from simpler ones, which can then be applied to a Hilbert space to create a matrix representation,
using namespace fock;
auto fockop = c(0)*a(1)+a(0)*c(1);
auto op = makeOperator(hilbertSpace, fockop);
In addition, fock-operators work with a pow-function:
auto op = pow(fock::c(1), 4); // create 4 quanta in site 1
Note that fock-operators do not currently allow the functions expectationValue, variance, or standardDeviation.
The DataContainer is a class to store, save, and load data. It can store the basic data types (scalars, vectors, matrices) as well as strings and std::vectors of any of these types. It can save to json-format, and to .mat-format if built with matio. Creation:
DataContainer(); // creates empty datacontainer.
The DataContainer uses map-syntax to set and get values,
dataContainer["vec"] = vec;
vec = dataContainer["vec"];
If the field "vec" does not exist in the datacontainer then it is automatically created. It is also possible to append to existing fields,
dataContainer["vec"].append(val);
If val is a function of x, it is necessary to get the underlying vector by calling .vec(). As an example,
dataContainer["psi"].append(psi.vec());
When appending a scalar to a vector it will place it at the end. When appending a vector to a matrix (or a vector) with the same number of rows, it will add the vector as a column. In all other cases, it will append it to what corresponds to a cell-array in MATLAB. The data will also be saved to cell-arrays when saving to mat-file.
It is also possible to add strings to the datacontainer using the syntax,
dataContainer["string"] = std::string("exampleString");
Besides assignment, the DataContainer can return stored data in three ways. The first is the named getters,
dataContainer.getReal(std::string) -> real
dataContainer.getRVec(std::string) -> RVec
dataContainer.getRMat(std::string) -> RMat
dataContainer.getComplex(std::string) -> complex
dataContainer.getCVec(std::string) -> CVec
dataContainer.getCMat(std::string) -> CMat
The second is the get-member function,
auto vec = dataContainer["vec"].get<RVec>();
The RVec in this example can be replaced with std::string, real, complex, CVec, RMat, or CMat. It can also be a std::vector of any of these types. The third is direct conversion,
auto val = real(dataContainer["real"]);
This currently only works for real, complex, and std::string.
DataContainers can save/load all their data to/from .json-format or .mat-format when the QEngine is built with matio. The syntax for save/load is,
dataContainer.save("filename.json");
dataContainer.save("filename.mat");
dataContainer.save("filename", "json");
dataContainer.save("filename", "mat");
dataContainer.save("filename"); // defaults to json.
dataContainer.load("filename.json");
dataContainer.load("filename.mat");
dataContainer.load("filename", "json");
dataContainer.load("filename", "mat");
dataContainer.load("filename"); // defaults to json.
When loading, any existing fields in the datacontainer are cleared. When saving, an existing file is overwritten. Currently, we do not support appending to a file.
A few additional member functions are,
.hasVar(std::string varname) -> bool; // check if particular variable exists in datacontainer.
.nVars() -> count_t; // check number of vars in datacontainer.
.clear(); // empty the datacontainer.
// returns list of all variable names stored in the datacontainer.
.varNames() -> std::vector<std::string>
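Putting the pieces together, a typical save workflow looks like this (a minimal sketch; stepper stands in for a time-stepper from a simulation loop as in the sections above),
DataContainer dc;
dc["dt"] = 0.001;                              // fields are created on first assignment.
dc["label"] = std::string("run-1");
for (auto i = 0u; i < 100; ++i)
{
stepper.step();
dc["psi"].append(stepper.state().vec());       // one column per time step.
}
dc.save("results.json");                       // or "results.mat" when built with matio.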
The Control class represents a discretized multi-valued function of time $\mathbf{u}(t)$. It is used to represent the physical parameters that control a system. A multi-valued control is used for scenarios with multiple control functions, and paramCount is the number of controls.
This section describes different functions for creating controls.
makeControl(std::vector<RVec> params, real dt);
Each entry in the std::vector is one of the control parameters as functions of time.
makeLinearControl(real from, real to, count_t size, real dt);
This creates a linear control [from, to] with size elements, time step dt, and paramCount 1.
makePieceWiseLinearControl(std::vector<std::pair<count_t, real>> fixedPoints, real dt);
Create a piece-wise linear control with turning-points given as (index, value) using linear interpolation.
Control::zeros(count_t size, count_t paramCount, real dt); // zero-filled control
It is also possible to create controls using mathematical functions, similarly to a function of x.
makeTimeControl(count_t size, real dt, real t0 = 0.0);
Creates a linspace-control starting at $t_0$ with size points spaced by dt, and paramCount 1. This is useful in combination with mathematical expressions. There is also a number of specialized functions for creating controls, analogous to the ones for functions of x used for defining potentials.
step(Control t, real stepPos);
box(Control t, real left, real right);
well(Control t, real left, real right);
It is also possible to replace these functions with softbounds versions where the step function is replaced by a smooth sigmoid $\Theta_s=\frac{1}{1+\exp(-h (t-t_0))}$ where h is the hardness parameter. For $h=0$ there is no jump and the jump becomes discontinuous as $h$ approaches infinity.
sstep(Control t, real left, real hardness);
Controls support a number of standard mathematical operations.
means(Control) -> RVec // means of individual params.
mean(Control) -> real // mean of means.
abs(Control) -> Control
maxs(Control) -> RVec // maximum of individual params.
max(Control) -> real // max of maxs.
mins(Control) -> RVec // minimum of individual params.
min(Control) -> real // min of mins.
pow(Control, count_t exponent);
Like vectors, controls can be glued and appended. Append will simply place the back after the front, while glue will do the same but ignore the first element of the back, asserting that the first element of the back is the same as the last element of the front.
.append(Control back);
append(Control front, Control back) -> Control;
.glue(Control back);
glue(Control front, Control back) -> Control;
invert(Control) -> Control; // flips the control in time.
Controls can also be combined parameter-by-parameter. This creates a control that has all the parameters of the individual controls.
makeControl(std::vector<Control> controls);
This allows
auto param1 = sin(t);
auto param2 = cos(t);
auto u = makeControl(std::vector<Control>{param1, param2});
It is also possible to compute the discretized $L^2$ and $H^1$ norms of controls. These are given as $$ L^2 : || f || = \sum_i |f_i|^2 dt $$ $$ H^1 : || f || = \sum_i |\dot{f}_i|^2 dt $$ The $H^1$-norm is approximated by sum(diff(f)^2)/dt, where diff is the differences between neighbouring elements in the vector describing f.
// L2-norms per control parameter
.normsL2() -> RVec;
normsL2(Control) -> RVec;
// sum of L2-norms per control parameter
.normL2() -> real;
normL2(Control) -> real;
// H1-norms per control parameter
.normsH1() -> RVec;
normsH1(Control) -> RVec;
// sum of H1-norms per control parameter
.normH1() -> real;
normH1(Control) -> real;
.get(count_t n) -> RVec
.set(count_t n, RVec);
.getFront() -> RVec
.setFront(RVec);
.getBack() -> RVec
.setBack(RVec);
.at(count_t index) -> ExpressionTemplate
This accessor returns an expression-template that can be used similarly to an RVec-reference. However, it does not work well with auto.
.mat() -> RMat&
Different parameters are the rows and different time-points are the columns.
A control also has additional information known as metaData,
.dt() -> real // value of the time step used in this control.
.setDt(real dt); // redefine the time step.
.size() -> count_t // number of time-points.
.paramCount() -> count_t // number of different control parameters.
.metaData() -> ControlMetadata // struct with members paramCount, controlSize, and dt.
// value of time at the last point of the control.
.endTime() -> real
The power of creating controls comes when all the above functionality is combined. Starting with a time-control one can create controls in a mathematical manner,
auto t = makeTimeControl(101, 0.01); // 0.0 to 1.0 in 100 steps.
auto t1 = makeTimeControl(51, 0.01, 0.0); // first half of t.
auto t2 = makeTimeControl(51, 0.01, t1.endTime()); // second half of t.
// simple, smooth mathematical functions of time:
auto p1 = 3.0*sin(PI*t);
// piecewise by gluing the parts together
auto p2 = glue(cos(PI*t1), invert(cos(PI*t1)));
// also piecewise defined, with step-functions. Boxes and wells can also be used for this
auto p3 = (1-step(t, 0.5))*tan(PI*t) + step(t, 0.5)* tanh(PI*t);
auto p4 = makePieceWiseLinearControl({{0, 0.0},{23, 0.5},{76, 0.6}, {100, 0.0}}, t.dt());
auto u = makeControl({p1,p2,p3,p4}); // combine the parameters.
For some algorithms we do not optimize directly on the control, but parametrize it in a basis of smooth functions as $u(t) = u_0(t) + \sum_m c_m f_m(t)$ where $c_m$ are the basis coefficients. Instead of optimizing the full control, we optimize the basis coefficients, which are contained in a BasisControl class. The basis functions $f_m(t)$ are controls collected in a basis object. A basis control has a number of member functions,
.size() -> count_t // number of time steps.
.paramCount() -> count_t // number of controls.
.mat() -> RMat & // get underlying matrix.
dot(BasisControl, BasisControl) -> real // sum of point-wise products of the two controls.
// Same as .at for control, which can be used as RVec but not auto.
.at(count_t index) -> Expression template
Multiplying a BasisControl with a Basis evaluates the expression $\sum_m c_m f_m(t)$ without $u_0$,
BasisControl*Basis -> Control
A Basis is a collection of controls. This can be used for parametrizing the control as, $$ u(t) = u_0(t) + \sum_m c_m f_m(t). $$ Here ${f_m(t)}$ is the set of basis controls. A Basis can turn a BasisControl into a regular control. A Basis is constructed from a list of controls,
Basis{std::vector<Control>};
Basis{Control1, Control2, ...};
All controls must have the same size, paramCount, and dt. Basically, a basis is only meant to be constructed and then passed into an algorithm, but it does have a few accessors and manipulators,
.size() -> count_t // number of basis-elements.
.controlSize() -> count_t // size of individual controls.
.paramCount() -> count_t // number of control-parameters per control.
.dt() -> real; // time step.
It is possible to perform elementwise multiplication of all elements in the basis with some control.
Basis*=Control;
Control*Basis -> Basis
This can be used to define shape-functions $S(t)$. A shape-function is used to force a general envelope for all basis-functions, $f_m \rightarrow S(t) f_m $
auto t = makeTimeControl(...);
//assume r1 and r2 are real numbers generated in some way
auto u1 = sin(r1*t);
auto u2 = cos(r2*t);
auto shapeFunction = sbox(t, t0, tEnd, 5); // sigmoid box, forces controls to approach zero at both ends
auto basis = shapeFunction*Basis{u1, u2};
Problems connect the physics simulation with the optimization algorithms, so the first step toward optimal control is to create an optimization problem. In the current version of the QEngine, this means a state-transfer problem, possibly with regularization and/or bounds added using operator +,
auto problem = makeStateTransferProblem(...)
+ gamma*makeRegularization(...)
+ sigma*makeBoundaries(...);
The resulting problem will have the same member functions as a vanilla state-transfer problem, but the cost and gradient-functions will be the sum of the costs and gradients from each problem.
The state-transfer problem represents the cost-function, $$ J = \frac{1}{2} \left(1- |\langle \psi_t | \psi(T) \rangle|^2 \right) = \frac{1}{2} \left(1- | \langle \psi_t | U[H(\mathbf{u})] |\psi_0 \rangle|^2 \right), $$ where $\psi_t$ is the target state, $\psi(T)$ is the final state, $\psi_0$ is the initial state, and $U$ is the time evolution operator. Similar to time evolution, there is a generic way of creating state-transfer problems and a set of defaults for each type of physics. The generic syntax is,
makeLinearStateTransferProblem(Alg forwardAlgorithm, Alg backwardAlgorithm,
dHdu, State initialState, State targetState, Control initialControl);
The arguments are as follows,
initialControl: The control that the algorithm will use as a starting point for optimization.
The nonlinear nature of the Gross-Pitaevskii equation causes some complications, so there is a special nonlinear problem,
makeGpeStateTransferProblem(Alg forwardAlgorithm, Alg backwardAlgorithm,
dHdu, State initialState, State targetState, Control initialControl);
The default ways of setting up a state-transfer problem are,
makeStateTransferProblem(Hamiltonian H, dHdu, psi_init, psi_target, initialControl);
makeStateTransferProblem(H, psi_init, psi_target, initialControl);
Here, most arguments are the same, but the time-stepping algorithms are defaulted based on the Hamiltonian. These defaults are the same as the defaults for time evolution (for spatial physics and second quantized physics). The exception is GPE-physics, where the nonlinearity requires a special backward-stepping algorithm.
In the second function, the dHdu is defaulted to a numerically calculated dHdu.
Before the cost or gradient of the cost function can be calculated, the state-transfer problem must first be updated with the current value of the control.
.update(control)
Accessors used for physics and optimal control algorithms,
.fidelity() -> real // The current value of the fidelity.
.cost() -> real // The current value of the cost functional.
.gradient() -> Control // The gradient of the cost functional.
All the quantities are computed for the current value of the control set by .update. The fidelity is between the target state and the final state, i.e. the initial state time-evolved from the beginning to the end of the control. The cost and gradient are defined by the problem sum.
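Continuing the hedged sketch from above, a typical update/accessor cycle looks like,
problem.update(u);                 // u: some control with the same size and dt as u0
const real F = problem.fidelity(); // overlap between target and final state
const real J = problem.cost();     // includes any added regularization/boundary terms
const auto g = problem.gradient(); // control-shaped gradient of J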
Some additional accessors are,
.control() -> Control // the control passed in the latest call to update, or the initial control if update has not been called.
.initialControl() -> Control // initial value of the control.
.dt() -> real // time step.
.dHdu() -> dHdu; // Derivative of the Hamiltonian
.nPropagationSteps() -> count_t;
.resetPropagationCount();
The number of propagation steps provides a machine-independent measure of speed.
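For example, to measure the work performed by a single update (a hedged sketch continuing the example above),
problem.resetPropagationCount();
problem.update(u);
const auto steps = problem.nPropagationSteps(); // propagation steps used by this update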
The gpeAdjointSplitStep is a time evolution algorithm designed specifically to perform the backward propagation needed for control of the Gross-Pitaevskii equation. Its interface is the same as the normal split-step, but it only works for time-dependent Hamiltonians with fixed dt,
gpeAdjointSplitStep(Hamiltonian H, real dt);
The regularization limits how quickly the control can change. The regularization is introduced by adding a term to the cost function that penalizes fast changes $$ J_r = \frac{\gamma}{2} \int_0^T \dot{u}^2 dt. $$ The $\gamma$ front factor determines the strength of the regularization. The interface is,
makeRegularization(Control initialControl, real factor = 1.0)
Note: The regularization is not meant to be used on its own; it has to be added to a state-transfer problem (see problem summing). The Regularization has the same basic member functions as a state-transfer problem,
.update(Control newControl);
.cost() -> real;
.gradient() -> Control;
.control() -> Control;
It can also be multiplied by a real number, which scales the value of $\gamma$ (initially equal to the factor argument, 1.0 by default),
auto Reg = makeRegularization(initialControl); // gamma = 1.0.
Reg *= 2; // gamma = 2.0.
Reg = 2*Reg; // gamma = 4.0.
Experimental setups usually have minimally or maximally allowed control values; for instance, there might be a maximally allowed laser intensity. These bounds can be introduced in the QEngine by adding a soft boundary term to the cost function, $$ J_b=\frac{\sigma}{2} \int_0^T \Theta(u_a-u)(u-u_a)^2 + \Theta(u-u_b)(u-u_b)^2 \, dt, $$ where $\Theta$ is the Heaviside step function and the bounds on the control are $u_a \leq u(t) \leq u_b$. The value $\sigma$ sets the hardness of the bounds and typically has to be on the order of several thousand. The interface is,
makeBoundary(Control initialControl, RVec lowerLeft, RVec upperRight, real factor = 1.0)
Note: The boundary term is not meant to be used on its own, but to be added to a state transfer problem (see problem summing). The boundary term has the same basic member functions as the state-transfer problem:
.update(Control newControl);
.cost() -> real;
.gradient() -> Control;
.control() -> Control;
It can also be multiplied by a real number, which scales the value of $\sigma$ (initially equal to the factor argument, 1.0 by default),
auto Bounds = makeBoundary(initialControl, lowerLeft, upperRight); // sigma = 1.0
Bounds *= 2; // sigma = 2.0
Bounds = 2*Bounds; // sigma = 4.0
To calculate the gradient of a cost-function, we need to differentiate the Hamiltonian with respect to the different control parameters. For spatial physics, this reduces to differentiating the potential, as the kinetic part of the spatial Hamiltonian is always constant with respect to the control parameters. This differentiation can be performed analytically or numerically. The analytic result must be supplied by the user,
auto V = makePotentialFunction([x](real u1, real u2){return u1*x-u2;}, 1.0, 0.0);
auto dVdu1 = makePotentialFunction([x](real u1, real u2){return x;}, 1.0, 0.0);
auto dVdu2 = makePotentialFunction([x](real u1, real u2){return -1;}, 1.0, 0.0);
auto dVdu = makeAnalyticDiffPotential(dVdu1, dVdu2);
and similarly for Hamiltonian functions,
auto dHdu1 = makeOperatorFunction(...);
auto dHdu2 = makeOperatorFunction(...);
auto dHdu = makeAnalyticDiffHamiltonian(dHdu1, dHdu2);
It is also possible to calculate this derivative numerically using finite differences,
auto V = makePotentialFunction([x](real u1, real u2){return u1*x-u2;}, 1.0, 0.0);
auto dVdu = makeNumericDiffPotential(V);
and similarly for Operator functions,
auto H = makeOperatorFunction(...);
auto dHdu = makeNumericDiffHamiltonian(H);
The analytic approach is typically a factor of two faster than using the numeric approximation.
Algorithms are created from make-functions, just as in the rest of the QEngine. Most optimization algorithms have many places where you might want to adjust specific behavior, so the algorithms provide many hooks for flexibility. For your convenience, these can be passed into the make-function in any order. See details in hooks. Once an algorithm has been created, a call to .optimize()
starts the optimization. This function takes no arguments, as all modifications of the algorithm happen through the hooks.
optimize can be called repeatedly, which just continues optimization. This is useful for stopping-criteria that depend on external cues, e.g. button-presses in a real-time program.
The BFGS-algorithm has some internal state, which can be reset by calling
.resetBfgs();
The GRAPE-algorithm applies standard mathematical optimization techniques, in this case steepest descent or BFGS, directly to optimal control problems,
makeGrape_steepest_L2(problem, hooks...);
makeGrape_steepest_H1(problem, hooks...);
makeGrape_bfgs_L2(problem, hooks...);
makeGrape_bfgs_H1(problem, hooks...);
Hooks: stopper, collector, and stepSizeFinder; the BFGS variants additionally accept a BfgsRestarter (see the hooks section below).
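A hedged sketch of a typical GRAPE optimization, assuming 'problem' was created as in the problems section (the stopper and collector hooks are described below),
auto stopper = makeFidelityStopper(0.999) + makeIterationStopper(2000);
auto collector = makeCollector([](const auto& alg)
{
    std::cout << "iteration: " << alg.iteration() << "; cost: " << alg.problem().cost() << std::endl;
});
auto grape = makeGrape_bfgs_H1(problem, stopper, collector);
grape.optimize();
const auto optimizedControl = grape.problem().control();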
The GROUP-algorithm applies the same standard optimization techniques as GRAPE to optimal control problems, but it does so in a parametrized basis. So instead of optimizing the control $u(t)$, it optimizes the parameters $c_m$, such that $u(t) = u_0(t) + \sum_m c_m u_m(t)$ where $u_0$ is the initial control in the problem. The values $c_m$ are contained in a BasisControl, which is assumed to be initialized to zero, implying $u(t) = u_0(t)$ in the first iteration.
makeGroup_steepest(problem, Basis, hooks...);
makeGroup_bfgs(problem, Basis, hooks...);
Hooks: same as for GRAPE.
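A hedged sketch, assuming 'problem' and 'basis' were created as in the earlier sections (the iteration limit is illustrative),
auto group = makeGroup_bfgs(problem, basis, makeIterationStopper(500));
group.optimize();
const auto coefficients = group.coefficients();          // the optimized c_m
const auto optimizedControl = group.problem().control(); // u0 + sum_m c_m u_m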
This is the dressed version of the GROUP-algorithm. Dressing means that the basis for the algorithm is not fixed but is reshuffled on the fly, in principle allowing the algorithm to escape artificial traps. The interface is,
makeDGroup_steepest(problem, BasisMaker, hooks...);
makeDGroup_bfgs(problem, BasisMaker, hooks...);
Hooks: same as for GROUP, plus a DressedRestarter that triggers the generation of a new basis (see the hooks section below).
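A hedged sketch; 'problem' and 'shapeFunction' are assumed to exist as in earlier sections, and all numeric values are purely illustrative,
auto basisMaker = makeRandSineBasisMaker(20, shapeFunction, 0.3); // see basisMakers below
auto dGroup = makeDGroup_bfgs(problem, basisMaker,
                              makeCostDecreaseDressedRestarter(1e-6),
                              makeFidelityStopper(0.999) + makeIterationStopper(5000));
dGroup.optimize();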
The accessors for the algorithm-objects for GRAPE, GROUP, and dGROUP are very similar, which makes it easier to reuse hooks. Most of them are useful when writing stoppers and collectors. The accessors are:
.problem() // the optimization problem. This is probably the most important accessor. It is not recommended to call .update(...) on it, since that will interfere with the algorithm.
.stepSize() -> real // step size used in the last finished iteration.
.stepDirection() -> Control/BasisControl // step direction in the last finished iteration.
.iteration() -> count_t // index of the current iteration.
BFGS type algorithms have a couple of extra accessors,
.stepTimesY() -> real;
The inner product of the step and the difference in gradients. This must be non-zero in order to satisfy the curvature condition, which is needed to guarantee descent in quasi-Newton methods. If this number is too close to zero, the Hessian approximation may have low quality; if it becomes zero, the algorithm will be flooded with NaNs.
.stepsSinceLastRestart() -> count_t;
This counts the number of iterations since the last restart. If it equals zero, the algorithm restarted at the end of the current iteration.
For GROUP and dGROUP, the algorithm is wrapped in another class that slightly changes the interface.
.coefficients() -> BasisControl // current value of the coefficients defining the control.
.basis() -> Basis // the basis object used in the parametrization.
.baseControl() -> Control // u0, the static part of the expression generating the control.
dGROUP has one specialized accessor,
.numberOfResets() -> count_t;
This counts the number of superiterations used in the optimization.
All hooks are callable objects (e.g. lambdas) that are called at particular times during the algorithm. Each hook has a particular interface allowing the user to modify the behavior of the algorithm (e.g. stopping criteria and data collection) but also more algorithmic things such as calculating the stepsize. They are created by calling the corresponding make-function with a callable object (functor) of the correct interface. None of the make-functions take any other parameters.
In general, the algorithm passes a const reference to itself to the hook, so using generic lambdas is recommended.
A policy that decides when the algorithm should stop. It is always called at the end of an iteration, after the collector.
makeStopper(Callable stopper_functor);
Must be callable as (const Alg&) -> bool. When it returns true, the optimization stops.
Predefined stoppers,
// stops once the target fidelity is reached.
makeFidelityStopper(real targetFidelity = 0.99);
// stops after the algorithm has taken 'maxIterations' iterations.
makeIterationStopper(count_t maxIterations);
Different stoppers can be added, and the resulting stopper will stop whenever one of the individual stoppers returns true. An example is,
auto fStopper = makeFidelityStopper(0.999);
auto iStopper = makeIterationStopper(10000);
auto myStopper = makeStopper([](const auto& alg)
{
return alg.problem().cost() < 0.001;
});
// stops when either fidelity > 0.999, iteration>10000, or cost < 0.001
auto fullStopper = fStopper+iStopper+myStopper;
The collector is the way for the user to collect and display information during the simulation. It is always called at the end of an iteration right before the stopper.
makeCollector(Callable collector_functor);
This must be callable as (const Alg&)
returning nothing.
Predefined collectors,
makeEmptyCollector(); // does nothing.
Similar to stoppers, collectors can be added together, in which case they are called in the order they were added. An example is,
auto coutCollector = makeCollector([](const auto& alg)
{
    std::cout << "iteration: " << alg.iteration() << "; cost: " << alg.problem().cost() << std::endl;
});
auto dc = DataContainer();
auto dcCollector = makeCollector([&dc](const auto& alg)
{
    dc["fidelities"].append(alg.problem().fidelity());
});
// will first write to std::cout, then save the fidelity to the DataContainer.
auto fullCollector = coutCollector + dcCollector;
StepSizeFinders calculate the step size for an optimal control algorithm and they are generally called just after calculating the step direction.
makeStepSizeFinder(Callable)
It must be callable as (const Alg&) -> real, returning the step size.
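For instance, a hedged sketch of a trivial custom finder that always returns the same step size (purely illustrative; the predefined finder below is usually preferable),
auto fixedStep = makeStepSizeFinder([](const auto&) -> real
{
    return 0.5; // constant step size, independent of the algorithm state
});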
Predefined stepSizeFinders,
makeInterpolatingStepSizeFinder(real maxStepSize, real defaultInitialStepSizeGuess,
real minStepSize = 0, real xtol = 1e-10,
count_t maxFunctionEvaluations = 100, real c1 = 1e-4,
real c2 = 0.9);
This uses an interpolating linesearch-algorithm that fulfills the strong Wolfe-conditions, based on the MINPACK subroutine cvsrch. The parameters are the largest and smallest allowed step sizes (maxStepSize, minStepSize), the initial guess used to start each linesearch (defaultInitialStepSizeGuess), the tolerance on the width of the bracketing interval (xtol), the maximum number of cost-function evaluations per linesearch (maxFunctionEvaluations), and the sufficient-decrease and curvature constants of the Wolfe conditions (c1 and c2).
BfgsRestarters are similar to stoppers, but give a condition for BFGS-algorithms to restart, meaning that the Hessian approximation is reset to the identity. It is called at the end of each iteration, right after the stopper.
makeBfgsRestarter(Callable restart_functor)
It has the same interface as a stopper: (const Alg&) -> bool. If it returns true, the approximation of the Hessian is reset and BFGS restarts.
Predefined restarters,
makeStepTimesYBfgsRestarter(real tol, bool verbose = false);
makeIntervalBfgsRestarter(count_t N, bool verbose = false);
The first restarter will check the BFGS-accessor .stepTimesY() and restart if it is below tol. The second will restart BFGS every N iterations. If verbose is true, they will output "restarted bfgs" to the console when triggered.
BfgsRestarters can be summed just like stoppers. The default is probably sufficient in most cases.
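For example, both predefined restarters can be combined (a hedged sketch; the tolerances are illustrative, and 'problem' is assumed from the problems section),
auto restarter = makeStepTimesYBfgsRestarter(1e-12, true) + makeIntervalBfgsRestarter(200, true);
auto grape = makeGrape_bfgs_L2(problem, restarter, makeIterationStopper(1000));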
DressedRestarters are similar to stoppers and BfgsRestarters, but give a condition for dressed algorithms to generate new bases.
makeDressedRestarter(Callable restart_functor)
Same interface as stoppers: (const Alg&) -> bool. If it returns true, a new basis is generated for the algorithm, starting a new superiteration.
Predefined restarters,
makeCostDecreaseDressedRestarter(real minimalCostDecrease, bool verbose = false);
makeStepSizeDressedRestarter(real stepSizeTolerance, bool verbose = false);
The first will restart if the cost decrease between consecutive iterations is less than minimalCostDecrease. The second will restart if the step size is less than stepSizeTolerance. In either case, setting verbose to true writes a message to standard output when triggered.
BasisMakers are specific hooks for dressed algorithms. They are called when the dressedRestarter triggers a restart and they return the new basis.
makeBasisMaker(Callable maker_functor)
The interface is () -> Basis. Unlike other hooks, the basisMaker takes no parameters, as it builds a new basis independently of the algorithm state.
Predefined basisMakers,
makeRandSineBasisMaker(count_t basisSize, Control shapeFunction, real maxRand)
This creates a basis with basisSize elements of the form $S(t) \sin\left((n+r) \frac{\pi t}{T}\right)$, where $n$ increases for each basis-element starting from $n = 1$, $r$ is a random number in the range [-maxRand, maxRand], and $S(t)$ is a shape function that ensures the basis-elements go continuously to zero at $t=0$ and $t=T$. An example is,
const auto basisMaker = makeBasisMaker([basisSize, maxRand, shapeFunction]()
{
    return shapeFunction*makeSineBasis(basisSize, shapeFunction.metaData(), maxRand);
});
The AUTO_MEMBER macro can be used to create members based on make-functions. It is defined as:
#define AUTO_MEMBER(memberName, makeFunction) \
    remove_cvref_t<decltype(makeFunction)> memberName = makeFunction
It allows easily creating default-initialized member-variables from make-functions when the type is not part of the API.
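A hedged sketch of how this might be used; the surrounding class and the chosen make-function are purely illustrative,
class MyOptimization
{
public:
    // the stopper's exact type is internal, so AUTO_MEMBER deduces it from the make-function
    AUTO_MEMBER(stopper, makeIterationStopper(1000));
};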