# Optimisation framework¶

This package allows rapid prototyping of optimisation-based reconstruction problems, i.e. defining and solving different optimisation problems to enforce different properties on the reconstructed image.

Firstly, it provides an object-oriented framework for defining mathematical operators and functions, as well as a collection of useful example operators and functions. Both smooth and non-smooth functions can be used.

Further, it provides a number of high-level generic implementations of optimisation algorithms to solve generically formulated optimisation problems constructed from operator and function objects.

The fundamental components are:

• Operator: A class specifying a (currently linear) operator.
• Function: A class specifying mathematical functions such as a least squares data fidelity.
• Algorithm: An implementation of an iterative optimisation algorithm to solve a particular generic optimisation problem. Algorithms are iterable Python objects which can be run in a for loop, and can be stopped and warm-restarted.

To be able to express more advanced optimisation problems we developed the Block Framework, which provides a generic strategy to treat variational problems in the following form:

$\min \text{Regulariser} + \text{Fidelity}$

The block framework consists of:

• BlockDataContainer
• BlockFunction
• BlockOperator

BlockDataContainer holds DataContainers as a column vector. Basic algebra is possible between BlockDataContainers, and with numbers, lists or numpy arrays.

BlockFunction acts on BlockDataContainer as a separable sum function:

\begin{align}\begin{aligned}f = [f_1,\dots,f_n]\\f([x_1,\dots,x_n]) = f_1(x_1) + \dots + f_n(x_n)\end{aligned}\end{align}
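As a minimal pure-Python sketch (not using the library), a separable sum applies each function to its own block and adds the results:

```python
import numpy as np

def separable_sum(functions, blocks):
    """Evaluate f([x1,...,xn]) = f1(x1) + ... + fn(xn)."""
    assert len(functions) == len(blocks)
    return sum(f(x) for f, x in zip(functions, blocks))

# Example: f1 = squared l2 norm, f2 = l1 norm, each acting on its own block
f1 = lambda x: float(np.sum(x ** 2))
f2 = lambda x: float(np.sum(np.abs(x)))
x = [np.array([1.0, 2.0]), np.array([-3.0])]
value = separable_sum([f1, f2], x)   # 5.0 + 3.0 = 8.0
```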

BlockOperator represents a block matrix of operators

$\begin{split}K = \begin{bmatrix} A_{1} & A_{2} \\ A_{3} & A_{4} \\ A_{5} & A_{6} \end{bmatrix}_{(3,2)} * \quad \underbrace{\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}_{(2,1)}}_{\textbf{x}} = \begin{bmatrix} A_{1}x_{1} + A_{2}x_{2}\\ A_{3}x_{1} + A_{4}x_{2}\\ A_{5}x_{1} + A_{6}x_{2}\\ \end{bmatrix}_{(3,1)} = \begin{bmatrix} y_{1}\\ y_{2}\\ y_{3} \end{bmatrix}_{(3,1)} = \textbf{y}\end{split}$

Columns: operators in the same column share the same domain, $X_{1}$ and $X_{2}$ respectively.

Rows: operators in the same row share the same range, $Y_{1}$, $Y_{2}$ and $Y_{3}$ respectively.

$K : (X_{1}\times X_{2}) \rightarrow (Y_{1}\times Y_{2} \times Y_{3})$

$A_{1}, A_{3}, A_{5}$ share the same domain $X_{1}$, and $A_{2}, A_{4}, A_{6}$ share the same domain $X_{2}$:

$\begin{split}A_{1}: X_{1} \rightarrow Y_{1} \\ A_{3}: X_{1} \rightarrow Y_{2} \\ A_{5}: X_{1} \rightarrow Y_{3} \\ A_{2}: X_{2} \rightarrow Y_{1} \\ A_{4}: X_{2} \rightarrow Y_{2} \\ A_{6}: X_{2} \rightarrow Y_{3}\end{split}$
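The block application above can be sketched in plain numpy, with each $A_{ij}$ represented by an ordinary matrix (an illustration only, not the library's BlockOperator):

```python
import numpy as np

def block_apply(blocks, x):
    """Apply a block operator (list of rows of matrices) to a block vector:
    y_i = sum_j A_ij @ x_j, as in the (3,2) example above."""
    return [sum(A @ xj for A, xj in zip(row, x)) for row in blocks]

A = [[np.eye(2), 2 * np.eye(2)],
     [np.zeros((2, 2)), np.eye(2)],
     [np.eye(2), np.eye(2)]]
x = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
y = block_apply(A, x)   # block vector with 3 blocks: [y1, y2, y3]
```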

For instance, with these ingredients one may write the following objective function,

$\alpha ||\nabla u||_{2,1} + ||u - g||_2^2$

where $g$ represents the measured values, $u$ the solution, $\nabla$ is the gradient operator, $\| \cdot \|_{2,1}$ is a norm for the output of the gradient operator and $\| u - g \|^2_2$ is the least squares fidelity function. Defining

\begin{align}\begin{aligned}\begin{split}K = \begin{bmatrix} \nabla \\ \mathbb{1} \end{bmatrix}\end{split}\\F(x) = \Big[ \alpha \lVert ~x~ \rVert_{2,1} ~~ , ~~ || x - g||_2^2 \Big]\\w = [ u ]\end{aligned}\end{align}

Then we have rewritten the problem as

$F(Kw) = \alpha \left\lVert \nabla u \right\rVert_{2,1} + ||u-g||^2_2$

Which in Python reads:

```python
op1 = Gradient(ig, correlation=Gradient.CORRELATION_SPACE)
op2 = Identity(ig, ag)

# Create BlockOperator
K = BlockOperator(op1, op2, shape=(2, 1))

# Create functions
F = BlockFunction(alpha * MixedL21Norm(), 0.5 * L2NormSquared(b=noisy_data))
```


## Algorithm¶

A number of generic algorithm implementations are provided including Gradient Descent (GD), Conjugate Gradient Least Squares (CGLS), Simultaneous Iterative Reconstruction Technique (SIRT), Primal Dual Hybrid Gradient (PDHG) and Fast Iterative Shrinkage Thresholding Algorithm (FISTA).

An algorithm is designed for a particular generic optimisation problem and accepts a number of Function and/or Operator objects as input, which define a specific instance of the generic optimisation problem to be solved. Algorithms are iterable objects which can be run in a for loop. The user can provide a stopping criterion different from the default max_iteration.

New algorithms can easily be created by extending the Algorithm class. The user is required to implement only four methods: set_up, __init__, update and update_objective.

• set_up and __init__ are used to configure the algorithm
• update is the actual iteration updating the solution
• update_objective defines how the objective is calculated.

For example, the implementation of the update step of the Gradient Descent algorithm to minimise a Function is just:

```python
def update(self):
    self.x += -self.rate * self.objective_function.gradient(self.x)

def update_objective(self):
    self.loss.append(self.objective_function(self.x))
```


The Algorithm provides the infrastructure to continue iteration, to access the values of the objective function in subsequent iterations, the time for each iteration, and to provide a nice print to screen of the status of the optimisation.
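The iterable pattern described above can be sketched in pure Python (names here are illustrative, not the library API): `__next__` performs one update, the stopping criterion raises StopIteration, and raising max_iteration afterwards warm-restarts the loop.

```python
class ToyGD:
    """Toy gradient descent on a 1D objective, written as an iterable."""

    def __init__(self, x0, grad, rate, max_iteration=100):
        self.x, self.grad, self.rate = x0, grad, rate
        self.iteration, self.max_iteration = 0, max_iteration
        self.loss = []

    def __iter__(self):
        return self

    def __next__(self):
        if self.iteration >= self.max_iteration:    # stopping criterion
            raise StopIteration
        self.x -= self.rate * self.grad(self.x)     # update
        self.loss.append(self.objective(self.x))    # update_objective
        self.iteration += 1
        return self.x

    def objective(self, x):
        return x * x

alg = ToyGD(x0=10.0, grad=lambda x: 2 * x, rate=0.25, max_iteration=50)
for x in alg:        # runs until the stopping criterion is met
    pass
alg.max_iteration = 100   # warm restart: raise the cap and iterate again
for x in alg:
    pass
```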

class ccpi.optimisation.algorithms.Algorithm(**kwargs)[source]

Base class for iterative algorithms

provides the minimal infrastructure.

Algorithms are iterables, so they can easily be run in a for loop. They will stop as soon as the stopping criterion is met. The user is required to implement the set_up, __init__, update and update_objective methods

A courtesy method run is available to run n iterations. The method accepts a callback function that receives the current iteration number and the current objective value, and can be used to trigger printing to screen and other user interactions. The run method will stop when the stopping criterion is met.

__init__(**kwargs)[source]

Constructor

Set the minimal number of parameters:

Parameters: max_iteration (int, optional, default 0) – maximum number of iterations. update_objective_interval – the interval at which the current objective is saved: 1 means every iteration, 2 every 2 iterations, and so forth. This is by default 1 and should be increased when evaluating the objective is computationally expensive.
__iter__()[source]

Algorithm is an iterable

__next__()[source]

Algorithm is an iterable

calling this method triggers update and update_objective

__weakref__

list of weak references to the object (if defined)

get_last_loss(**kwargs)[source]

Returns the last stored value of the loss function

if update_objective_interval is 1 it is the value of the objective at the current iteration. If update_objective_interval > 1 it is the last stored value.

get_last_objective(**kwargs)[source]

alias to get_last_loss

get_output()[source]

Returns the solution found

loss

returns the list of the values of the objective during the iteration

The length of this list may be shorter than the number of iterations run when the update_objective_interval > 1

max_iteration

gets the maximum number of iterations

max_iteration_stop_cryterion()[source]

default stop criterion for iterative algorithms: max_iteration reached

next()[source]

Algorithm is an iterable

python2 backwards compatibility

objective

alias of loss

run(iterations=None, verbose=True, callback=None, very_verbose=False)[source]

run n iterations and update the user with the callback if specified

Parameters: iterations – number of iterations to run; if not set, the algorithm will run until max_iteration or until the stop criterion is reached. verbose – toggles verbose output to screen. callback – a function that receives the current iteration number, the last objective function value and the current solution. very_verbose – bool, useful for algorithms with primal and dual objectives (PDHG); prints both primal and dual objectives to screen.
set_up(*args, **kwargs)[source]

Set up the algorithm

should_stop()[source]

default stopping criterion: number of iterations

The user can change this in concrete implementations of iterative algorithms.

update()[source]

A single iteration of the algorithm

update_objective()[source]

calculates the objective with the current solution

verbose_output(verbose=False)[source]

Creates a nice tabulated output

class ccpi.optimisation.algorithms.GradientDescent(x_init=None, objective_function=None, step_size=None, **kwargs)[source]

armijo_rule()[source]

Applies the Armijo rule to calculate the step size (step_size)

The Armijo rule runs a while loop to find the appropriate step_size, starting from a very large number (alpha) and iteratively halving it until a sufficient-decrease criterion is met. To avoid infinite loops, a maximum number of while-loop iterations is imposed.

If $\alpha = 10^{\gamma}$ and the target step size is $10^{-\delta}$, the while loop reaches it after $k$ halvings, where

$\frac{\alpha}{2^{k}} = 10^{-\delta} \quad\Rightarrow\quad 2^{k} = 10^{\gamma+\delta} \quad\Rightarrow\quad k = \frac{\gamma+\delta}{\log_{10} 2} \approx 3.3\,(\gamma+\delta)$

Taking $\delta = \gamma$ by default gives the cap kmax = numpy.ceil(2 * gamma / numpy.log10(2)) = numpy.ceil(2 * numpy.log10(alpha) / numpy.log10(2)).
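The halving loop above can be sketched as follows (a hedged illustration, not the library's armijo_rule; the sufficient-decrease constant c and the starting alpha are assumptions):

```python
import numpy as np

def armijo_step_size(f, grad, x, alpha0=1e6, c=1e-4, kmax=100):
    """Backtrack by halving alpha until the Armijo condition
    f(x - alpha*g) <= f(x) - c*alpha*||g||^2 holds, capped at kmax halvings."""
    g = grad(x)
    gnorm2 = float(np.dot(g, g))
    alpha = alpha0
    for _ in range(kmax):                 # avoid infinite loops
        if f(x - alpha * g) <= f(x) - c * alpha * gnorm2:
            return alpha
        alpha /= 2.0                      # halve, as in the rule above
    return alpha

f = lambda x: 0.5 * float(np.dot(x, x))   # simple quadratic objective
grad = lambda x: x
x = np.array([4.0, -2.0])
alpha = armijo_step_size(f, grad, x)      # a step that decreases f
```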

set_up(x_init, objective_function, step_size)[source]

initialisation of the algorithm

Parameters: x_init – initial guess objective_function – objective function to be minimised step_size – step size
should_stop()[source]

default stopping criterion: number of iterations

The user can change this in concrete implementations of iterative algorithms.

update()[source]

Single iteration

update_objective()[source]

calculates the objective with the current solution

class ccpi.optimisation.algorithms.CGLS(x_init=None, operator=None, data=None, tolerance=1e-06, **kwargs)[source]

Conjugate Gradient Least Squares algorithm

Problem:

$\min || A x - b ||^2_2$

Parameters:

• operator – Linear operator for the inverse problem
• x_init – Initial guess (default: 0)
• data – Acquired data to reconstruct
• tolerance – Tolerance/stopping criterion to end the CGLS algorithm
Reference:
https://web.stanford.edu/group/SOL/software/cgls/
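A minimal numpy sketch of the CGLS recursion for $\min \| A x - b \|^2_2$ (illustrative only, following the classic Stanford CGLS formulation rather than the library implementation):

```python
import numpy as np

def cgls(A, b, x0=None, tolerance=1e-6, max_iteration=100):
    """Conjugate Gradient Least Squares: solves the normal equations
    A^T A x = A^T b without ever forming A^T A."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    r = b - A @ x                 # residual
    s = A.T @ r                   # normal-equations residual
    p = s.copy()
    gamma = float(s @ s)
    for _ in range(max_iteration):
        q = A @ p
        alpha = gamma / float(q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = float(s @ s)
        if np.sqrt(gamma_new) < tolerance:   # stopping criterion
            break
        p = s + (gamma_new / gamma) * p      # conjugate direction update
        gamma = gamma_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
b = np.array([1.0, 0.0, 2.0])
x = cgls(A, b)   # least-squares solution satisfying A^T A x = A^T b
```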
flag()[source]

returns whether the tolerance has been reached

set_up(x_init, operator, data, tolerance=1e-06)[source]

initialisation of the algorithm

Parameters: operator – Linear operator for the inverse problem. x_init – Initial guess (default: 0). data – Acquired data to reconstruct. tolerance – Tolerance/stopping criterion to end the CGLS algorithm.
should_stop()[source]

stopping criterion

update()[source]

single iteration

update_objective()[source]

calculates the objective with the current solution

class ccpi.optimisation.algorithms.FISTA(x_init=None, f=None, g=<ccpi.optimisation.functions.Function.ZeroFunction object>, **kwargs)[source]

Fast Iterative Shrinkage-Thresholding Algorithm

Problem :

$\min_{x} f(x) + g(x)$

Parameters:

• x_init – Initial guess (default: 0)
• f – Differentiable function
• g – Convex function with "simple" proximal operator

Reference:

Beck, A. and Teboulle, M., 2009. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences,2(1), pp.183-202.
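A hedged numpy sketch of the FISTA iteration for the concrete problem $\min_x \tfrac{1}{2}\|Ax - b\|^2_2 + \lambda\|x\|_1$ (illustrative only; the library class instead takes Function objects f and g):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, iterations=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iterations):
        grad = A.T @ (A @ y - b)             # gradient step on the smooth part
        x_new = soft(y - grad / L, lam / L)  # proximal step on the l1 part
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

A = np.eye(2)
b = np.array([1.0, -0.5])
x = fista(A, b, lam=0.2)   # for A = I the minimiser is soft(b, 0.2)
```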
__init__(x_init=None, f=None, g=<ccpi.optimisation.functions.Function.ZeroFunction object>, **kwargs)[source]

FISTA algorithm creator

initialisation can be done at creation time if all proper variables are passed or later with set_up

Optional parameters:

Parameters: x_init – Initial guess (default: 0). f – Differentiable function. g – Convex function with "simple" proximal operator.
set_up(x_init, f, g=<ccpi.optimisation.functions.Function.ZeroFunction object>)[source]

initialisation of the algorithm

Parameters: x_init – Initial guess (default: 0). f – Differentiable function. g – Convex function with "simple" proximal operator.
update()[source]

A single iteration of the algorithm

update_objective()[source]

calculates the objective with the current solution

class ccpi.optimisation.algorithms.PDHG(f=None, g=None, operator=None, tau=None, sigma=1.0, x_init=None, **kwargs)[source]

Primal Dual Hybrid Gradient

Problem:

$\min_{x} f(Kx) + g(x)$
Parameters: operator – Linear Operator, K. f – Convex function with "simple" proximal of its conjugate. g – Convex function with "simple" proximal. tau – Step size parameter for the primal problem. sigma – Step size parameter for the dual problem.

Remark: convergence is guaranteed provided that

$\tau \sigma \|K\|^{2} <1$

Reference:

(a) A. Chambolle and T. Pock (2011), “A first-order primal–dual algorithm for convex problems with applications to imaging”, J. Math. Imaging Vision 40, 120–145.

(b) E. Esser, X. Zhang and T. F. Chan (2010), “A general framework for a class of first order primal–dual algorithms for convex optimization in imaging science”, SIAM J. Imaging Sci. 3, 1015–1046.

objective

alias of loss

set_up(f, g, operator, tau=None, sigma=1.0, x_init=None)[source]

initialisation of the algorithm

Parameters: operator – a Linear Operator. f – Convex function with "simple" proximal of its conjugate. g – Convex function with "simple" proximal. tau – Step size parameter for the primal problem. sigma – Step size parameter for the dual problem. x_init – Initial guess (default: 0).
update()[source]

A single iteration of the algorithm

update_objective()[source]

calculates the objective with the current solution

class ccpi.optimisation.algorithms.SIRT(x_init=None, operator=None, data=None, constraint=None, **kwargs)[source]

Simultaneous Iterative Reconstruction Technique

Problem:

$A x = b$
Parameters: x_init – Initial guess. operator – Linear operator for the inverse problem. data – Acquired data to reconstruct. constraint – a function with a proximal method, e.g. IndicatorBox to enforce box constraints such as $x\in[0, 1]$ (default: None).
set_up(x_init, operator, data, constraint=None)[source]

initialisation of the algorithm

Parameters: x_init – Initial guess. operator – Linear operator for the inverse problem. data – Acquired data to reconstruct. constraint – a function with a proximal method, e.g. IndicatorBox to enforce box constraints such as $x\in[0, 1]$ (default: None).
update()[source]

A single iteration of the algorithm

update_objective()[source]

calculates the objective with the current solution

## Operator¶

The two most important methods are direct and adjoint, which describe the result of applying the operator, and its adjoint respectively, to a compatible DataContainer input. The output is another DataContainer object or a subclass thereof. An important special case is representing the tomographic forward and backprojection operations.

### Operator base classes¶

All operators extend the Operator class. A special class is the LinearOperator which represents an operator for which the adjoint operation is defined. A ScaledOperator represents the multiplication of any operator with a scalar.

class ccpi.optimisation.operators.Operator(domain_geometry, **kwargs)[source]

Operator that maps from a space X -> Y

__init__(domain_geometry, **kwargs)[source]

Creator

Parameters: domain_geometry – domain of the operator range_geometry (optional, default None) – range of the operator
__neg__()[source]

Return -self

__rmul__(scalar)[source]

Defines the multiplication by a scalar on the left

returns a ScaledOperator

__sub__(other)[source]

Returns the subtraction of the operators.

__weakref__

list of weak references to the object (if defined)

calculate_norm(**kwargs)[source]

Calculates the norm of the Operator

direct(x, out=None)[source]

Returns the application of the Operator on x

domain_geometry()[source]

Returns the domain of the Operator: X space

is_linear()[source]

Returns if the operator is linear

norm(**kwargs)[source]

Returns the norm of the Operator

Calling norm triggers the calculation of the norm of the operator. Normally this is a computationally expensive task, so the result is stored in a member of the class. If the calculation has already run, subsequent calls to norm just return the saved member. It is possible to force recalculation by setting the optional force parameter. Notice that norm takes no notice of the number of iterations or of the initialisation of the PowerMethod, so if you want to recalculate with a higher number of iterations or a different starting point (or both) you need to set force=True.

Parameters: iterations (int, optional, default = 25) – number of iterations to run x_init (same type as domain, a subclass of DataContainer, optional, default None) – starting point for the iteration in the operator domain force (boolean, default False) – forces the recalculation of the norm
range_geometry()[source]

Returns the range of the Operator: Y space

class ccpi.optimisation.operators.LinearOperator(domain_geometry, **kwargs)[source]

A Linear Operator that maps from a space X <-> Y

static PowerMethod(operator, iterations, x_init=None)[source]

Power method to calculate iteratively the Lipschitz constant

Parameters: operator (LinearOperator) – input operator. iterations – number of iterations to run. x_init – starting point for the iteration in the operator domain. Returns: a tuple with L, the list of L at each iteration, and the data the iteration worked on.
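The power method can be sketched in plain numpy for a matrix operator (a hedged illustration of the idea, not the library's static method): iterate $x \leftarrow A^{T}Ax / \|A^{T}Ax\|$ and read off $\|A\|_2$ from the Rayleigh quotient.

```python
import numpy as np

def power_method(A, iterations=100, x_init=None):
    """Estimate ||A||_2, the square root of the largest eigenvalue of A^T A."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[1]) if x_init is None else x_init.copy()
    for _ in range(iterations):
        y = A.T @ (A @ x)                 # one application of A^T A
        x = y / np.linalg.norm(y)         # renormalise
    return np.sqrt(x @ (A.T @ (A @ x)))   # Rayleigh quotient -> ||A||_2

A = np.array([[2.0, 0.0], [0.0, 1.0]])
L = power_method(A)   # largest singular value of A
```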
__init__(domain_geometry, **kwargs)[source]

Creator

Parameters: domain_geometry – domain of the operator range_geometry (optional, default None) – range of the operator
adjoint(x, out=None)[source]

returns the adjoint/inverse operation

only available to linear operators

calculate_norm(**kwargs)[source]

Returns the norm of the LinearOperator as calculated by the PowerMethod

Parameters: iterations (int, optional, default = 25) – number of iterations to run x_init (same type as domain, a subclass of DataContainer, optional, None) – starting point for the iteration in the operator domain force (boolean, default False) – forces the recalculation of the norm
static dot_test(operator, domain_init=None, range_init=None, verbose=False, **kwargs)[source]

Performs a dot (adjoint) test on the operator

Evaluates if the following equivalence holds

$\langle Ax, y \rangle = \langle x, A^{T}y \rangle$

The equivalence is tested within a user specified precision

abs(desired-actual) < 1.5 * 10**(-decimal)

Parameters: operator – operator to test. range_init – optional initialisation container in the operator range. domain_init – optional initialisation container in the operator domain. decimal (int, optional, default 4) – desired precision. Returns: boolean, True if the test is passed.
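For a matrix operator the adjoint test is a one-liner in numpy; this hedged sketch mirrors the idea (random vectors, a tolerance of 1.5 * 10**(-decimal)) without using the library:

```python
import numpy as np

def dot_test(A, decimal=4, seed=0):
    """Check <A x, y> == <x, A^T y> for random x, y within a tolerance."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    y = rng.standard_normal(A.shape[0])
    lhs = float((A @ x) @ y)      # <A x, y>
    rhs = float(x @ (A.T @ y))    # <x, A^T y>
    return abs(lhs - rhs) < 1.5 * 10 ** (-decimal)

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, -1.0]])
ok = dot_test(A)   # exact up to floating-point rounding
```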
is_linear()[source]

Returns if the operator is linear

class ccpi.optimisation.operators.ScaledOperator(operator, scalar, **kwargs)[source]

A class to represent the scalar multiplication of an Operator with a scalar. It holds an operator and a scalar. Basically it returns the multiplication of the result of direct and adjoint of the operator with the scalar. For the rest it behaves like the operator it holds.

Parameters: operator – a Operator or LinearOperator scalar – a scalar multiplier
Example: the scaled operator behaves like the following:

```
sop = ScaledOperator(operator, scalar)
sop.direct(x) = scalar * operator.direct(x)
sop.norm() = operator.norm()
sop.range_geometry() = operator.range_geometry()
sop.domain_geometry() = operator.domain_geometry()
```

__init__(operator, scalar, **kwargs)[source]

creator

Parameters: operator – a Operator or LinearOperator scalar (float) – a scalar multiplier
adjoint(x, out=None)[source]

direct(x, out=None)[source]

direct method

is_linear()[source]

returns whether the operator is linear

Returns: boolean
norm(**kwargs)[source]

norm of the operator

class ccpi.optimisation.operators.SumOperator(operator1, operator2)[source]
__init__(operator1, operator2)[source]

Creator

Parameters: domain_geometry – domain of the operator range_geometry (optional, default None) – range of the operator
calculate_norm(**kwargs)[source]

Calculates the norm of the Operator

direct(x, out=None)[source]

Returns the application of the Operator on x

is_linear()[source]

Returns if the operator is linear

### Trivial operators¶

Trivial operators are the following.

class ccpi.optimisation.operators.Identity(domain_geometry, range_geometry=None)[source]

Identity: $Id: X \rightarrow Y$, $Id(x) = x$

X : gm_domain, Y : gm_range (default: Y = X)

__init__(domain_geometry, range_geometry=None)[source]

Creator

Parameters: domain_geometry – domain of the operator range_geometry (optional, default None) – range of the operator
adjoint(x, out=None)[source]

Returns Id(x)

calculate_norm(**kwargs)[source]

Evaluates operator norm of Identity

direct(x, out=None)[source]

Returns Id(x)

class ccpi.optimisation.operators.ZeroOperator(domain_geometry, range_geometry=None)[source]

ZeroOperator: $O: X \rightarrow Y$, maps any element $x\in X$ to the zero element of $Y$: $O(x) = 0_{Y}$

Parameters: gm_domain – domain of the operator gm_range – range of the operator, default: same as domain

Note:

\begin{align}\begin{aligned}O^{*}: Y^{*} -> X^{*} \text{(Adjoint)}\\< O(x), y > = < x, O^{*}(y) >\end{aligned}\end{align}
__init__(domain_geometry, range_geometry=None)[source]

Creator

Parameters: domain_geometry – domain of the operator range_geometry (optional, default None) – range of the operator
adjoint(x, out=None)[source]

Returns O^{*}(y)

calculate_norm(**kwargs)[source]

Evaluates operator norm of ZeroOperator

direct(x, out=None)[source]

Returns O(x)

class ccpi.optimisation.operators.LinearOperatorMatrix(A)[source]

Matrix wrapped into a LinearOperator

Param: a numpy matrix
__init__(A)[source]

creator

Parameters: A – numpy ndarray representing a matrix
adjoint(x, out=None)[source]

returns the adjoint/inverse operation

only available to linear operators

calculate_norm(**kwargs)[source]

Returns the norm of the LinearOperator as calculated by the PowerMethod

Parameters: iterations (int, optional, default = 25) – number of iterations to run x_init (same type as domain, a subclass of DataContainer, optional, None) – starting point for the iteration in the operator domain force (boolean, default False) – forces the recalculation of the norm
direct(x, out=None)[source]

Returns the application of the Operator on x

In the following the required classes for the implementation of the Gradient operator.

class ccpi.optimisation.operators.Gradient(domain_geometry, bnd_cond='Neumann', **kwargs)[source]

Gradient Operator: computes first-order forward/backward differences on 2D, 3D, 4D ImageData under Neumann/Periodic boundary conditions.

Parameters:

• gm_domain (ImageGeometry) – set up the domain of the function
• bnd_cond (str, optional) – boundary conditions, 'Neumann' or 'Periodic'; defaults to 'Neumann'
• correlation (str) – 'Space' or 'SpaceChannels', defaults to 'Space'
• backend (str) – 'c' or 'numpy', defaults to 'c' if correlation is 'SpaceChannels' or channels = 1
• num_threads (int) – if backend is 'c', the number of threads to use; default is number of cpus/2

Example (2D):

$\nabla : X \rightarrow Y, \quad u \in X, \quad \nabla(u) = [\partial_{y} u, \partial_{x} u]$

$v = (v_{1}, v_{2}) \in Y, \quad \nabla^{*}(v) = \partial_{y} v_{1} + \partial_{x} v_{2}$
__init__(domain_geometry, bnd_cond='Neumann', **kwargs)[source]

Constructor method

adjoint(x, out=None)[source]

Computes the first-order backward differences

Parameters: x (BlockDataContainer) – Gradient images for each dimension in ImageGeometry domain out (ImageData, optional) – pre-allocated output memory to store result result data if not passed as parameter ImageData
direct(x, out=None)[source]

Computes the first-order forward differences

Parameters: x (ImageData) – Image data out (BlockDataContainer, optional) – pre-allocated output memory to store result result data if not passed as parameter BlockDataContainer
class ccpi.optimisation.operators.FiniteDiff(gm_domain, gm_range=None, direction=0, bnd_cond='Neumann')[source]

Finite Difference Operator:

Computes first-order forward/backward differences
on 2D, 3D, 4D ImageData under Neumann/Periodic boundary conditions

Order of the Gradient ( ImageGeometry may contain channels ):

Grad_order = ['channels', 'direction_z', 'direction_y', 'direction_x']
Grad_order = ['channels', 'direction_y', 'direction_x']
Grad_order = ['direction_z', 'direction_y', 'direction_x']
Grad_order = ['channels', 'direction_z', 'direction_y', 'direction_x']

__init__(gm_domain, gm_range=None, direction=0, bnd_cond='Neumann')[source]

creator

Parameters: gm_domain (AcquisitionGeometry or ImageGeometry) – domain of the operator gm_range (AcquisitionGeometry or ImageGeometry, optional) – optional range of the operator direction (int, optional, default 0) – optional axis in the input DataContainer along which to calculate the finite differences, default 0 bnd_cond (str, default Neumann) – boundary condition, either Neumann or Periodic.
adjoint(x, out=None)[source]

returns the adjoint/inverse operation

only available to linear operators

direct(x, out=None)[source]

Returns the application of the Operator on x
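A first-order forward difference with a Neumann boundary condition along one axis can be sketched in numpy (illustrative only, not the library's FiniteDiff; with Neumann replication the difference at the last entry is zero):

```python
import numpy as np

def finite_diff_neumann(u, axis=0):
    """Forward differences along `axis`; Neumann BC pads a zero at the end."""
    d = np.diff(u, axis=axis)          # u[i+1] - u[i]
    pad = [(0, 0)] * u.ndim
    pad[axis] = (0, 1)                 # zero difference at the boundary
    return np.pad(d, pad)

u = np.array([[1.0, 2.0, 4.0],
              [1.0, 3.0, 6.0]])
dx = finite_diff_neumann(u, axis=1)
```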

class ccpi.optimisation.operators.SparseFiniteDiff(domain_geometry, range_geometry=None, direction=0, bnd_cond='Neumann')[source]

Create Sparse Matrices for the Finite Difference Operator

__init__(domain_geometry, range_geometry=None, direction=0, bnd_cond='Neumann')[source]

Initialize self. See help(type(self)) for accurate signature.

__weakref__

list of weak references to the object (if defined)

class ccpi.optimisation.operators.SymmetrizedGradient(domain_geometry, bnd_cond='Neumann', **kwargs)[source]

Symmetrized Gradient Operator: E: V -> W

V : range of the Gradient Operator W : range of the Symmetrized Gradient

Example (2D):

\begin{align}\begin{aligned}v = (v_{1}, v_{2})\\Ev = \tfrac{1}{2}\left( \nabla v + (\nabla v)^{T} \right) = \begin{bmatrix} \partial_{y} v_{1} & \tfrac{1}{2}(\partial_{x} v_{1} + \partial_{y} v_{2}) \\ \tfrac{1}{2}(\partial_{x} v_{1} + \partial_{y} v_{2}) & \partial_{x} v_{2} \end{bmatrix}\end{aligned}\end{align}
__init__(domain_geometry, bnd_cond='Neumann', **kwargs)[source]

creator

Parameters: domain_geometry – domain of the operator bnd_cond (str, optional, default Neumann) – boundary condition, either Neumann or Periodic. correlation (str, optional, default Channel) – SpaceChannel or Channel
adjoint(x, out=None)[source]

returns the adjoint/inverse operation

only available to linear operators

direct(x, out=None)[source]

Returns E(v)

### Shrinkage operator¶

class ccpi.optimisation.operators.ShrinkageOperator[source]

Proximal operator of $f(x) = \| x \|_{1}$:

$\mathrm{prox}_{\tau f}(x) = \mathrm{sign}(x) \, \max(|x| - \tau, 0)$

__call__(x, tau, out=None)[source]

Call self as a function.

__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

__weakref__

list of weak references to the object (if defined)

## Function¶

A Function represents a mathematical function of one or more inputs and is intended to accept DataContainers as input as well as any additional parameters.

Fixed parameters can be passed in during the creation of the function object. The methods of the function reflect the properties of it, for example, if the function represented is differentiable the function should contain a method gradient which should return the gradient of the function evaluated at an input point. If the function is not differentiable but allows a simple proximal operator, the method proximal should return the proximal operator evaluated at an input point. The function value is evaluated by calling the function itself, e.g. f(x) for a Function f and input point x.

### Base classes¶

class ccpi.optimisation.functions.Function(L=None)[source]

Abstract class representing a function

Parameters: L – Lipschitz constant of the gradient of the function F(x), when it is differentiable. domain – The domain of the function.
__add__(other)[source]

Returns the sum of the functions.

Cases:

1. the sum of two functions: $(F_{1}+F_{2})(x) = F_{1}(x) + F_{2}(x)$
2. the sum of a function with a scalar: $(F_{1}+scalar)(x) = F_{1}(x) + scalar$
__call__(x)[source]

Returns the value of the function F at x: $$F(x)$$

__init__(L=None)[source]

Initialize self. See help(type(self)) for accurate signature.

__radd__(other)[source]

__rmul__(scalar)[source]

Returns a function multiplied by a scalar.

__sub__(other)[source]

Returns the subtraction of the functions.

__weakref__

list of weak references to the object (if defined)

centered_at(center)[source]

Returns a translated function: if a function $F(x)$ is centered at the origin, the TranslateFunction $F(x - b)$ is centered at point b.

convex_conjugate(x)[source]

Returns the convex conjugate of function $$F$$ at $$x^{*}$$,

$F^{*}(x^{*}) = \underset{x}{\sup} \ \langle x^{*}, x \rangle - F(x)$
gradient(x, out=None)[source]

Returns the value of the gradient of function F at x, if it is differentiable

$F'(x)$
proximal(x, tau, out=None)[source]

Returns the proximal operator of function $\tau F$ at x

$\mathrm{prox}_{\tau F}(x) = \underset{z}{\mathrm{argmin}} \ \frac{1}{2}\|z - x\|^{2} + \tau F(z)$

proximal_conjugate(x, tau, out=None)[source]

Returns the proximal operator of the convex conjugate of function $$\tau F$$ at $$x^{*}$$

$\mathrm{prox}_{\tau F^{*}}(x^{*}) = \underset{z^{*}}{\mathrm{argmin}} \frac{1}{2}\|z^{*} - x^{*}\|^{2} + \tau F^{*}(z^{*})$

Due to Moreau’s identity, we have an analytic formula to compute the proximal operator of the convex conjugate $$F^{*}$$

$\mathrm{prox}_{\tau F^{*}}(x) = x - \tau\mathrm{prox}_{\tau^{-1} F}(\tau^{-1}x)$
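Moreau's identity can be checked numerically for $F = \|\cdot\|_1$, whose proximal operator is soft-thresholding and whose conjugate is the indicator of the $\ell_\infty$ unit ball (a hedged sketch, not the library code):

```python
import numpy as np

def prox_l1(x, t):
    """Prox of t * ||.||_1, i.e. soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_l1_conjugate(x, tau):
    """Prox of tau * F* via Moreau's identity:
    prox_{tau F*}(x) = x - tau * prox_{(1/tau) F}(x / tau)."""
    return x - tau * prox_l1(x / tau, 1.0 / tau)

x = np.array([3.0, -0.5, 0.2])
tau = 2.0
y = prox_l1_conjugate(x, tau)
# F* is the indicator of the l_inf unit ball, so its prox is the
# projection onto [-1, 1]^n, independently of tau.
```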
class ccpi.optimisation.functions.ScaledFunction(function, scalar)[source]

ScaledFunction represents the multiplication of a Function by a scalar.

Let F be a function and $\alpha$ a scalar.

If $$G(x) = \alpha F(x)$$ then:

1. $$G(x) = \alpha F(x)$$ ( __call__ method )
2. $$G'(x) = \alpha F'(x)$$ ( gradient method )
3. $$G^{*}(x^{*}) = \alpha F^{*}(\frac{x^{*}}{\alpha})$$ ( convex_conjugate method )
4. $$\mathrm{prox}_{\tau G}(x) = \mathrm{prox}_{(\tau\alpha) F}(x)$$ ( proximal method )
__call__(x, out=None)[source]

Returns the value of the scaled function.

$G(x) = \alpha F(x)$
__init__(function, scalar)[source]

Initialize self. See help(type(self)) for accurate signature.

convex_conjugate(x)[source]

Returns the convex conjugate of the scaled function.

$G^{*}(x^{*}) = \alpha F^{*}(\frac{x^{*}}{\alpha})$
gradient(x, out=None)[source]

Returns the gradient of the scaled function.

$G'(x) = \alpha F'(x)$
proximal(x, tau, out=None)[source]

Returns the proximal operator of the scaled function.

$\mathrm{prox}_{\tau G}(x) = \mathrm{prox}_{(\tau\alpha) F}(x)$
proximal_conjugate(x, tau, out=None)[source]

Returns the proximal operator of the convex conjugate of the scaled function at x for step size tau

class ccpi.optimisation.functions.ConstantFunction(constant=0)[source]

ConstantFunction: $$F(x) = constant, constant\in\mathbb{R}$$

__call__(x)[source]

Returns the value of the function, $$F(x) = constant$$

__init__(constant=0)[source]

Initialize self. See help(type(self)) for accurate signature.

convex_conjugate(x)[source]

The convex conjugate of constant function $$F(x) = c\in\mathbb{R}$$ is

$\begin{split}F^{*}(x^{*}) = \begin{cases} -c, & \text{if } x^{*} = 0\\ \infty, & \text{otherwise} \end{cases}\end{split}$

However, $$x^{*} = 0$$ only in the limit of iterations, so in fact this can be infinity. We do not want to have inf values in the convex conjugate, so we have to penalise this value accordingly. The following penalisation is useful in the PDHG algorithm, when we compute primal & dual objectives for convergence purposes.

$F^{*}(x^{*}) = \sum \max\{x^{*}, 0\}$
gradient(x, out=None)[source]

Returns the value of the gradient of the function, $$F'(x)=0$$

proximal(x, tau, out=None)[source]

Returns the proximal operator of the constant function, which is the same element, i.e.,

$\mathrm{prox}_{\tau F}(x) = x$
class ccpi.optimisation.functions.ZeroFunction[source]

ZeroFunction represents the zero function, $$F(x) = 0$$

__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

class ccpi.optimisation.functions.TranslateFunction(function, center)[source]

TranslateFunction represents the translation of a function F with respect to a center b.

Given a function F, consider $$G(x) = F(x - b)$$.

Function F is centered at 0, whereas G is centered at the point b.

If $$G(x) = F(x - b)$$ then:

1. $$G(x) = F(x - b)$$ ( __call__ method )
2. $$G'(x) = F'(x - b)$$ ( gradient method )
3. $$G^{*}(x^{*}) = F^{*}(x^{*}) + <x^{*}, b >$$ ( convex_conjugate method )
4. $$\mathrm{prox}_{\tau G}(x) = \mathrm{prox}_{\tau F}(x - b) + b$$ ( proximal method )
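The translated-proximal rule 4 can be verified directly. A minimal NumPy sketch (assumed helper names, not the ccpi API), again with $$F(x) = \|x\|_2^2$$:

```python
import numpy as np

# Illustrative sketch: proximal of G(x) = F(x - b) via the rule
# prox_{tau G}(x) = prox_{tau F}(x - b) + b, with F(x) = ||x||_2^2.

def prox_F(x, tau):
    # closed-form prox of F(x) = ||x||_2^2
    return x / (1.0 + 2.0 * tau)

def prox_G(x, tau, b):
    # rule 4 for the translated function
    return prox_F(x - b, tau) + b

b = np.array([0.5, -1.0])
x = np.array([2.0, 3.0])
tau = 0.7
u = prox_G(x, tau, b)

# optimality check: u minimises tau*||u - b||^2 + 0.5*||u - x||^2,
# i.e. 2*tau*(u - b) + (u - x) = 0
residual = 2 * tau * (u - b) + (u - x)
```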
__call__(x)[source]

Returns the value of the translated function.

$G(x) = F(x - b)$
__init__(function, center)[source]

Initialize self. See help(type(self)) for accurate signature.

convex_conjugate(x)[source]

Returns the convex conjugate of the translated function.

$G^{*}(x^{*}) = F^{*}(x^{*}) + <x^{*}, b >$
gradient(x, out=None)[source]

Returns the gradient of the translated function.

$G'(x) = F'(x - b)$
proximal(x, tau, out=None)[source]

Returns the proximal operator of the translated function.

$\mathrm{prox}_{\tau G}(x) = \mathrm{prox}_{\tau F}(x-b) + b$
class ccpi.optimisation.functions.SumFunctionScalar(function, constant)[source]

SumFunctionScalar represents the sum of a function with a scalar.

$(F + scalar)(x) = F(x) + scalar$

Although SumFunction has no general expressions for

1. convex_conjugate
2. proximal
3. proximal_conjugate

if the second argument is a ConstantFunction then we can derive the above analytically.

__init__(function, constant)[source]

Initialize self. See help(type(self)) for accurate signature.

convex_conjugate(x)[source]

Returns the convex conjugate of a $$(F+scalar)$$

$(F+scalar)^{*}(x^{*}) = F^{*}(x^{*}) - scalar$
proximal(x, tau, out=None)[source]

Returns the proximal operator of $$F+scalar$$

$\mathrm{prox}_{\tau (F+scalar)}(x) = \mathrm{prox}_{\tau F}(x)$


### Composition of operator and a function¶

This class allows the user to write a function which does the following:

$F ( x ) = G ( Ax )$

where $$A$$ is an operator. For instance, the least squares function Norm2Sq can be expressed as

$F(x) = || Ax - b ||^2_2$
class ccpi.optimisation.functions.FunctionOperatorComposition(function, operator)[source]

Composition of a function with an operator as : $$(F \otimes A)(x) = F(Ax)$$

Parameters: function (Function) – function F; operator (Operator) – operator A

For a general operator, there are no explicit formulas for convex_conjugate, proximal and proximal_conjugate

__call__(x)[source]

Returns $$F(Ax)$$

__init__(function, operator)[source]

creator

Parameters: function (Function) – function F; operator (Operator) – operator A
gradient(x, out=None)[source]

Returns the gradient of F(Ax),

$(F(Ax))' = A^{T}F'(Ax)$
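The chain rule above can be checked against finite differences. A plain NumPy sketch (not the ccpi classes), with $$F(y) = \|y - b\|_2^2$$:

```python
import numpy as np

# Illustrative check of (F(Ax))' = A^T F'(Ax) for F(y) = ||y - b||_2^2,
# comparing the analytic gradient with central finite differences.

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x = rng.standard_normal(3)

def composed(x):
    return np.sum((A @ x - b) ** 2)

# analytic gradient: A^T F'(Ax) with F'(y) = 2(y - b)
grad = A.T @ (2.0 * (A @ x - b))

# central finite-difference gradient
eps = 1e-6
fd = np.array([(composed(x + eps * e) - composed(x - eps * e)) / (2 * eps)
               for e in np.eye(3)])
```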

### Indicator box¶

class ccpi.optimisation.functions.IndicatorBox(lower=-inf, upper=inf)[source]

Indicator function for box constraint

$\begin{split}f(x) = \mathbb{I}_{[a, b]} = \begin{cases} 0, \text{ if } x \in [a, b] \\ \infty, \text{otherwise} \end{cases}\end{split}$
__call__(x)[source]

Evaluates IndicatorBox at x

__init__(lower=-inf, upper=inf)[source]

creator

Parameters: lower (float, default=-numpy.inf) – lower bound; upper (float, default=numpy.inf) – upper bound
convex_conjugate(x)[source]

Convex conjugate of IndicatorBox at x

gradient(x)[source]

Returns the value of the gradient of function F at x, if it is differentiable

$F'(x)$
proximal(x, tau, out=None)[source]

Proximal operator of IndicatorBox at x

$\mathrm{prox}_{\tau f}(x)$
proximal_conjugate(x, tau, out=None)[source]

Proximal operator of the convex conjugate of IndicatorBox at x:

$\mathrm{prox}_{\tau f^{*}}(x)$
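The proximal operator of a box indicator is simply the orthogonal projection onto the box, independent of tau. A minimal NumPy stand-in (not the ccpi implementation):

```python
import numpy as np

# Illustrative sketch: prox of the indicator of [lower, upper] is
# clipping; tau plays no role because projecting onto the box already
# minimises 0.5*||u - x||^2 subject to lower <= u <= upper.

def prox_indicator_box(x, tau, lower=-np.inf, upper=np.inf):
    return np.clip(x, lower, upper)

x = np.array([-2.0, 0.3, 5.0])
p = prox_indicator_box(x, tau=10.0, lower=0.0, upper=1.0)
# components below 0 map to 0, above 1 map to 1, the rest are unchanged
```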

### KullbackLeibler¶

class ccpi.optimisation.functions.KullbackLeibler(**kwargs)[source]

Kullback Leibler divergence function is defined as:

$\begin{split}F(u, v) = \begin{cases} u \log(\frac{u}{v}) - u + v & \mbox{ if } u > 0, v > 0\\ v & \mbox{ if } u = 0, v \ge 0 \\ \infty, & \mbox{otherwise} \end{cases} \end{split}$

where we use the $$0\log0 := 0$$ convention.

At the moment, we use the built-in implementation from SciPy, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.kl_div.html

The Kullback-Leibler function is used as a fidelity term in minimisation problems where the acquired data follow a Poisson distribution. If we denote the acquired data by $$b$$, then we write

$\underset{i}{\sum} F(b_{i}, (v + \eta)_{i})$

where $$\eta$$ is an additional noise term.

Example: In the case of Positron Emission Tomography reconstruction $$\eta$$ represents scatter and random events contribution during the PET acquisition. Hence, in that case the KullbackLeibler fidelity measures the distance between $$\mathcal{A}v + \eta$$ and acquisition data $$b$$, where $$\mathcal{A}$$ is the projection operator.

This is related to the PoissonLogLikelihoodWithLinearModelForMean definition used in PET reconstruction in the PET-MR software, see https://github.com/CCPPETMR; for more details see

http://stir.sourceforge.net/documentation/doxy/html/classstir_1_1PoissonLogLikelihoodWithLinearModelForMean.html

__call__(x)[source]

Returns the value of the KullbackLeibler function at $$(b, x + \eta)$$. To avoid infinite values, we consider only pixels/voxels where $$x+\eta > 0$$.

__init__(**kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

convex_conjugate(x)[source]

Returns the value of the convex conjugate of the KullbackLeibler function at $$(b, x + \eta)$$.

$F^{*}(b, x + \eta) = - b \log(1-x^{*}) - <x^{*}, \eta>$
gradient(x, out=None)[source]

Returns the value of the gradient of the KullbackLeibler function at $$(b, x + \eta)$$.

$F'(b, x + \eta) = 1 - \frac{b}{x+\eta}$

We require $$x+\eta>0$$, otherwise we get inf values.

log(datacontainer)[source]

calculates the in-place log of the datacontainer

proximal(x, tau, out=None)[source]

Returns the value of the proximal operator of the KullbackLeibler function at $$(b, x + \eta)$$.

$\mathrm{prox}_{\tau F}(x) = \frac{1}{2}\bigg( (x - \eta - \tau) + \sqrt{ (x + \eta - \tau)^2 + 4\tau b} \bigg)$

The proximal for the convex conjugate of $$F$$ is

$\mathrm{prox}_{\tau F^{*}}(x) = \frac{1}{2} \left( (z + 1) - \sqrt{(z-1)^{2} + 4 \tau b} \right)$

where $$z = x + \tau \eta$$

proximal_conjugate(x, tau, out=None)[source]

Proximal operator of the convex conjugate of KullbackLeibler at x:

$prox_{\tau * f^{*}}(x)$
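The closed-form proximal above can be sanity-checked against the first-order optimality condition of $$\tau F(b, u+\eta) + \frac{1}{2}\|u - x\|^2$$, namely $$u - x + \tau(1 - b/(u+\eta)) = 0$$. A plain NumPy sketch (not the ccpi implementation):

```python
import numpy as np

# Illustrative sketch of the KullbackLeibler proximal step, checked
# against its first-order optimality condition.

def kl_prox(x, tau, b, eta):
    # prox_{tau F}(x) = 0.5*((x - eta - tau) + sqrt((x + eta - tau)^2 + 4*tau*b))
    return 0.5 * ((x - eta - tau) + np.sqrt((x + eta - tau) ** 2 + 4.0 * tau * b))

b = np.array([3.0, 1.0, 2.0])     # Poisson-distributed data
eta = np.array([0.1, 0.2, 0.3])   # additive background term
x = np.array([2.0, 0.5, 4.0])
tau = 0.4

u = kl_prox(x, tau, b, eta)
# stationarity of tau*F(b, u + eta) + 0.5*||u - x||^2:
residual = u - x + tau * (1.0 - b / (u + eta))
```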

### L1 Norm¶

class ccpi.optimisation.functions.L1Norm(**kwargs)[source]

L1Norm function

Consider the following cases:
1. $F(x) = ||x||_{1}$
2. $F(x) = ||x - b||_{1}$
__call__(x)[source]

Returns the value of the L1Norm function at x.

Consider the following cases:
1. $F(x) = ||x||_{1}$
2. $F(x) = ||x - b||_{1}$
__init__(**kwargs)[source]

creator

Cases considered (with/without data): a) $$f(x) = ||x||_{1}$$ b) $$f(x) = ||x - b||_{1}$$

Parameters: b (DataContainer, optional) – translation of the function
convex_conjugate(x)[source]

Returns the value of the convex conjugate of the L1Norm function at x. Here, we need the convex conjugate of the L1Norm, which is the indicator function of the unit $$L^{\infty}$$ norm ball.

Consider the following cases:

1. $F^{*}(x^{*}) = \mathbb{I}_{\{\|\cdot\|_{\infty}\leq1\}}(x^{*})$
2. $F^{*}(x^{*}) = \mathbb{I}_{\{\|\cdot\|_{\infty}\leq1\}}(x^{*}) + <x^{*},b>$
$\begin{split}\mathbb{I}_{\{\|\cdot\|_{\infty}\leq1\}}(x^{*}) = \begin{cases} 0, \mbox{if } \|x^{*}\|_{\infty}\leq1\\ \infty, \mbox{otherwise} \end{cases}\end{split}$
proximal(x, tau, out=None)[source]

Returns the value of the proximal operator of the L1Norm function at x.

Consider the following cases:

1. $\mathrm{prox}_{\tau F}(x) = \mathrm{ShrinkOperator}(x)$
2. $\mathrm{prox}_{\tau F}(x) = \mathrm{ShrinkOperator}(x - b) + b$

where,

$\mathrm{ShrinkOperator}(x) = \mathrm{sgn}(x) * \max\{ |x| - \tau, 0 \}$
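The shrink (soft-thresholding) operator is easy to express directly. A minimal NumPy sketch of case 1 (not the ccpi implementation):

```python
import numpy as np

# Illustrative sketch: soft-thresholding, the proximal operator of the
# L1 norm (case without translation).

def shrink(x, tau):
    # sgn(x) * max(|x| - tau, 0), applied element-wise
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([3.0, -0.5, 1.2])
p = shrink(x, tau=1.0)
# entries with |x| <= tau are zeroed; the rest move tau towards zero
```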

### Squared L2 norm¶

class ccpi.optimisation.functions.L2NormSquared(**kwargs)[source]

L2NormSquared function: $$F(x) = \| x\|^{2}_{2} = \underset{i}{\sum}x_{i}^{2}$$

Following cases are considered:

1. $$F(x) = \|x\|^{2}_{2}$$
2. $$F(x) = \|x - b\|^{2}_{2}$$

Note

For case 2. we can use F = L2NormSquared().centered_at(b), see TranslateFunction.

Example:

>>> F = L2NormSquared()
>>> F = L2NormSquared(b=b)
>>> F = L2NormSquared().centered_at(b)
__call__(x)[source]

Returns the value of the L2NormSquared function at x.

Following cases are considered:

1. $$F(x) = \|x\|^{2}_{2}$$
2. $$F(x) = \|x - b\|^{2}_{2}$$
__init__(**kwargs)[source]

creator

Cases considered (with/without data):
1. $f(x) = \|x\|^{2}_{2}$
2. $f(x) = \|x - b\|^{2}_{2}$
Parameters: b (DataContainer, optional) – translation of the function
convex_conjugate(x)[source]

Returns the value of the convex conjugate of the L2NormSquared function at x.

Consider the following cases:

1. $F^{*}(x^{*}) = \frac{1}{4}\|x^{*}\|^{2}_{2}$
2. $F^{*}(x^{*}) = \frac{1}{4}\|x^{*}\|^{2}_{2} + <x^{*}, b>$
gradient(x, out=None)[source]

Returns the value of the gradient of the L2NormSquared function at x.

Following cases are considered:

1. $$F'(x) = 2x$$
2. $$F'(x) = 2(x-b)$$
proximal(x, tau, out=None)[source]

Returns the value of the proximal operator of the L2NormSquared function at x.

Consider the following cases:

1. $\mathrm{prox}_{\tau F}(x) = \frac{x}{1+2\tau}$
2. $\mathrm{prox}_{\tau F}(x) = \frac{x-b}{1+2\tau} + b$
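Both proximal cases have simple closed forms; case 2 follows from case 1 via the translation rule. A plain NumPy sketch (not the ccpi class):

```python
import numpy as np

# Illustrative sketch: proximal operator of L2NormSquared, with and
# without a translation b.

def prox_l2sq(x, tau, b=None):
    if b is None:
        return x / (1.0 + 2.0 * tau)            # case 1
    return (x - b) / (1.0 + 2.0 * tau) + b      # case 2 (translation rule)

x = np.array([4.0, -2.0])
b = np.array([1.0, 1.0])
tau = 0.5
u = prox_l2sq(x, tau, b)

# optimality for case 2: 2*tau*(u - b) + (u - x) = 0
residual = 2 * tau * (u - b) + (u - x)
```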

### Least Squares¶

class ccpi.optimisation.functions.LeastSquares(A, b, c=1.0)[source]

Least Squares function

$F(x) = c\|Ax-b\|_2^2$

Parameters:

A : Operator

c : Scaling Constant

b : Data

L : Lipschitz constant of the gradient of $$F$$, which is $$2c\|A\|_2^2 = 2c\, s_{1}(A)^2$$,

where $$s_{1}(A)$$ is the largest singular value of $$A$$.

__call__(x)[source]

Returns the value of $$F(x) = c\|Ax-b\|_2^2$$

__init__(A, b, c=1.0)[source]

Initialize self. See help(type(self)) for accurate signature.

gradient(x, out=None)[source]

Returns the value of the gradient of $$F(x) = c\|Ax-b\|_2^2$$

$F'(x) = 2cA^T(Ax-b)$
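The gradient formula and the Lipschitz constant fit together: since $$\nabla F(x) - \nabla F(y) = 2cA^{T}A(x-y)$$, the gradient is Lipschitz with constant $$2c\,s_1(A)^2$$. A NumPy sketch checking this bound (not the ccpi class):

```python
import numpy as np

# Illustrative sketch: gradient of F(x) = c*||Ax - b||_2^2 and the
# Lipschitz bound ||grad(x) - grad(y)|| <= L * ||x - y|| with
# L = 2c * s1(A)^2, s1(A) being the largest singular value of A.

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
c = 0.5

def grad(x):
    return 2.0 * c * (A.T @ (A @ x - b))

L = 2.0 * c * np.linalg.norm(A, 2) ** 2   # spectral norm = s1(A)

x, y = rng.standard_normal(4), rng.standard_normal(4)
lhs = np.linalg.norm(grad(x) - grad(y))
rhs = L * np.linalg.norm(x - y)
```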

### Mixed L21 norm¶

class ccpi.optimisation.functions.MixedL21Norm(**kwargs)[source]

MixedL21Norm function: $$F(x) = ||x||_{2,1} = \sum |x|_{2} = \sum \sqrt{ (x^{1})^{2} + (x^{2})^{2} + \dots}$$

where x is a BlockDataContainer, i.e., $$x=(x^{1}, x^{2}, \dots)$$

__call__(x)[source]

Returns the value of the MixedL21Norm function at x.

Parameters: x – BlockDataContainer
__init__(**kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

convex_conjugate(x)[source]

Returns the value of the convex conjugate of the MixedL21Norm function at x.

This is the Indicator function of $$\mathbb{I}_{\{\|\cdot\|_{2,\infty}\leq1\}}(x^{*})$$,

i.e.,

$\begin{split}\mathbb{I}_{\{\|\cdot\|_{2, \infty}\leq1\}}(x^{*}) = \begin{cases} 0, \mbox{if } \|x\|_{2, \infty}\leq1\\ \infty, \mbox{otherwise} \end{cases}\end{split}$

where,

$\|x\|_{2,\infty} = \max\{ \|x\|_{2} \} = \max\{ \sqrt{ (x^{1})^{2} + (x^{2})^{2} + \dots}\}$
proximal(x, tau, out=None)[source]

Returns the value of the proximal operator of the MixedL21Norm function at x.

$\mathrm{prox}_{\tau F}(x) = \frac{x}{\|x\|_{2}}\max\{ \|x\|_{2} - \tau, 0 \}$

where the convention 0 · (0/0) = 0 is used.
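The group soft-thresholding above can be written directly over a block represented as a list of arrays. A NumPy sketch (not the ccpi BlockDataContainer):

```python
import numpy as np

# Illustrative sketch: MixedL21Norm and its proximal operator for a
# two-component block x = (x1, x2), with lists of arrays standing in
# for a BlockDataContainer.

def mixed_l21(blocks):
    # sum over pixels of sqrt((x1)^2 + (x2)^2 + ...)
    return np.sum(np.sqrt(sum(b ** 2 for b in blocks)))

def prox_mixed_l21(blocks, tau):
    norm = np.sqrt(sum(b ** 2 for b in blocks))
    safe = np.where(norm > 0, norm, 1.0)
    # 0 * (0/0) := 0 convention where the pointwise norm vanishes
    scale = np.where(norm > 0, np.maximum(norm - tau, 0.0) / safe, 0.0)
    return [scale * b for b in blocks]

x = [np.array([3.0, 0.0]), np.array([4.0, 0.0])]   # pointwise norms: [5, 0]
p = prox_mixed_l21(x, tau=1.0)
# first entry scaled by (5-1)/5 = 0.8; second entry stays 0
```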

# Block Framework¶

Block Operator

## Block Function¶

A block vector of functions; the size of the vector coincides with the number of rows of $$K$$:

\begin{align}\begin{aligned}\begin{split}Kx = \begin{bmatrix} y_{1}\\ y_{2}\\ y_{3}\\ \end{bmatrix}, \quad f = [ f_{1}, f_{2}, f_{3} ]\end{split}\\f(Kx) : = f_{1}(y_{1}) + f_{2}(y_{2}) + f_{3}(y_{3})\end{aligned}\end{align}
class ccpi.optimisation.functions.BlockFunction(*functions)[source]

BlockFunction represents a separable sum function $$F$$ defined as

$F:X_{1}\times X_{2}\cdots\times X_{m} \rightarrow (-\infty, \infty]$

where $$F$$ is the separable sum of functions $$(f_{i})_{i=1}^{m}$$,

$F(x_{1}, x_{2}, \cdots, x_{m}) = \overset{m}{\underset{i=1}{\sum}}f_{i}(x_{i}), \mbox{ with } f_{i}: X_{i} \rightarrow (-\infty, \infty].$

A nice property (due to its separable structure) is that the proximal operator can be decomposed along the proximal operators of each function $$f_{i}$$.

$\mathrm{prox}_{\tau F}(x) = ( \mathrm{prox}_{\tau f_{i}}(x_{i}) )_{i=1}^{m}$

In addition, if $$\tau := (\tau_{1},\dots,\tau_{m})$$, then

$\mathrm{prox}_{\tau F}(x) = ( \mathrm{prox}_{\tau_{i} f_{i}}(x_{i}) )_{i=1}^{m}$
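The separable rule means the block proximal is just the row-wise application of each component proximal. A NumPy sketch (assumed helper names, not the ccpi classes), with $$f_1 = \|\cdot\|_2^2$$ and $$f_2 = \|\cdot\|_1$$:

```python
import numpy as np

# Illustrative sketch of the separable BlockFunction proximal:
# prox_{tau F}(x) = (prox_{tau f_i}(x_i))_{i=1}^m.

def prox_l2sq(x, tau):
    return x / (1.0 + 2.0 * tau)

def prox_l1(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def block_prox(proxes, x_blocks, tau):
    # apply each component prox to the matching row of the block
    return [p(xi, tau) for p, xi in zip(proxes, x_blocks)]

x = [np.array([2.0, -4.0]), np.array([1.5, -0.2])]
out = block_prox([prox_l2sq, prox_l1], x, tau=0.5)
```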
__call__(x)[source]

Returns the value of the BlockFunction $$F$$

$F(x) = \overset{m}{\underset{i=1}{\sum}}f_{i}(x_{i}), \mbox{ where } x = (x_{1}, x_{2}, \cdots, x_{m}), \quad i = 1,2,\dots,m$

Parameter:

x : BlockDataContainer and must have as many rows as self.length

Returns: $$\sum_{i} f_{i}(x_{i})$$

__init__(*functions)[source]

Initialize self. See help(type(self)) for accurate signature.

convex_conjugate(x)[source]

Returns the value of the convex conjugate of the BlockFunction at $$x^{*}$$.

$F^{*}(x^{*}) = \overset{m}{\underset{i=1}{\sum}}f_{i}^{*}(x^{*}_{i})$

Parameter:

x : BlockDataContainer and must have as many rows as self.length
gradient(x, out=None)[source]

Returns the value of the gradient of the BlockFunction function at x.

$F'(x) = [f_{1}'(x_{1}), ... , f_{m}'(x_{m})]$

Parameter:

x : BlockDataContainer and must have as many rows as self.length
proximal(x, tau, out=None)[source]

Proximal operator of the BlockFunction at x:

$\mathrm{prox}_{\tau F}(x) = (\mathrm{prox}_{\tau f_{i}}(x_{i}))_{i=1}^{m}$

Parameter:

x : BlockDataContainer and must have as many rows as self.length
proximal_conjugate(x, tau, out=None)[source]

Proximal operator of the convex conjugate of BlockFunction at x:

$\mathrm{prox}_{\tau F^{*}}(x) = (\mathrm{prox}_{\tau f^{*}_{i}}(x^{*}_{i}))_{i=1}^{m}$

Parameter:

x : BlockDataContainer and must have as many rows as self.length

## Block DataContainer¶

\begin{align}\begin{aligned}x = [x_{1}, x_{2} ]\in (X_{1}\times X_{2})\\y = [y_{1}, y_{2}, y_{3} ]\in(Y_{1}\times Y_{2} \times Y_{3})\end{aligned}\end{align}
class ccpi.framework.BlockDataContainer(*args, **kwargs)[source]

Class to hold DataContainers as column vector

Provides basic algebra between BlockDataContainer’s, DataContainer’s and subclasses and Numbers

1. algebra between BlockDataContainers is element-wise, and only possible if the shapes of the two BlockDataContainers are the same; otherwise it fails
2. algebra between a BlockDataContainer and a list or numpy array works as long as the number of rows of the BlockDataContainer matches the number of elements of the array, independently of whether the BlockDataContainer is nested
3. algebra between a BlockDataContainer and a single DataContainer is possible. It requires all the DataContainers in the block to be compatible with the DataContainer we want to operate with. Should we require that the DataContainer is the same type? Like ImageData or AcquisitionData?
4. algebra between a BlockDataContainer and a Number is possible, and is applied to each element of the BlockDataContainer, even if nested

A = [ [B,C], D ]
A * 3 = [ 3*[B,C], 3*D ] = [ [3*B, 3*C], 3*D ]
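The nested distribution of a scalar can be sketched with plain nested lists of NumPy arrays (an illustrative stand-in, not the ccpi BlockDataContainer):

```python
import numpy as np

# Illustrative sketch: a scalar operation recurses into nested blocks,
# mirroring A = [[B, C], D]; A * 3 = [[3*B, 3*C], 3*D].

def block_scale(block, s):
    if isinstance(block, list):
        return [block_scale(b, s) for b in block]
    return s * block   # leaf: an ordinary array

A = [[np.array([1.0, 2.0]), np.array([3.0])], np.array([4.0, 5.0])]
A3 = block_scale(A, 3)
```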

__iadd__(other)[source]

Inline addition

__idiv__(other)[source]

Inline division

__imul__(other)[source]

Inline multiplication

__isub__(other)[source]

Inline subtraction

__iter__()[source]

BlockDataContainer is Iterable

__itruediv__(other)[source]

Inline truedivision

__radd__(other)[source]

to make sure that this method is called rather than the __mul__ of a numpy array the class constant __array_priority__ must be set > 0 https://docs.scipy.org/doc/numpy-1.15.1/reference/arrays.classes.html#numpy.class.__array_priority__

__rdiv__(other)[source]

Reverse division

to make sure that this method is called rather than the __mul__ of a numpy array the class constant __array_priority__ must be set > 0 https://docs.scipy.org/doc/numpy-1.15.1/reference/arrays.classes.html#numpy.class.__array_priority__

__rmul__(other)[source]

Reverse multiplication

to make sure that this method is called rather than the __mul__ of a numpy array the class constant __array_priority__ must be set > 0 https://docs.scipy.org/doc/numpy-1.15.1/reference/arrays.classes.html#numpy.class.__array_priority__

__rpow__(other)[source]

Reverse power

to make sure that this method is called rather than the __mul__ of a numpy array the class constant __array_priority__ must be set > 0 https://docs.scipy.org/doc/numpy-1.15.1/reference/arrays.classes.html#numpy.class.__array_priority__

__rsub__(other)[source]

Reverse subtraction

to make sure that this method is called rather than the __mul__ of a numpy array the class constant __array_priority__ must be set > 0 https://docs.scipy.org/doc/numpy-1.15.1/reference/arrays.classes.html#numpy.class.__array_priority__

__rtruediv__(other)[source]

Reverse truedivision

to make sure that this method is called rather than the __mul__ of a numpy array the class constant __array_priority__ must be set > 0 https://docs.scipy.org/doc/numpy-1.15.1/reference/arrays.classes.html#numpy.class.__array_priority__

__weakref__

list of weak references to the object (if defined)

add(other, *args, **kwargs)[source]

Algebra: add method of BlockDataContainer with number/DataContainer or BlockDataContainer

Parameters: other – number, DataContainer (or subclass) or BlockDataContainer; out (optional) – provides a placeholder for the result.
axpby(a, b, y, out, dtype=<class 'numpy.float32'>, num_threads=2)[source]

Performs axpby element-wise on the BlockDataContainer containers.

Does the operation $$ax + by$$ and stores the result in out, where x is self.

Parameters: a – scalar; b – scalar; y – compatible (Block)DataContainer; out – (Block)DataContainer to store the result; dtype – optional, data type of the DataContainers
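The axpby operation can be sketched over a block represented as a list of arrays (an illustrative stand-in for the BlockDataContainer behaviour, ignoring `dtype` and `num_threads`):

```python
import numpy as np

# Illustrative sketch: axpby computes a*x + b*y container-by-container,
# writing into a preallocated out block.

def block_axpby(a, b, x_blocks, y_blocks, out_blocks):
    for xi, yi, oi in zip(x_blocks, y_blocks, out_blocks):
        np.multiply(xi, a, out=oi)   # oi = a*x
        oi += b * yi                 # oi = a*x + b*y

x = [np.array([1.0, 2.0]), np.array([3.0])]
y = [np.array([10.0, 20.0]), np.array([30.0])]
out = [np.empty_like(xi) for xi in x]
block_axpby(2.0, 0.5, x, y, out)
# out now holds 2*x + 0.5*y, row by row
```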
binary_operations(operation, other, *args, **kwargs)[source]

Algebra: generic method for algebraic operations of BlockDataContainer with a number, DataContainer or BlockDataContainer

Provides commutativity with DataContainer and subclasses, i.e. this class's reverse algebraic methods take precedence over the direct algebraic methods of DataContainer and subclasses.

This method is not to be used directly

copy()[source]

alias of clone

divide(other, *args, **kwargs)[source]

Algebra: divide method of BlockDataContainer with number/DataContainer or BlockDataContainer

Parameters: other – number, DataContainer (or subclass) or BlockDataContainer; out (optional) – provides a placeholder for the result.
is_compatible(other)[source]

Basic check that the sizes of the two objects fit

maximum(other, *args, **kwargs)[source]

Algebra: maximum method of BlockDataContainer with number/DataContainer or BlockDataContainer

Parameters: other – number, DataContainer (or subclass) or BlockDataContainer; out (optional) – provides a placeholder for the result.
minimum(other, *args, **kwargs)[source]

Algebra: minimum method of BlockDataContainer with number/DataContainer or BlockDataContainer

Parameters: other – number, DataContainer (or subclass) or BlockDataContainer; out (optional) – provides a placeholder for the result.
multiply(other, *args, **kwargs)[source]

Algebra: multiply method of BlockDataContainer with number/DataContainer or BlockDataContainer

Parameters: other – number, DataContainer (or subclass) or BlockDataContainer; out (optional) – provides a placeholder for the result.
next()[source]

Python 2 backwards compatibility

power(other, *args, **kwargs)[source]

Algebra: power method of BlockDataContainer with number/DataContainer or BlockDataContainer

Parameters: other – number, DataContainer (or subclass) or BlockDataContainer; out (optional) – provides a placeholder for the result.
subtract(other, *args, **kwargs)[source]

Algebra: subtract method of BlockDataContainer with number/DataContainer or BlockDataContainer

Parameters: other – number, DataContainer (or subclass) or BlockDataContainer; out (optional) – provides a placeholder for the result.
unary_operations(operation, *args, **kwargs)[source]

Unary operation on BlockDataContainer:

generic method of unary operation with BlockDataContainer: abs, sign, sqrt and conjugate

This method is not to be used directly
