
Optimal control theory has been important in finance (Islam and Craven [38, 2002]; Tapiero [81, 1998]; Ziemba and Vickson [89, 1975]; Sengupta and Fanchon [80, 1997]; Sethi and Thompson [79, 2000]; Campbell, Lo and MacKinlay [7, 1997]; Eatwell, Milgate and Newman [25, 1989]). During nearly fifty years of development and extension, optimal control theories have been used successfully in finance, and many famous models make effective use of them. However, as more workable and accurate solutions are demanded, many real-world problems prove too complex to admit analytical solutions. Computational algorithms therefore become essential tools for most optimal control problems, including dynamic optimization models in finance.
Optimal control modeling, both deterministic and stochastic, is probably one of the most crucial areas in finance, given the time-series character of financial systems' behavior. It is also a fast-growing area of sophisticated academic interest as well as practice, using analytical as well as computational techniques. However, the existing literature has limits in some areas where improvements are needed. It will benefit the discipline if dynamic optimization in finance reaches the same level of development in modeling as the modeling of optimal economic growth (Islam [36, 2001]; Chakravarty [9, 1969]; Leonard and Long [52, 1992]). These areas are: (a) specification of the elements of the dynamic optimization models; (b) the structure of the dynamic financial system; (c) mathematical structure; and (d) computational methods and programs. While Islam and Craven [38, 2002] have recently made some extensions to these areas, their work does not explicitly focus on bang-bang control models in finance. The objective of this book is to present some suggested improvements in modeling bang-bang control in finance in the deterministic optimization strand by extending the existing literature.
In this chapter, a typical general financial optimal control model is given in Section 1.1 to explain the formulation of optimal control problems and the accompanying optimal control theories. In addition, some classical concepts in operations research and well-known standard optimal control theories are introduced in Sections 1.2-1.5, and a brief description of how they are applied to financial optimal control problems is also given. In Section 1.6, some improvements that are needed to meet the higher requirements of complex real-world problems are presented. In Section 1.7, the algorithms for similar optimal control problems achieved by other researchers are discussed. Critical comparisons are made between the methods used in this research and those employed in others' work, and their relative advantages and disadvantages are shown to motivate the present research work.

1. An Optimal Control Model of Finance
Consider a financial optimal control model:

subject to:

Here the state and the control are functions of time, and the time T is the "planning horizon". The differential equation describes the dynamics of the financial system; it determines the state trajectory from the chosen control. It is required to find an optimal control which minimizes the objective. (We may consider the objective as a cost function.) Although the control problem is stated here (and in other chapters of this book) as a minimization problem, many financial optimization models are in a maximization form. Detailed discussion of control theory applications to finance may be found in Sethi and Thompson [79, 2000].
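A standard fixed-horizon specification consistent with this description, in notation assumed here for illustration (x the state, u the control, f the running cost, Φ the terminal cost, m the dynamics), is:

```latex
% A standard Bolza-type form; the symbols x, u, f, \Phi, m and \Gamma
% are assumed notation for illustration, not taken from the text.
\min_{u(\cdot)}\ J(u) \;=\; \Phi\bigl(x(T)\bigr)
    + \int_0^T f\bigl(x(t), u(t), t\bigr)\,dt
\quad \text{subject to} \quad
\dot{x}(t) = m\bigl(x(t), u(t), t\bigr), \qquad
x(0) = x_0, \qquad u(t) \in \Gamma(t).
```

This is a sketch of the general form only, with an objective, a dynamic equation, an initial condition, and a control constraint.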
Often, the control is taken as a piecewise continuous function (note that jumps are always needed to reach an optimum if the problem is linear in the control), and then the state is a piecewise smooth function. A financial optimal control model is represented by the formulas (1.1)-(1.4), and the "maximum principle" of Pontryagin theory [69, 1962] can always be applied. In a standard optimal control problem, the cost function is usually the sum of an integral cost and a terminal cost, as can be found in references [1, 1988], [5, 1975], and [8, 1983]. However, for a large class of reasonable cases, there are often no available results from standard theories, and a more acceptable method is needed.
In Blatt [2, 1976], some cost is added when switching the control. The cost can be wear and tear on the switching mechanism, or the cost of "loss of confidence in a stop-go national economy". A cost is associated with each switching of the control. The optimal control problem in financial decision making with a cost of switching control can be described as follows:

subject to:

Here the positive constant is the cost of each switch, and the integer counts the number of times the control jumps during the planning period [0, T]. In particular, the control may be piecewise constant, and thus can be approximated by a step-function. Only fixed-time optimal control problems are considered in this book, which means T is a constant. Although time-optimal control problems are also very interesting, they have not been considered in this research.
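In notation assumed here for illustration (Φ the terminal cost, f the running cost, c the cost per switch, N the number of jumps), the objective with a cost of switching control takes the form:

```latex
% c > 0 is the cost per switch and N the number of jumps of the control
% on [0, T]; both names are assumed here for illustration.
\min_{u(\cdot),\,N}\ J(u) \;=\; \Phi\bigl(x(T)\bigr)
    + \int_0^T f\bigl(x(t), u(t), t\bigr)\,dt \;+\; c\,N .
```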
The essential elements of an optimal control model (see Islam [36, 2001]) are: (i) an optimization criterion; (ii) the form of inter-temporal time preference or discounting; (iii) the structure of the dynamic system under modeling; and (iv) the initial and terminal conditions. Although the literature on the methodologies and practices in the specification of the elements of an optimal control model is well developed in various areas of economics (such as optimal growth, see Islam [36, 2001]; Chakravarty [9, 1969]), the literature in finance in this area is not fully developed (see dynamic optimization modeling in Tapiero [81, 1998]; Sengupta and Fanchon [80, 1997]; Ziemba and Vickson [89, 1975]). The rationale for and requirements of the specification of the above four elements of dynamic optimization financial models are not provided in the existing literature except in Islam and Craven [38, 2002]. In the present study, the mainstream practices are adopted: (i) an optimization criterion (of different types in different models); (ii) the form of inter-temporal time preference (positive or zero discounting); (iii) the structure of the dynamic system under modeling (linear or non-linear); and (iv) the initial and terminal conditions (of various types in different models).
Optimal control models in finance can take different forms, including the following: bang-bang control models, deterministic and stochastic models, finite and infinite horizon models, aggregative and disaggregative models, closed- and open-loop models, overtaking or multi-criteria models, time-optimal models, overlapping generation models, etc. (see Islam [36, 2001]).
Islam and Craven [38, 2002] have proposed some extensions to the methodology of dynamic optimization in finance. The proposed extensions in the computation and modeling of optimal control in finance have shown the need and potential for further areas of study in financial modeling. The potential lies in both the mathematical structure and the computational aspects of dynamic optimization. These extensions will make dynamic financial optimization models relatively more organized and coordinated, and they have potential applications in academic and practical exercises. This book reports initial efforts in providing some useful extensions; further work is necessary to complete the research agenda.
Optimal control models have applications to a wide range of areas in finance: optimal portfolio choice, optimal corporate finance, financial engineering, stochastic finance, valuation, optimal consumption and investment, financial planning, risk management, cash management, etc. (Tapiero [81, 1998]; Sengupta and Fanchon [80, 1997]; Ziemba and Vickson [89, 1975]). As it is difficult to cover all these applications in one volume, two important areas in financial applications of optimal control models, optimal investment planning for the economy and optimal corporate financing, are considered in this book.

2. (Karush-)Kuhn-Tucker Condition
The Karush-Kuhn-Tucker (KKT) condition is the necessary condition for a local minimum of a minimization problem (see [14, 1995]). The Hamiltonian of the Pontryagin maximum principle is based on a similar idea, applied to optimal control problems.
Consider a general mathematical programming problem:

subject to:

The Lagrangian is:

The (Karush-)Kuhn-Tucker conditions necessary for a (local) minimum of the problem are that Lagrange multipliers exist for which the candidate point satisfies the constraints of the problem, and:

The inequality constraints are written here in one fixed direction, and the corresponding multipliers are then non-negative at the minimum; the multipliers of the equality constraints can take any sign. So for minimization problems whose inequality constraints are written in the opposite direction, the sign of the inequalities should be changed first; then the KKT conditions can be applied.
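For concreteness, with the problem written as minimizing f(x) subject to g(x) ≥ 0 and h(x) = 0 (symbols assumed here, matching the sign convention in the text), the Lagrangian and the conditions read:

```latex
% Assumed notation: f objective, g inequality constraints (written >= 0),
% h equality constraints, \lambda and \mu the multipliers.
L(x, \lambda, \mu) \;=\; f(x) \;-\; \lambda^{\mathsf T} g(x) \;+\; \mu^{\mathsf T} h(x),
\qquad
\nabla_x L(\bar{x}, \lambda, \mu) = 0, \quad
\lambda \ge 0, \quad
\lambda^{\mathsf T} g(\bar{x}) = 0, \quad
g(\bar{x}) \ge 0, \quad h(\bar{x}) = 0 .
```

The complementary slackness condition λᵀg(x̄) = 0 expresses that a multiplier can be positive only when its constraint is active.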
The conditions for a global minimum are that the objective and constraint functions are differentiable, satisfy the constraint qualifications, and are respectively convex and concave. If these functions are strictly convex, the minimum is also unique. Further extensions of these conditions have been made for cases where the objective and constraint functions are quasi-convex or invex (see Islam and Craven [37, 2001]).
Duality properties of the programming models in finance not only provide useful information for computing purposes, but also help in determining efficiency or shadow prices of financial instruments.

3. Pontryagin Theorem
The Pontryagin theorem was first introduced in Pontryagin [69, 1962].
Consider a minimization problem in finance given as follows:

where T is the planning horizon, subject to a differential equation and a constraint:

Here (1.23) represents the differential equation, and (1.24) represents the constraint on the control.
Let the optimal control problem (1.22) reach a (local) minimum at an optimal state-control pair, and assume that the cost integrand and the dynamics are partially Fréchet differentiable with respect to the state, uniformly in the control, near the optimum. The Hamiltonian is as follows:

The necessary conditions for the minimum are:
(a) a co-state function satisfies the adjoint differential equation:

with its boundary condition;
(b) the associated problem (minimizing the Hamiltonian with respect to the control) is minimized at the optimal control for all times, except on a null set.
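In standard notation (f the cost integrand, m the dynamics, Φ the terminal cost, λ the co-state; names assumed here for illustration), the Hamiltonian, the adjoint equation with its boundary condition, and the minimum condition read:

```latex
H(x, u, \lambda, t) \;=\; f(x, u, t) \;+\; \lambda(t)^{\mathsf T} m(x, u, t),
\qquad
\dot{\lambda}(t) = -\,\frac{\partial H}{\partial x},
\qquad
\lambda(T) = \nabla\Phi\bigl(x(T)\bigr),
```

with the optimal control minimizing H(x*(t), ·, λ(t), t) over the feasible control set for almost all t.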

4. Bang-Bang Control
In some optimal control problems, when the dynamic equation is linear in the control, bang-bang control (control that jumps only between the extreme points of the feasible region defined by the constraints on the control) is likely to be optimal. Here a small example is used to explain this concept. Consider the following constraints on the control:

In this case the control is restricted to the area of a triangle. The control which stays only on the vertices of the triangle, jumping from one vertex to another at the successive switching times, is the optimal solution. This kind of optimal control is called bang-bang control. The concept can also be used to explain a scalar control with two bounds:

where the optimal control takes only one of the two extreme values, depending on the initial value of the control. This control is also a bang-bang control.
In this research, only bang-bang optimal control models in finance are considered. Sometimes a singular arc (see Section 1.5) might occur following a bang-bang control in a particular situation, so the possibility of a singular arc occurring should always be checked after a bang-bang control is obtained in the first stage.
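The vertex-jumping behavior, and the importance of the switching times, can be sketched numerically. The following illustrative problem is constructed for this sketch and is not taken from the text: the state starts at zero with derivative equal to the control, the control is bounded by one in absolute value, the integral of the state over [0, 2] is minimized, and the terminal condition that the state returns to zero is enforced by a large quadratic penalty with weight `rho` (an assumption of this sketch). The optimal control is bang-bang, equal to -1 before a single switching time and +1 after it, and the best switching time is 1.

```python
# Illustrative bang-bang problem (constructed for this sketch):
#   minimize  integral_0^2 x(t) dt   with  x'(t) = u(t), |u| <= 1, x(0) = 0,
# and the target x(2) = 0 enforced by a large quadratic penalty rho.
# The control is u = -1 before the switching time ts and u = +1 after.
def cost(ts, rho=1000.0):
    # With u = -1 then +1: x(t) = -t for t <= ts and x(t) = t - 2*ts after,
    # so integral_0^2 x dt = ts^2 - 4*ts + 2 in closed form,
    # and the terminal state is x(2) = 2 - 2*ts.
    return ts * ts - 4.0 * ts + 2.0 + rho * (2.0 - 2.0 * ts) ** 2

# One-dimensional grid search over the switching time on [0, 2].
best_ts = min((i * 0.001 for i in range(2001)), key=cost)
print(round(best_ts, 3))  # close to 1.0: switch halfway, so x returns to 0
```

Because the Hamiltonian is linear in the control here, only the switching time matters, which is why the search is one-dimensional.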

5. Singular Arc
As mentioned earlier, when the objective function and the dynamic equation are linear in the control, a singular arc might occur following the bang-bang control. In that case, the coefficient of the control in the associated problem equals zero, so minimizing the Hamiltonian no longer determines the control. After discovering a bang-bang control solution, it is therefore necessary to check whether a singular arc exists.
In what kind of situation will a singular arc occur? Only when the coefficient of the control in the associated problem happens to be identically zero over some interval of time. The optimum path on such an interval is called a singular arc; on such an arc, the associated problem gives no information about the control. A singular arc is very common in real-world trajectory-following problems.
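In the standard treatment (switching function σ, control u; names assumed here), a singular arc is an interval on which the coefficient of the control in the Hamiltonian vanishes identically:

```latex
\sigma(t) \;=\; \frac{\partial H}{\partial u}\bigl(x^*(t), \lambda(t), t\bigr) \;=\; 0
\qquad \text{for all } t \in [t_1, t_2],
```

so minimizing the Hamiltonian gives no information about the control there; the singular control is usually recovered by differentiating σ(t) = 0 repeatedly along the trajectory until the control appears explicitly.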

6. Indifference Principle
In Blatt [2, 1976], certain financial optimal control models concerned with optimal control with a cost of switching control were discussed, and an optimal policy was proved to exist. Also, the maximum principle of Pontryagin theory was replaced by a weaker condition (the "indifference principle"), and several new theorems were developed for solving optimal control problems with a cost of changing control. This research deals with an optimal control problem with a cost of changing control. Although the "indifference principle" is not employed for solving the optimal control problems in this study, it is still important to introduce it for understanding the ideas of this research.
Consider a financial optimal control model as follows:

subject to:


where the control setting at each time is 0 or 1; the non-negative integer is the number of times the control alters during the time horizon T; and the switching times, at which the control alters, satisfy:

Given the policy P, the control function is shown in (1.30). Now the Hamiltonian of Pontryagin theory is constructed as follows:

The co-state equation is:

The end-point condition:

Theorem 2. An admissible optimal policy exists. See the proof in reference [2, 1976].
Theorem 5. The indifference principle: Let P (1.32) be an optimal policy, and let H and the co-state be defined by (1.34), (1.35) and (1.36). Then at each switching time, the Hamiltonian H is indifferent to the choice of the control, which is:

The relationship between the "maximum principle" and the "indifference principle" is that the "indifference principle" is implied by the "maximum principle". The "maximum principle" imposes a stronger condition: the control is forced to switch by the "maximum principle" when the phase-space orbit crosses the indifference curve (1.37), while the control is merely allowed to change by the "indifference principle" at the same point. That means the control can stay the same in a region of phase space and still be optimal; it is not allowed to change value until reaching the indifference curve (1.37) again. The approach is workable even though the weaker theory admits more candidate optimal control paths. When a cost of switching control exists, the "maximum principle" should be replaced by the "indifference principle", and an optimal control will still exist. The existence of an optimal solution is proved by Theorem 2 in Blatt's paper.
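Since the control setting is 0 or 1, condition (1.37) can be read as equality of the Hamiltonian at the two settings (co-state λ and switching times t_k; notation assumed here):

```latex
H\bigl(x(t_k), \lambda(t_k), 0, t_k\bigr)
\;=\;
H\bigl(x(t_k), \lambda(t_k), 1, t_k\bigr)
\qquad \text{at each switching time } t_k .
```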
This research originated from Blatt's work. The goal of this book involves some novel computational algorithms for solving (1.28)-(1.32) based on the theorems in Blatt's paper. The work on cost analysis, differences of optimal control policy sequences, and division of the time intervals extends Blatt's original work. Methods that help the solution avoid being trapped at a local minimum without reaching the global minimum are also discussed. This book is more concerned with using computer software packages to solve the problems than with analysis of the solutions.
There are some methods for computing optimal control (see the next section) which have been successfully applied in many fields. They involve subdividing the interval [0, T] into many (usually equal) subintervals. However, accuracy is lost if the switching times do not lie at the end-points of the equal subintervals. Hence it is essential to compute the optimal switching times.

7. Different Approaches to Optimal Control Problems
With the advance of modern computers and the rapid development of software engineering, more and more people are concerned with computational algorithms which can shorten computing time and provide more accurate results for complex optimal control problems that are difficult to solve analytically. During the last thirty years, many efficient approaches have been developed and successfully applied to many models in a wide range of fields. Several numerical methods are available in the references ([24, 1981]; [56, 1975]; [57, 1986]; [58, 1986]; [76, 1981]; [75, 1980]; [84, 1991]; [85, 1991]). While some typical computing optimal control problems and efficient computational algorithms relevant to the present study will be discussed in the next few sections, the general computational approaches and algorithms for optimal control may be classified as follows (Islam [36, 2001]).
There is a wide range of algorithms which can be used for computing optimal control models in finance; they can be classified into algorithms for continuous and for discrete optimal control models (Islam [36, 2001]). Algorithms for continuous optimal control models in finance include: (i) gradient search methods; (ii) algorithms based on two-point boundary value problems; (iii) dynamic programming and approximate solution methods (steady-state solution, numerical methods based on approximation and perturbation, contraction-mapping based algorithms, and simulation); and (iv) control discretization approaches based on step-functions, splines, etc. Algorithms for discrete optimal control models in finance may be classified as follows: (i) algorithms based on linear and non-linear programming solution methods; (ii) algorithms based on the difference equations of the Pontryagin maximum principle, solved as a two-point boundary value problem; (iii) gradient search methods; (iv) approximation methods; and (v) dynamic programming.
For computing optimal control models in finance, some recently developed computer packages such as SCOM, MATLAB, MATHEMATICA, DUAL, RIOTS, MISER and OCIM can be used.

7.1 OCIM
In reference [15, Craven 1998], a FORTRAN computer software package OCIM (Optimal Control In Melbourne) was discussed for solving a class of fixed-time optimal control problems. The computational method is based on the augmented Lagrangian algorithm discussed in Section 6.3.2 of Craven [14, 1995]. Powell and Hestenes first formulated the augmented Lagrangian for problems with equality constraints; Rockafellar extended it to inequality constraints in paper [74, 1974]. OCIM can be run on a Macintosh as well as on other systems.
The basic idea of this method is to divide the time interval [0, T] into subintervals. The control is approximated by a step-function, constant on each subinterval, as in MISER [33, 1987].
In MISER [33, 1987], Goh and Teo obtained good numerical results using this apparently crude approximation of the control by a step-function, which is called the "control parameterization technique". In Craven [14, 1995], the theory of Sections 7.6 and 7.8 shows that this works when the control system acts as a suitable low-pass filter; the smoothing effect of integrating the state differential equation will also often have this result. Increasing the number of subdivisions leads to greater accuracy. Note that the calculation of the co-state equation requires interpolation of the values of the state; it is done by linear interpolation of the state values at the subdivision points. Linear interpolation is also used for calculating the gradient in this research.
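The step-function idea can be sketched in a few lines of code. The following is a minimal illustration of control parameterization, not the OCIM or MISER implementation: the control is held constant on each of n subintervals, the state equation is integrated by Euler's method, and the resulting finite-dimensional problem is minimized by finite-difference gradient descent. The test problem (scalar state with derivative equal to the control, quadratic running cost, unit horizon, initial state 1) and all names are chosen for this sketch; its exact optimal cost is tanh(1) ≈ 0.762.

```python
# Sketch of the "control parameterization technique": the control is a
# step-function, constant on each of n subintervals of [0, T].  All
# problem data here are illustrative, not from OCIM or MISER.
def simulate(u_pieces, T=1.0, x0=1.0, steps_per_piece=20):
    """Euler-integrate x'(t) = u(t) and accumulate the cost
    integral of x(t)^2 + u(t)^2 over [0, T]."""
    dt = T / (len(u_pieces) * steps_per_piece)
    x, J = x0, 0.0
    for u in u_pieces:
        for _ in range(steps_per_piece):
            J += (x * x + u * u) * dt
            x += u * dt
    return J

def optimize(n=20, iters=200, h=1e-6, lr=2.0):
    """Minimize the discretized objective over the n step values by
    plain finite-difference gradient descent."""
    u = [0.0] * n
    for _ in range(iters):
        J0 = simulate(u)
        grad = []
        for i in range(n):
            perturbed = u[:]
            perturbed[i] += h
            grad.append((simulate(perturbed) - J0) / h)
        u = [ui - lr * gi for ui, gi in zip(u, grad)]
    return u, simulate(u)

u_opt, J_opt = optimize()
print(round(J_opt, 3))  # exact optimal cost is tanh(1), about 0.762
```

Increasing the number of subintervals n refines the step-function approximation of the smooth optimal control, which is exactly the accuracy-versus-work trade-off described in the text.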
