
the approximated control to solve the state equation and obtain the state values at the grid-points of the subintervals; second, solve the co-state equation; third, calculate the augmented Lagrangian; and lastly, calculate the gradients of the cost function with respect to the control. This algorithm was programmed in FORTRAN. Note that jumps in the co-states are not yet implemented. Unconstrained minimization is done using the CONMIN package

[70], which uses the BFGS quasi-Newton algorithm [28, 1980].

Using the augmented Lagrangian allows gradients to be calculated by a single adjoint differential equation, instead of having to code a separate formula for each constraint. In this algorithm, the time interval [0, T] is divided into equal subintervals. A diversity of control problems can be solved by this method, but equality constraints of certain forms present some

12 OPTIMAL CONTROL MODELS IN FINANCE

problems with OCIM. Several famous models were successfully solved by this

method: a damped oscillator (the approaches developed in this research also solve an example of an oscillator problem), the Dodgem problem [29, 1975], and the Fish problem [12, 1976]. In Craven's paper, a useful non-linear transformation of the time scale effectively gives a good subdivision over large time ranges, without requiring a large number of subdivisions, and also avoids computational problems. The details of the transformation

will be described in Section 2.3.
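The augmented-Lagrangian construction described above may be sketched, in outline, as follows. This is an illustrative assumption of the general technique, not OCIM's FORTRAN code; the function names and the multiplier/penalty parameters are hypothetical.

```python
# Illustrative sketch: fold every equality constraint g_i(u) = 0 into
# one scalar function, so a single adjoint computation can yield all
# gradient information instead of a separate formula per constraint.

def augmented_lagrangian(F, constraints, multipliers, mu):
    """Return u -> F(u) + sum_i lam_i*g_i(u) + (mu/2)*g_i(u)**2."""
    def L(u):
        value = F(u)
        for lam, g in zip(multipliers, constraints):
            gu = g(u)
            value += lam * gu + 0.5 * mu * gu * gu
        return value
    return L
```

When a constraint is satisfied its penalty term vanishes, so at a feasible point the augmented Lagrangian reduces to the original cost.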

7.2 RIOTS 95

Like SCOM, another optimal control software package which will be introduced in Section 1.7.5, the RIOTS 95 (Recursive Integration Optimal Trajectory Solver) package [77, 1997] also runs under MATLAB. It

solves optimal control problems by solving (in various ways) the differential

equations that lead to function values and gradients, and then applying a mini-

mizer package. The computing scheme is similar to the one that was described

by Craven [14, 1995] in Section 6.4.5. This package obtains good accuracy

of the switching times by using a sufficiently small subdivision of the time

interval.

7.3 A computational approach for the cost of changing

control

A computational method based on the control parameterization technique

[84, 1991] was introduced in reference [82, 1992] for solving a class of optimal

control problems where the cost function is the sum of integral cost, terminal

cost, and full variation of a control. The full variation of a control is used to

measure changes in the control action.

This method involves three stages of approximation. In the first stage, the

problem is approximated by a sequence of approximation problems based on

the control parameterization technique. Each approximation problem involving the full variation of the control is an optimal parameter selection problem; these problems contain non-smooth terms in the form of a norm. In the second stage, the non-smooth functions are smoothed by a smoothing technique [83, 1988].

Another smoothing technique [42, 1991] is used to approximate the continuous

state inequality constraints by a sequence of conventional canonical constraints

in stage three. All these approximation problems are standard optimal param-

eter selection problems which can be solved by the general optimal control

software package MISER3 [40, 1991]. The method is supported by convergence properties. A calculation of a fishery harvesting model (which was computed in reference [41, 1990]) with a penalty term was used to illustrate the method, and the chosen value of the penalty term was explained in this


paper. A similar situation, called the chosen cost, is also discussed in this book in Section 2.8. An application of an optimal control problem in

which the cost function does not have the full variation of control can also

utilize this method by adjusting the penalty constant appropriately to obtain a

smoother control without changing the optimum of the original cost function.

There are some relevant references [55, 1987], [2, 1976], [66, 1977] for this

problem.

7.4 Optimal switching control computational method

In Li [53, 1995], a computational method was developed for solving a class

of optimal control problems involving the choice of a fixed number of switching time points which divide the system's time horizon [0, T] into K time periods. A subsystem is selected for each time period from a finite number of given candidate subsystems and runs in that time period. All the K subsystems and the K - 1 switching times are taken as decision variables. These decision variables form candidate selection systems, which lead the optimal control problem to a minimum. Here, the optimal

control problem is only considered over a finite time horizon. In this method,

the original problem is transformed into an equivalent optimal control problem

with system parameters by using suitable constraints on the coefficients of the linear combination formed by the candidate subsystems, and

using a time re-scaling technique.
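The time re-scaling idea may be sketched as follows. This is a minimal illustration of the general technique under simple assumptions (explicit Euler, scalar state), not Li's implementation: each of the K switching intervals is mapped onto a unit interval of a new time variable s, so the interval lengths become ordinary parameters multiplying the dynamics.

```python
# Sketch of time re-scaling: the k-th switching interval, of length
# thetas[k], is mapped to the fixed unit s-interval [k, k+1].  By the
# chain rule dx/ds = theta_k * f_k(x), so the switching times become
# plain system parameters and integration runs on a fixed s-grid.

def integrate_rescaled(f_list, thetas, x0, steps_per_interval=100):
    """Explicit Euler over the fixed s-grid [0, K]."""
    h = 1.0 / steps_per_interval
    x = x0
    for f, theta in zip(f_list, thetas):       # k-th unit s-interval
        for _ in range(steps_per_interval):
            x = x + h * theta * f(x)           # dx/ds = theta_k f_k(x)
    return x
```

For example, with two candidate subsystems dx/dt = -x and dx/dt = x run for one time unit each, the decay and growth roughly cancel, and the final state returns near its initial value.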

Many control problems in which the system dynamics are subject to sudden changes can be put into this model. In recent years, general optimal

switching control problems have been studied. The optimality principle is ap-

plied and existence of optimal controls is discussed. Basically the problems

are formulated as optimal control problems, in which the feasible controls are

determined by appropriate switching functions. There are relevant references

available in the bibliography [53, 1995], [87, 1989], [88, 1991], [27, 1979].

This situation involves both discrete and continuous decisions, represented by the subsystems and the switching times. A transformation technique is introduced for solving this mixed discrete and continuous optimal control problem. The basic idea behind this technique is transforming the mixed

continuous-discrete optimal control problem into an optimal parameter selec-

tion problem [84, 1991], which only deals with continuous decision variables.

Since the transformed problem still involves switching times located within subdivisions, which makes the numerical solution of such an optimal control problem difficult, another transformation is introduced to further transform the problem into an optimal control problem with system parameters, in which the lengths of the switching intervals are taken as parameters. The

second equivalent optimal control problem can be solved by standard optimal


control techniques. A numerical problem was solved in Li's paper [53, 1995]

by using this computational method.

This kind of optimal control problem has sudden changes in the dynamics at

switching time, and therefore has a mixed continuous-discrete nature. Switch-

ing times, a time scaling and a control function are introduced to deal with

the discontinuities in the system dynamics. The control function is a piecewise constant function with grid-points corresponding to the discontinuities of

the original problem, hence allowing the general optimal control software to

solve the problem.

7.5 SCOM

In Craven and Islam [18, 2001] (see also Islam and Craven [38, 2002]), a class of optimal control problems in continuous time was solved by a computer software package called SCOM, also using the MATLAB system. As

in the MISER [33, 1987] and OCIM [15, 1998] packages, the control is ap-

proximated by a step-function. Because of the matrix features in MATLAB,

programming is made easier. Finite-difference approximations give some advantages for computing gradients. In this paper, end-point conditions and implicit constraints in some economic models are handled simply.

Consider an optimal control problem of the form:

subject to:

The end-point term and constraints can be handled by a penalty term; its de-

tailed description will be introduced in Section 4.4. The concern here is only

with how this computational algorithm works. The differential equation (1.39), with its initial condition, determines the state from the control.

The interval [0, 1] is then divided into N equal subintervals, and the control is approximated by a step-function taking constant values on the successive subintervals. An extra constraint is added to the given problem, where V is the subspace of such step-functions. Consequently, the state becomes a polynomial function, determined by its values at the grid-points. Since there are discontinuities at the grid-points on the right side of the differential equation, which is thus a non-smooth function of time,


a suitable differential equation solver must be chosen for such functions.

Many standard solvers do not have this feature. Only one of the six ODE

solvers in MATLAB is designed for stiff differential equations. However, this

ODE solver is not used in this book. Instead, a good computer software pack-

age “nqq” is used to deal with the jumps on the right side of the differential

equation in the dynamic system of the optimal control problems. The fourth-order Runge-Kutta method [30, 1965] (a standard method for solving ordinary differential equations) is slightly modified for solving a differential equation whose right side involves a step-function control. The modification simply records the step count and the time, to make sure that the control always takes the value belonging to the current subinterval, not an adjacent one. In the computation, the differential equation is solved forward (starting at the initial time), while the adjoint equation is solved backward (starting at the final time).
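Such a modified Runge-Kutta step may be sketched as follows. This is an assumed illustration of the idea, not the "nqq" code: the control value is looked up by subinterval index, nudged so that a time falling exactly on a grid-point still takes the value of the current subinterval.

```python
# Sketch of RK4 with a step-function control u on [0, T].  The number
# of integration steps should be a multiple of len(u_values), so that
# no Runge-Kutta step straddles a jump in the control.

def u_step(u_values, t, T, eps=1e-12):
    """Value of the step-function control at time t on [0, T]."""
    N = len(u_values)
    k = int((t - eps) * N / T)           # nudge left: a grid-point
    return u_values[max(0, min(k, N - 1))]  # stays in its subinterval

def rk4_forward(f, x0, u_values, T, steps):
    """Integrate dx/dt = f(x, u(t)) forward from 0 to T."""
    h = T / steps
    x, t = x0, 0.0
    for _ in range(steps):
        u = u_step(u_values, t + 0.5 * h, T)   # u is constant per step
        k1 = f(x, u)
        k2 = f(x + 0.5 * h * k1, u)
        k3 = f(x + 0.5 * h * k2, u)
        k4 = f(x + h * k3, u)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x
```

For instance, integrating dx/dt = u with the control +1 on the first half of [0, 2] and -1 on the second half returns the state to its starting value.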

Two steps are introduced to solve such optimal control problems:

1 Compute objective values from the objective function, differential equation, and augmented Lagrangian, without computing gradients from the adjoint equation and Hamiltonian. This assumes that gradients can be estimated by finite differences from the computed objective values.

2 Compute objective values and gradients from the adjoint equation and Hamiltonian.
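The first step may be sketched as follows (an illustrative assumption, not the SCOM code): the minimizer receives only objective values, and a forward finite-difference loop supplies the gradient estimates.

```python
# Sketch of step 1: `objective` stands for the composed map
# u -> F(u) obtained by solving the state equation and evaluating the
# augmented Lagrangian; its gradient is estimated component-wise.

def fd_gradient(objective, u, h=1e-6):
    """Forward-difference gradient of a scalar function of a vector."""
    f0 = objective(u)
    grad = []
    for i in range(len(u)):
        up = list(u)
        up[i] += h              # perturb one component at a time
        grad.append((objective(up) - f0) / h)
    return grad
```

Each gradient estimate costs one extra objective evaluation per component, which is the price paid for avoiding the adjoint equation in this step.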

Implicit constraints: in some economic models, such as the model [48, 1971] to which Craven and Islam have applied the SCOM package in the paper [18, 2001], fractional powers of functions appear in the right side of the differential equation; some choices of the control then lead to state values at which these powers are undefined, causing the solver to crash. The requirement that the state remain in the admissible region forms an implicit constraint. A finite-difference approximation to gradients is useful in this case, as it approximates over a wider domain. As mentioned in Section 1.7.1, linear interpolation can also be used in computing gradients and

co-state functions. Increasing the number of subintervals N gives better results; this will be discussed later in Sections 2.7 and 2.8.
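One simple way of guarding such a fractional power may be sketched as follows. This is an assumed illustration, not the SCOM treatment: the implicit constraint x(t) >= 0 is enforced by clamping inside the dynamics, so a trial control that drives the state slightly negative does not crash the solver.

```python
# Sketch of a guarded right-hand side with a fractional power of the
# state.  The clamp is an assumption for illustration: it keeps the
# power defined for trial points slightly outside the feasible region.

def safe_rhs(x, u):
    x_clamped = max(x, 0.0)          # implicit constraint x >= 0
    return u * x_clamped ** 0.5      # fractional power is now defined
```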

Two problems were tested using SCOM in a paper by Craven and Islam

[18, 2001], and accurate results were found for both. In Craven and Islam [18,

2001], “constr” estimated gradients by finite differences to compute the opti-

mum. The economic models in this paper [18, 2001] with implicit constraints

are different from the models that were solved by other computer software

packages. However, this computer software also needs to be further developed

for more general use.


7.6 Switching Costs Model

An investment model with switching costs for the natural resource industry was introduced in Richard and Mihall's paper [73, 2001]. The problem combines absolutely continuous and impulse stochastic control. In particular, the control strategy involves a sequence of interventions at discrete times. However, this component of the control strategy does not have all the features of impulse control, because the sizes of the jumps associated with each intervention are not part of the control strategy but are constrained to the pattern ..., 1, -1, 1, -1, ..., jumping between two levels. This kind of control pattern is also considered in this research.

Perthame [68, 1984] first introduced the combination of impulse and absolutely continuous stochastic control problems, which has been further studied by Brekke and Øksendal [3, 1991]. Mundaca and Øksendal [62, 1998] and Cadenillas and Zapatero [6, 2000] work on applications to the control of currency exchange rates.

7.7 CPET

Time-optimal control problems (which are not considered in this research) with bang-bang control, with or without singular arc solutions, can make the calculation difficult. A novel problem transformation called the Control Parameterization Enhancing Transform (CPET) was introduced in reference [51, 1997] to provide a computationally simple and numerically accurate

solution without assuming that the optimal control is pure bang-bang control

for time optimal control problems. A standard parameterization algorithm can

calculate the exact switching times and singular control values of the original

problem with CPET. Also, with the CPET, switching points for the control

match the control parameterization knot points naturally, and hence piece-wise

integration can be conveniently and accurately carried out in the usual control

parameterization manner.

Several models have used this technique and given numerical results with extremely high accuracy: the F-8 flight aircraft (first introduced in reference [31, 1977]) [71, 1994], the well-known “dodgem car” problem [29, 1975], and a stirred tank mixer [34, 1976]. Generalizations of the CPET technique to a large range of problems were also introduced in reference [72, 1999].

7.8 STC

A new control method, the switching time computation (STC) method,

which finds a suitable concatenation of constant-input arcs (or, equivalently,

the places of switchings) that would take a given single-input non-linear sys-

tem from a given initial point to the target, was first introduced in Kaya and

Noakes' paper [43, 1994]. The mathematical reasoning was also described in detail in paper [44, 1996]. The method is applicable to single-input non-linear systems. It finds the switching times for a piecewise-constant input with

a given number of switchings. It can also be used for solving the time-optimal

bang-bang control problem. The TOBC algorithm, which is based on the STC

method, is given for this purpose. Since the STC method is basically designed

for a non-linear control system, the problem of the initial guess is equally dif-

ficult when it is applied to a linear system. For the optimization procedure,

an improper guess for the arc times may cause the method to fail in linear

systems. However, the initial guess can be improved by experience from the

failures. In non-linear systems, there is no scheme for guessing a

proper starting point in general optimization procedures. In general, the STC

method handles a linear or a non-linear system without much discrimination.

The reason is that the optimization is carried out in arc time space and even a

linear system has a solution that is complicated in arc time space. The STC

method has been applied as part of the TOBC algorithm to two ideal systems and a physical system (the F-8 aircraft), and has been shown to be fast and accurate. The comparisons with results obtained through MISER3 software have

demonstrated the efficiency of the STC method, both in its own right in find-

ing bang-bang controls and in finding time-optimal bang-bang controls when

incorporated in the TOBC algorithm. There is also the possibility of generalizing from a single-input system to a multi-input system, which would require more computer programming.

7.9 Leap-frog Algorithm

Pontryagin's Maximum Principle gives the necessary conditions for optimality of the behavior of a control system, which requires the solution of a two-point boundary-value problem (TPBVP). Noakes [64, 1997] has developed a global algorithm for finding a geodesic joining two given points on a

Riemannian manifold. A geodesic problem is a special type of TPBVP. The

algorithm can be viewed as a solution method for a special type of TPBVP.

It is simple to implement and works well in practice. The algorithm is called

the Leap-Frog Algorithm because of the nature of its geometry. Application

of the Leap-Frog Algorithm to optimal control was first announced in Kaya

and Noakes [45, 1997]. This algorithm gave promising results when it was ap-

plied to find an optimal control for a class of systems with unbounded control.

In Kaya and Noakes' paper [46, 1998], a direct and intuitive implementation

of the algorithm for general non-linear systems with unbounded controls has

been discussed. This work gave a more detailed and extended account of the

announcement. A theoretical analysis of the Leap-Frog Algorithm for a class

of optimal control problems with bounded controls in the plane was given in

Kaya and Noakes' paper [47, 1998]. The Leap-Frog Algorithm assumes that

the problem is already solved locally. This requirement translates to the case


of optimal control as the availability of a local solution of the problem. This is

related to the structure of the small-time reachable sets.

7.10 An obstruction optimal control problem

In Craven [17, 1999], an optimal control problem relating to flow around an obstacle (originally proposed by Giannessi [32, 1996]) is treated as a minimization problem which leads to a necessary condition or to an optimal control problem. In this paper, Craven gave the discretization argument and proved that, if an optimal path exists, the Pontryagin principle can be used to calculate the optimum. The optimum was verified to be reached by a discretization of the problem, and was also proved to be a global minimum.

7.11 Computational approaches to stochastic optimal

control models in finance

Computational approaches specific to stochastic financial optimal control

models are relatively well developed in the literature. However, computational

approaches to deterministic financial optimal control models are not well documented in the literature; the standard general computational approaches to optimal control discussed above are applied to financial models as well. Some

discussion of the computational approaches with specific applications to fi-

nance may be seen in Islam and Craven [38, 2002].

7.12 Comparisons of the methods

While a discussion of the comparisons of the general computational ap-

proaches is provided below, such comparisons are also relevant when the gen-

eral approaches are applied to financial models. In financial optimal control

models, the control function is approximated by a vector in some vector space of finite dimension in all algorithms for numerical computation of such an optimal control model (1.38)-(1.40). There are some examples with different chosen approximations. The RIOTS 95 package [77, 1997], which runs under MATLAB, uses various spline approximations and solves the optimization problems by projected descent methods; MISER3 [33, 1987] uses a step-function to approximate the control function and solves the optimization problems by sequential quadratic programming; OCIM [15, 1998] uses conjugate gradient methods. Different implementations behave differently, especially on the

functions defined on a restricted domain, since some optimization methods

might search the area outside the domain. Although a step-function is obviously a crude approximation, it produces accurate results, as shown in many instances in reference [84, 1991]. Since integrating the dynamic equation to obtain the state is a smooth operation, high-frequency oscillations are attenuated. In Craven [14, 1995], if this attenuation is sufficiently rapid, the result of the step-function approximations converges to the exact optimum as the subdivision is refined. It is necessary to have some assumption of this

qualitative kind in order to ensure that the chosen finite dimensional approxi-

mations will permit a good approximation to the exact optimum. The RIOTS 95 package can run faster than SCOM, perhaps because of its implementation in the C programming language. The efficiency of the STC method has been demonstrated by comparisons with results from the MISER3 optimal control software (a general-purpose optimal control software package incorporating sophisticated numerical algorithms). MISER3 did not obtain results as fast as the STC method did, perhaps because its general-purpose design hampers its agility to a certain extent, owing to some default settings for the tolerances of the ODE solver and the optimization routine in the software.

This research is only concerned with pure bang-bang control problems within

a fixed-time period. All the algorithms and transformations are made for this

purpose. The control function is also approximated by a step-function. However, because the control does not always jump at the grid-points of the subdivisions of the time interval, which in other works is usually divided equally, it is necessary to calculate the optimal division of the time horizon. This research mainly computes the optimal placement of the subdivisions of the time period, as well as calculating the minimum of the objective function. Situations in which a cost of changing the control is included in the cost function are discussed, along with how this cost affects the whole system. Although

the STC method is also concerned with the calculation of the optimal switching

times, it does not include the cost of each switching control.

The limitations of the above computational approaches are summarized in

Chen and Craven [10, 2002]. From the above survey it will also appear that

each of the above computational methods has characteristics which are compu-

tationally efficient for computing optimal control financial models with switch-

ing times. A new approach which can adapt various convenient components

of the above computational approaches is developed in the next section. Al-

though the present algorithm has similarity with CPET, the details of the two

algorithms are different. A new computer package called CSTVA is also devel-

oped here which can suitably implement the proposed algorithm. The present

computational method, consisting of the STV algorithm and the CSTVA computer

programs does, therefore, provide a new computational approach for modeling

optimal corporate financing. The computational approach can be suitably applied to other disciplines as well.

This approach (STV) consists of several computational methods (described

in Chapter 2):

1 The STV method, in which the switching time is made a control variable whose optimal value is to be determined by the model.


2 A piecewise-linear (or non-linear) transformation of time.

3 The step function approach to approximate the control variable.

4 A finite-difference method for estimating gradients when gradients are not provided.

5 An optimization program based on sequential quadratic programming (SQP), as in MATLAB's “constr” program (similar to the Newton method for constrained optimization).

6 A second order differential equation to represent the dynamic model.
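The way points 1 and 3 fit together may be sketched as follows. The names here are illustrative assumptions, not the CSTVA code: the decision vector carries both the step-function levels and the interior switching times, so the optimizer can move the jump locations as well as the control values.

```python
# Sketch of the STV parameterization: a decision vector z packs the K
# control levels and the K - 1 interior switching times; the resulting
# step-function jumps at the (variable) switching times, not at a
# fixed equal subdivision.

def unpack(z, K, T):
    """Split z into the levels and the sorted switching-time grid."""
    levels = z[:K]
    times = sorted(z[K:])            # K - 1 interior switching times
    return levels, [0.0] + times + [T]

def control_value(levels, grid, t):
    """Piecewise-constant control: value on the interval containing t."""
    for k in range(len(levels)):
        if grid[k] <= t < grid[k + 1]:
            return levels[k]
    return levels[-1]                # t == T
```

A minimizer such as the SQP routine in point 5 can then treat z as an ordinary finite-dimensional variable.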

8. Conclusion

This book is mainly concerned with using computational algorithms to

solve certain classes of optimal control problems. The next chapter will in-

troduce the computational approach named Switching Time Variable (STV)

algorithm developed in this book that can solve a class of financial optimal

control problems when the control is approximated by a step-function. The

piecewise-linear transformation that is constructed for the computer software

package is described in Section 2.2. Some non-linear transformations, which

were first introduced by Craven [14, 1995], are also discussed in Section 2.3.

These transformations are used to solve the large time period optimal control

problem in Chapter 4. A computer software package that was developed by

Craven and Islam [18, 2001] and Islam and Craven [37, 2001] is presented in

Section 2.4. The “nqq” function for solving differential equations is quoted

as a part of the computer software package in this research. The thrust of this book involves a general computer software package for certain optimal control problems. The principal algorithms behind it are introduced in Section 2.6. All the

computing results of an example problem for optimal investment planning for

the economy are shown in graphs and tables in Section 2.7. A cost of changing

control is also discussed in Section 2.8.

Chapter 2

THE STV APPROACH TO

FINANCIAL OPTIMAL CONTROL MODELS

1. Introduction

In this chapter, a particular case of the optimal financial control problems,

which has one state function and one control function that is approximated by
