The Lagrange programming neural network (LPNN) was proposed by Shengwei Zhang and A. G. Constantinides [1] in their research on analog computational circuits. An analog computational circuit is usually constructed as a dense interconnection of simple analog computational elements (neurons) and is governed by a set of differential equations [2,3]. With such a circuit, an optimization problem can be solved either by iterating the governing equations on a digital computer or by building the associated neural circuit and measuring the node voltages after the circuit settles to a steady state.

The Lagrange programming neural network is designed for general nonlinear programming. It is based on the well-known Lagrange multiplier method [2] for constrained programming. Instead of following a direct descent approach on a penalty function, the network looks for, if possible, a point in the state space satisfying the first-order necessary conditions of optimality. Consider the following nonlinear programming problem with equality constraints:

$$\text{minimize } f(x) \quad \text{subject to } h(x) = 0,$$

where $x \in \mathbb{R}^n$, and $f: \mathbb{R}^n \to \mathbb{R}$ and $h: \mathbb{R}^n \to \mathbb{R}^m$ ($m < n$) are given functions. The components of $h$ are denoted $h_1, \dots, h_m$. Both $f$ and $h$ are assumed to be twice continuously differentiable.
The Lagrange function is defined by

$$L(x, \lambda) = f(x) + \lambda^{\mathsf T} h(x),$$

where $\lambda = (\lambda_1, \dots, \lambda_m)^{\mathsf T}$ is referred to as the Lagrange multiplier vector. Furthermore, we have

$$\nabla_x L(x, \lambda) = \nabla f(x) + \nabla h(x)\,\lambda, \qquad \nabla_\lambda L(x, \lambda) = h(x).$$
The first-order necessary condition of optimality can be expressed as a stationary point of $L$ over $x$ and $\lambda$, i.e.

$$\nabla_x L(x^*, \lambda^*) = 0, \qquad \nabla_\lambda L(x^*, \lambda^*) = h(x^*) = 0.$$
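As a concrete illustration (a toy example chosen here, not taken from [1]), consider minimizing $f(x) = x_1^2 + x_2^2$ subject to the single constraint $h(x) = x_1 + x_2 - 1 = 0$. The stationary-point conditions can be solved by hand:

```latex
\begin{align}
L(x, \lambda) &= x_1^2 + x_2^2 + \lambda\,(x_1 + x_2 - 1), \\
\nabla_x L = (2x_1 + \lambda,\; 2x_2 + \lambda)^{\mathsf T} = 0
  &\;\Rightarrow\; x_1 = x_2 = -\lambda/2, \\
\nabla_\lambda L = x_1 + x_2 - 1 = 0
  &\;\Rightarrow\; \lambda^* = -1, \quad x^* = (1/2,\; 1/2)^{\mathsf T}.
\end{align}
```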
The transient behavior of the neural network is defined by the following equations:

$$\frac{dx}{dt} = -\nabla_x L(x, \lambda), \qquad \frac{d\lambda}{dt} = \nabla_\lambda L(x, \lambda) = h(x).$$
If the network is physically stable, the equilibrium point $(x^*, \lambda^*)$ is described by

$$\left.\frac{dx}{dt}\right|_{(x^*, \lambda^*)} = 0, \qquad \left.\frac{d\lambda}{dt}\right|_{(x^*, \lambda^*)} = 0,$$

which obviously meets the first-order necessary condition of optimality and thus provides a Lagrange solution. There are two classes of neurons in the network, variable neurons $x$ and Lagrangian neurons $\lambda$, distinguished by their roles in the search for an optimal solution. Variable neurons seek a minimum point of the cost function and provide the solution at an equilibrium point, while Lagrangian neurons drive the dynamic trajectory into the feasible region determined by the constraints.
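The dynamics above can be simulated by simple Euler integration. The following sketch is an illustration of the idea (my own code, not an implementation from [1]); it applies the LPNN equations to the toy problem of minimizing $x_1^2 + x_2^2$ subject to $x_1 + x_2 = 1$, whose Lagrange solution is $x^* = (1/2, 1/2)$, $\lambda^* = -1$:

```python
import numpy as np

def lpnn_solve(grad_f, h, grad_h, x0, lam0, dt=0.01, steps=5000):
    """Euler integration of the LPNN dynamics:
       dx/dt     = -grad_x L = -(grad_f(x) + grad_h(x) @ lam)
       dlam/dt   =  grad_lam L = h(x)."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(steps):
        x = x + dt * (-(grad_f(x) + grad_h(x) @ lam))
        lam = lam + dt * h(x)  # multiplier update uses the freshly updated x
    return x, lam

# Toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
grad_f = lambda x: 2.0 * x                          # gradient of the cost
h      = lambda x: np.array([x[0] + x[1] - 1.0])    # equality constraint h(x)
grad_h = lambda x: np.array([[1.0], [1.0]])         # constraint gradient, n x m

x_star, lam_star = lpnn_solve(grad_f, h, grad_h, x0=[0.0, 0.0], lam0=[0.0])
print(x_star, lam_star)  # converges toward [0.5, 0.5] and [-1.0]
```

Note that the variable neurons descend on $L$ while the Lagrangian neurons ascend (integrate the constraint violation), which is why the trajectory spirals into the feasible region rather than following a pure gradient descent.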

The disadvantage of the Lagrange neural network is that it handles equality constraints only. Although in theory inequality constraints can be converted to equality constraints by introducing slack variables, the dimension of the neural network inevitably increases, which is usually undesirable in terms of model complexity. In this sense, we need a neural network that can be regarded as an extension of the Lagrange network.
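For reference, the standard slack-variable conversion alluded to above works as follows: each inequality constraint is turned into an equality constraint by adding a squared slack variable,

```latex
g_j(x) \le 0 \quad\Longleftrightarrow\quad g_j(x) + s_j^2 = 0 \ \text{ for some } s_j \in \mathbb{R},
```

so each of the $r$ inequality constraints contributes one extra slack neuron, which is the dimension growth the text objects to.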

The Augmented Lagrange Programming Neural Network
Another variation of the LPNN is the augmented Lagrangian programming neural network (ALPNN), which we now introduce.
Consider the more general problem, this time with inequality constraints:

$$\text{minimize } f(x) \quad \text{subject to } g(x) \le 0,$$

Thus $g = (g_1, \dots, g_r)^{\mathsf T}$ and $g_j: \mathbb{R}^n \to \mathbb{R}$ for $j = 1, \dots, r$. The functions $f$ and $g_1, \dots, g_r$ are assumed to be twice continuously differentiable.
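The section breaks off before giving the augmented Lagrangian itself. For orientation, a standard augmented Lagrangian for this inequality-constrained problem (the classical Rockafellar/Bertsekas form, given here as background and not necessarily the exact function used by the ALPNN) is

```latex
L_c(x, \lambda) = f(x)
  + \frac{1}{2c} \sum_{j=1}^{r}
    \Bigl( \max\{0,\; \lambda_j + c\, g_j(x)\}^2 - \lambda_j^2 \Bigr),
```

with penalty parameter $c > 0$. The $\max\{0, \cdot\}$ term leaves strictly satisfied constraints inactive, so no slack variables and no extra neurons are needed.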

[1] S. Zhang and A. G. Constantinides, "Lagrange programming neural networks," IEEE Trans. Circuits Syst., vol. 39, no. 7, pp. 441-452, July 1992.
[2] L. O. Chua and G. N. Lin, "Nonlinear programming without computation," IEEE Trans. Circuits Syst., vol. CAS-31, pp. 182-188, Feb. 1984.
[3] J. B. Dennis, Mathematical Programming and Electrical Networks. New York: Wiley, 1959.



