**Solving Linear Programs**

Problem Description
Assume a problem where you want to:

Maximize the function

Subject to:

Solution
First, define the objective function:

Then define a set containing all the constraint equations:

For a two-dimensional linear program, you can easily graph the feasible region, shown in blue below. The black lines indicate the constraints, and the red lines indicate the contours of the objective function.

Using the Maximize command, find the solution that maximizes the objective function subject to the constraint equations. The first entry returned, 10.5, is the objective value at the optimal solution; the second is the point (4, 6.5) at which that value is attained. The point (4, 6.5) lies at the upper-right vertex of the feasible region.

This problem can also be solved using the interactive Optimization Assistant.
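The worksheet's objective and constraints are given as inline math that is not reproduced here. As a sketch of what an LP solver does, the following Python fragment enumerates the vertices of a small two-variable LP. The objective x + y and the constraints x ≤ 4, x + 2y ≤ 17, x ≥ 0, y ≥ 0 are assumptions chosen to be consistent with the reported optimum (4, 6.5) and objective value 10.5; they are not the worksheet's own data.

```python
from itertools import combinations

# Hypothetical constraints a1*x + a2*y <= b (assumed, not the
# worksheet's; chosen so the optimum matches the reported (4, 6.5)).
constraints = [
    ((1.0, 0.0), 4.0),    # x <= 4
    ((1.0, 2.0), 17.0),   # x + 2y <= 17
    ((-1.0, 0.0), 0.0),   # x >= 0
    ((0.0, -1.0), 0.0),   # y >= 0
]

def objective(x, y):
    return x + y  # assumed objective: maximize x + y

def solve_2x2(c1, c2):
    """Intersect the boundary lines of two constraints, if not parallel."""
    (a1, b1), (a2, b2) = c1, c2
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None
    x = (b1 * a2[1] - a1[1] * b2) / det
    y = (a1[0] * b2 - b1 * a2[0]) / det
    return x, y

def feasible(x, y, eps=1e-9):
    return all(a[0] * x + a[1] * y <= b + eps for a, b in constraints)

# A bounded LP attains its optimum at a vertex of the feasible region,
# so it suffices to check every feasible pairwise intersection.
best = None
for c1, c2 in combinations(constraints, 2):
    p = solve_2x2(c1, c2)
    if p is not None and feasible(*p):
        if best is None or objective(*p) > objective(*best):
            best = p
print(best, objective(*best))  # -> (4.0, 6.5) 10.5
```

Vertex enumeration is exponential in general and is shown here only to make the geometry concrete; Maximize uses proper LP algorithms.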

**Solving Non-Linear Programs**

Problem Description
Find the point, on the circle formed by the intersection of the unit sphere with the plane, that is closest to the point (1, 2, 3).

Solution
This problem is easily visualized by plotting the unit sphere together with the intersecting plane.

The objective function is the squared distance between a point (x, y, z) and the point (1, 2, 3).

The point (x, y, z) is constrained to lie on both the unit sphere and the given plane. Thus, the constraints are:

You can minimize the objective function subject to the constraints using the NLPSolve command.

Thus, the minimum squared distance is 10.29 (a distance of about 3.21), and the closest point is (-0.51, 0.17, 0.84).
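The plane's equation is not reproduced above. For illustration, the sketch below assumes the plane x + y + z = 1/2 (an assumption chosen to be consistent with the reported answer) and computes the closest point on the sphere-plane circle in closed form, rather than numerically as NLPSolve does.

```python
import math

def closest_point_on_circle(q, n, d):
    """Closest point to q on the circle |p| = 1, n.p = d.

    The circle has center c = d*n/|n|^2 and radius r = sqrt(1 - |c|^2);
    the closest point lies along the in-plane direction from c toward
    the projection of q onto the plane.
    """
    nn = sum(v * v for v in n)
    c = tuple(d * v / nn for v in n)
    r = math.sqrt(1.0 - sum(v * v for v in c))
    # Project q onto the plane, then move from the center toward it.
    t = (sum(a * b for a, b in zip(n, q)) - d) / nn
    q_proj = tuple(a - t * v for a, v in zip(q, n))
    w = tuple(a - b for a, b in zip(q_proj, c))
    wn = math.sqrt(sum(v * v for v in w))
    return tuple(ci + r * wi / wn for ci, wi in zip(c, w))

q = (1.0, 2.0, 3.0)
# Assumed plane x + y + z = 1/2 (the worksheet's plane equation is
# not shown; this choice reproduces the reported numbers).
p = closest_point_on_circle(q, n=(1.0, 1.0, 1.0), d=0.5)
sq_dist = sum((a - b) ** 2 for a, b in zip(p, q))
print([round(v, 2) for v in p], round(sq_dist, 2))  # -> [-0.51, 0.17, 0.84] 10.29
```

The closed form works because the problem is a projection onto a circle; for general nonlinear programs no such formula exists, which is why a solver like NLPSolve is needed.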

**Solving Quadratic Programs**

Problem Description
The Markowitz model is an optimization model for balancing the return and risk of a portfolio. The decision variables are the amounts invested in each asset. The objective is to minimize the variance of the portfolio's total return, subject to the following constraints: (1) the expected growth of the portfolio is at least some target level, and (2) the total amount invested does not exceed the available capital.

Solution
Let:

the amounts you buy

the amount of capital you have

the random vector of asset returns over some period

the expected value of

the minimum growth you hope to obtain

the covariance matrix of

The objective function is the variance of the portfolio's total return, which can be shown to equal a quadratic form in the investment amounts.
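In standard Markowitz notation (decision vector x, return vector R with mean μ = E[R] and covariance matrix Q; the worksheet's own symbols are not reproduced above), the reduction to a quadratic form is:

```latex
\operatorname{Var}\!\left(R^{\mathsf T}x\right)
  = \mathbb{E}\!\left[\bigl((R-\mu)^{\mathsf T}x\bigr)^{2}\right]
  = \mathbb{E}\!\left[x^{\mathsf T}(R-\mu)(R-\mu)^{\mathsf T}x\right]
  = x^{\mathsf T} Q\, x,
\qquad Q = \operatorname{Cov}(R).
```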

If, for example, , you would try to:

Minimize the function

Subject to:

Suppose you have the following data.

This is a quadratic function of X, so quadratic programming can be used to solve the problem.

Thus, the minimum variance portfolio that earns an expected return of at least 10% is . Asset 2 gets nothing because its expected return is -20% and its covariance with the other assets is not sufficiently negative for it to bring any diversification benefits.
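The worksheet's numeric data are not reproduced above. As an illustration of the model, the sketch below uses hypothetical data for three assets (with asset 2's expected return at -20%, matching the discussion) and minimizes the portfolio variance by brute-force grid search, a simple stand-in for a proper QP solver.

```python
# Hypothetical Markowitz data (assumed, not the worksheet's):
# three assets, unit capital, 10% target expected return,
# short selling not allowed (x >= 0).
mu = (0.12, -0.20, 0.10)          # expected returns (assumed)
Q = ((0.04, 0.006, 0.002),        # covariance matrix (assumed, PSD)
     (0.006, 0.09, 0.0),
     (0.002, 0.0, 0.05))
capital, target = 1.0, 0.10

def variance(x):
    """Portfolio variance x^T Q x."""
    return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

# Brute-force grid search over the feasible set.
step, n = 0.02, 51
best, best_var = None, float("inf")
for i in range(n):
    for j in range(n):
        for k in range(n):
            x = (i * step, j * step, k * step)
            if sum(x) > capital + 1e-9:          # budget constraint
                continue
            if sum(m + 0 for m in ()) or sum(m * v for m, v in zip(mu, x)) < target - 1e-9:
                continue                          # expected-growth constraint
            v = variance(x)
            if v < best_var:
                best, best_var = x, v
print(best, round(best_var, 5))
```

As in the worksheet's result, the minimizer puts nothing into asset 2: adding it would both raise the variance and lower the expected return.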

**Solving Least-Square Problems**

Problem Description
Neural networks are machine-learning programs that can learn to approximate functions using a set of sample function values called training data. To generate an output, a neural network applies a smooth function (typically, sigmoidal or hyperbolic tangent) to a linear weighting of the input data. To train the network, one assigns these weights so as to minimize the sum of squared differences between the desired and generated outputs over all data points in the training set. This is an optimization problem that can often be solved by least-squares techniques.

Solution
Assume that you have a sample training data set with five points. The first element of each point is the input, the second is the desired output.

Suppose that for a given input value x, the network outputs , where and are the weights that are to be set. The residuals are the differences between these outputs and the desired outputs given in the training set:

You can extract elements of the list x using the selection operation. Then, use seq to create the sequence of residuals.

The objective function is the sum of squares of the residuals:

Now, solve the optimization problem with the LSSolve command. Note that you only have to pass Maple the list of residuals.

The weights needed to minimize the sum of squared differences between the desired and generated outputs for and are and .
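The worksheet's training data and network formula are given as math that is not reproduced above. As a hedged illustration, the sketch below fits the two-weight model f(x) = w1·tanh(w2·x) (an assumed form of the tanh network described earlier) to five synthetic points, minimizing the sum of squared residuals by plain gradient descent, a simple stand-in for a dedicated least-squares solver such as LSSolve.

```python
import math

# Synthetic training set generated from w1 = 0.8, w2 = 1.2
# (hypothetical data, not the worksheet's five points).
data = [(x, 0.8 * math.tanh(1.2 * x)) for x in (-1.0, -0.5, 0.0, 0.5, 1.0)]

def residuals(w1, w2):
    """Differences between network outputs and desired outputs."""
    return [w1 * math.tanh(w2 * x) - y for x, y in data]

def loss(w1, w2):
    """Sum of squared residuals."""
    return sum(r * r for r in residuals(w1, w2))

# Gradient descent on the sum of squared residuals.
w1, w2, lr = 0.5, 0.5, 0.1
for _ in range(5000):
    g1 = g2 = 0.0
    for (x, y), r in zip(data, residuals(w1, w2)):
        t = math.tanh(w2 * x)
        g1 += 2 * r * t                       # d(loss)/d(w1)
        g2 += 2 * r * w1 * x * (1 - t * t)    # d(loss)/d(w2)
    w1, w2 = w1 - lr * g1, w2 - lr * g2
print(round(w1, 3), round(w2, 3), loss(w1, w2))
```

The recovered weights approach the generating values (0.8, 1.2) and the loss approaches zero; LSSolve exploits the least-squares structure directly and needs only the residual list, not the summed objective.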