Application Center - Maplesoft


steepestDescent.mws

Steepest Ascent Method for Optimization

by Dr. William P. Fox (wfox@fmarion.edu) and Dr. William H. Richardson (wrichardson@fmarion.edu)
Department of Mathematics
Francis Marion University, Florence, SC 29501

This worksheet solves nonlinear optimization problems by the method of steepest ascent. Given a function f(x,y) and a current point (x0,y0), the search direction is taken to be the gradient of f at (x0,y0). The step length is then found by an exact line search, i.e., it is the step length that maximizes f along the gradient direction.

Algorithm for Gradient Method of Steepest Ascent

Step 1. Enter the function to be maximized, f; the maximum number of iterations allowed, n; the starting point (x,y); and the tolerance, t.

Step 2. Choose the step length lambda = lambda(k) that maximizes

f((x(k), y(k)) + lambda*Grad(f(x(k), y(k))))

Step 3. Move to the next point,

(x(k+1), y(k+1)) = (x(k), y(k)) + lambda*Grad(f(x(k), y(k)))

Step 4. If ||Grad(f(x(k+1), y(k+1)))|| > t, return to Step 2. Otherwise, terminate. The iteration also terminates if the number of iterations exceeds the limit n set in Step 1.

The algorithm inputs are:

n = maximum number of iterations

tol = tolerance: the iteration stops when the magnitude of the gradient falls below tol

ptx1, ptx2 = coordinates of the initial point (x0, y0)

f is the multivariable function to be maximized

Outputs are the approximate optimal point (x,y) and the function value at that point, f(x,y).

If you need to minimize a function, multiply f by -1 and use this algorithm.
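Before the Maple implementation below, the four steps above can be sketched in plain Python (a minimal illustration added here, not the worksheet's code; it replaces the worksheet's symbolic line search with a numeric golden-section search, and the names `steepest_ascent` and `golden_max` are my own):

```python
import math

def golden_max(phi, a, b, iters=80):
    """Golden-section search for the maximizer of phi on [a, b]."""
    r = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - r * (b - a), a + r * (b - a)
    for _ in range(iters):
        if phi(c) < phi(d):        # maximum lies in [c, b]
            a, c = c, d
            d = a + r * (b - a)
        else:                      # maximum lies in [a, d]
            b, d = d, c
            c = b - r * (b - a)
    return (a + b) / 2.0

def steepest_ascent(f, grad, x, y, n=50, tol=0.05):
    """Steps 1-4 of the algorithm: follow the gradient, with a line search for the step."""
    for _ in range(n):                      # Step 1: at most n iterations
        gx, gy = grad(x, y)
        if math.hypot(gx, gy) < tol:        # Step 4: gradient small enough -> stop
            break
        # Step 2: choose lambda to maximize f along the gradient direction
        lam = golden_max(lambda s: f(x + s * gx, y + s * gy), 0.0, 1.0)
        # Step 3: move to the next point
        x, y = x + lam * gx, y + lam * gy
    return x, y

# The worksheet's second example: f = 12*x2 - 4*x1^2 - 4*x2^2 + 4*x1*x2 from (-0.5, 1.0)
f = lambda x1, x2: 12*x2 - 4*x1**2 - 4*x2**2 + 4*x1*x2
g = lambda x1, x2: (-8*x1 + 4*x2, 12 - 8*x2 + 4*x1)
print(steepest_ascent(f, g, -0.5, 1.0))   # converges near the true maximizer (1, 2)
```

The bracket [0, 1] for the line search is an assumption that happens to cover the step lengths seen in both worksheet examples; a more careful implementation would first expand the bracket until the function stops increasing.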

Initializations

> restart:

> with(linalg):

Gradient Search Program Code

> UP:=proc(n,tol,ptx1,ptx2,f)

> local numIter,numfeval,numgeval,f1,p1,p2,rv,x1pt,x2pt,temp,max,mg,v1,v2,newt,lam,nv1,nv2,Fvalue;

> f1:=f; temp:=grad(f1,vector([x1,x2]));  # symbolic gradient of f with respect to (x1,x2)

> p1 := unapply(temp[1],x1,x2); p2 := unapply(temp[2],x1,x2);  # gradient components as functions of (x1,x2)

> x1pt:=ptx1;

> x2pt:=ptx2;

> rv:=vector([p1(x1pt,x2pt),p2(x1pt,x2pt)]):

> numIter:=1;numgeval:=1;numfeval:=1;

> printf("\n\n-----------------------------------------");

> printf("---------------------------------------------");

> printf("\n\n Initial Condition: ");

> printf("(%8.4f,%8.4f)\n\n",x1pt,x2pt);

> printf(" Iter Gradient Vector G magnitude G");

> printf(" x[k] Step Length\n\n");

> label_7;  # top of the iteration loop

> rv:=vector([p1(x1pt,x2pt),p2(x1pt,x2pt)]):

> numgeval:=numgeval+1;

> printf("%5d (%8.4f,%8.4f)",numIter,rv[1],rv[2]);

> max:=n;

> mg:=convert(sqrt(dotprod(rv,rv)),float);

> printf("%12.4f",mg);

> if(mg<tol or numIter>=max) then  # stop if the gradient is small or the iteration limit is reached

> goto(label_6);

> else

> numIter:=numIter+1;

> fi;

> v1:=x1pt+t*rv[1];  # parametrize the line through the current point along the gradient;

> v2:=x2pt+t*rv[2];  # t is left as a free (global) symbol for the line search below

> newt:=evalf(subs({x1=v1,x2=v2},f1));

> numfeval:=numfeval+1;

> lam:=fsolve(diff(newt,t)=0,t,maxsols=1);  # line search: step length at which f stops increasing

> nv1:=evalf(subs({t=lam},v1));

> nv2:=evalf(subs({t=lam},v2));

> printf(" (%8.4f,%8.4f)%13.4f\n",x1pt,x2pt,lam);

> x1pt:=nv1;

> x2pt:=nv2;

> goto(label_7);

> label_6;  # termination: report the results

> printf("\n\n-----------------------------------------");

> printf("---------------------------------------------");

> printf("\n\n Approximate Solution: ");

> printf(" (%8.4f,%8.4f)\n",x1pt,x2pt);

> Fvalue:=evalf(subs(x1=x1pt,x2=x2pt,f));

> printf(" Maximum Functional Value: ");

> printf("%21.4f",Fvalue);

> printf("\n Number gradient evaluations:");

> printf("%22d",numgeval);

> printf("\n Number function evaluations:");

> printf("%22d",numfeval);

> printf("\n\n-----------------------------------------");

> printf("---------------------------------------------");

> end:

Examples

> f:=-(x1-2)^4-(x1-2*x2)^2;

f := -(x1-2)^4-(x1-2*x2)^2

> UP(50,.05,0.0,3.0,f);

--------------------------------------------------------------------------------------

                         Initial Condition: (  0.0000,  3.0000)

  Iter    Gradient Vector G      magnitude G            x[k]           Step Length

    1    ( 44.0000,-24.0000)     50.1199        (  0.0000,  3.0000)        .0615

    2    (  -.7392, -1.3552)      1.5437        (  2.7075,  1.5232)        .2308

    3    (  -.8514,   .4644)       .9699        (  2.5369,  1.2104)        .1116

    4    (  -.1801,  -.3302)       .3762        (  2.4419,  1.2622)        .2672

    5    (  -.3359,   .1832)       .3826        (  2.3938,  1.1740)        .1246

    6    (  -.0910,  -.1668)       .1900        (  2.3519,  1.1968)        .2795

    7    (  -.1915,   .1044)       .2181        (  2.3265,  1.1502)        .1307

    8    (  -.0572,  -.1049)       .1194        (  2.3015,  1.1639)        .2859

    9    (  -.1275,   .0696)       .1453        (  2.2851,  1.1339)        .1343

   10    (  -.0402,  -.0737)       .0839        (  2.2680,  1.1432)        .2898

   11    (  -.0927,   .0506)       .1056        (  2.2564,  1.1219)        .1367

   12    (  -.0302,  -.0554)       .0631        (  2.2437,  1.1288)        .2925

   13    (  -.0713,   .0389)       .0812        (  2.2349,  1.1126)        .1384

   14    (  -.0238,  -.0436)       .0497

--------------------------------------------------------------------------------------

                         Approximate Solution:   (  2.2250,  1.1180)

                     Maximum Functional Value:                -.0027

                  Number gradient evaluations:                    15

                  Number function evaluations:                    14

--------------------------------------------------------------------------------------
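As a sanity check on this run (a small Python verification added here, not part of the original worksheet): the gradient of f is (-4*(x1-2)^3 - 2*(x1-2*x2), 4*(x1-2*x2)), which vanishes only at (2, 1), where f = 0. The final iterate (2.2250, 1.1180) is thus still some distance from the true maximizer, reflecting the slow zigzag convergence of steepest ascent on this flat quartic, but its function value already agrees with the printed -.0027:

```python
def f(x1, x2):
    return -(x1 - 2)**4 - (x1 - 2*x2)**2

def grad(x1, x2):
    return (-4*(x1 - 2)**3 - 2*(x1 - 2*x2), 4*(x1 - 2*x2))

print(grad(2, 1))                    # (0, 0): (2, 1) is the stationary point
print(f(2, 1))                       # 0: the true maximum value
print(round(f(2.2250, 1.1180), 4))  # matches the worksheet's -0.0027
```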

> f:=12*x2-4*x1^2-4*x2^2+4*x1*x2;

f := 12*x2-4*x1^2-4*x2^2+4*x1*x2

> UP(50,.05,-0.5,1.0,f);

--------------------------------------------------------------------------------------

                         Initial Condition: (  -.5000,  1.0000)

  Iter    Gradient Vector G      magnitude G            x[k]           Step Length

    1    (  8.0000,  2.0000)      8.2462        (  -.5000,  1.0000)        .1635

    2    ( -1.1538,  4.6154)      4.7574        (   .8077,  1.3269)        .1012

    3    (  1.6484,   .4121)      1.6991        (   .6909,  1.7940)        .1635

    4    (  -.2377,   .9510)       .9802        (   .9604,  1.8613)        .1012

    5    (   .3396,   .0849)       .3501        (   .9363,  1.9575)        .1635

    6    (  -.0490,   .1959)       .2020        (   .9918,  1.9714)        .1012

    7    (   .0700,   .0175)       .0721        (   .9869,  1.9913)        .1635

    8    (  -.0101,   .0404)       .0416

--------------------------------------------------------------------------------------

                         Approximate Solution:   (   .9983,  1.9941)

                     Maximum Functional Value:               11.9999

                  Number gradient evaluations:                     9

                  Number function evaluations:                     8

--------------------------------------------------------------------------------------
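Again as an added check (not in the original worksheet): since this f is a concave quadratic, its maximizer can be found exactly by setting the gradient (-8*x1 + 4*x2, 12 - 8*x2 + 4*x1) to zero, which gives x1 = 1, x2 = 2 and f(1, 2) = 12. The worksheet's (0.9983, 1.9941) with value 11.9999 agrees to the printed accuracy:

```python
def f(x1, x2):
    return 12*x2 - 4*x1**2 - 4*x2**2 + 4*x1*x2

def grad(x1, x2):
    return (-8*x1 + 4*x2, 12 - 8*x2 + 4*x1)

print(grad(1, 2))                    # (0, 0): the exact stationary point
print(f(1, 2))                       # 12: the true maximum value
print(round(f(0.9983, 1.9941), 4))  # matches the worksheet's 11.9999
```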