FORUMS > Programming and Software Forum
Topic Title: MATLAB lsqnonlin function
Created On Thu Mar 04, 10 02:05 AM


loooooo
Member

Posts: 75
Joined: Oct 2009

Thu Mar 04, 10 02:05 AM

I've got a question about minimising a sum of squared errors over both "i" and "j", i.e. a problem with two summation operators, in MATLAB. I know that lsqnonlin handles the single-summation minimisation problem in the code I've posted below, but does the same approach apply to a double-summation problem, or would I be better off using fminsearch?

The code I've posted does a daily calibration across options quoted on the same day, but what I want is an aggregate calibration, both across options on the same day and over a period of days, as is done in Bakshi, Cao, and Chen (1997): http://www.rhsmith.umd.edu/faculty/gbakshi/jf97b.pdf (on the author's homepage). My rough attempt at the stacked version follows the code below.

What is the advantage of using lsqnonlin over fminsearch? I've heard and read a lot about both on the web, but I still don't quite get it.


Thanks,
Jason

P.S. The code is easily found on the web; it appears in the appendix of "The Heston Model - A Practical Approach with Matlab Code" by Moodley (2005).

global OptionData;
global NoOfOptions;
global NoOfIterations;
global PriceDifference;

NoOfIterations = 0;
load OptionData.m;   % ASCII data file, one option per row

% OptionData columns: [r, T, S0, K, option value, bid, offer]

Size = size(OptionData);
NoOfOptions = Size(1);

% Initial vector, ordered as [2*kappa*theta - sigma^2, theta, sigma, rho, v0];
% the first element is a reparameterisation so that the lower bound of zero
% enforces the Feller condition 2*kappa*theta >= sigma^2.
x0 = [6.5482 0.0731 2.3012 -0.4176 0.1838];
lb = [0 0 0 -1 0];
ub = [20 1 5 0 1];

% Raise the maximum number of function evaluations to 20000 so that the
% optimiser doesn't terminate early. Note that the options structure must
% actually be passed to lsqnonlin for this to take effect.
options = optimset('MaxFunEvals',20000);

tic;
Calibration = lsqnonlin(@HestonDifferences,x0,lb,ub,options);
toc;

% Recover kappa from the reparameterised first element
Solution = [(Calibration(1)+Calibration(3)^2)/(2*Calibration(2)), Calibration(2:5)];

% ----- HestonDifferences.m (separate function file) -----

function ret = HestonDifferences(input)
% input = [2*kappa*theta - sigma^2, theta, sigma, rho, v0]

global NoOfOptions;
global OptionData;
global NoOfIterations;
global PriceDifference;

NoOfIterations = NoOfIterations + 1;   % counts calls made during calibration

for i = 1:NoOfOptions
    % Residual for option i: (market price - Heston price), weighted by the
    % square root of the bid-offer spread; kappa is recovered from input(1)
    PriceDifference(i) = (OptionData(i,5) - HestonCallQuad( ...
        (input(1)+input(3)^2)/(2*input(2)), input(2), input(3), input(4), input(5), ...
        OptionData(i,1), OptionData(i,2), OptionData(i,3), OptionData(i,4))) ...
        / sqrt(abs(OptionData(i,6)-OptionData(i,7)));
end

ret = PriceDifference';   % lsqnonlin takes the residual vector, not its sum of squares
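
For the aggregate calibration, my current understanding is that lsqnonlin doesn't care how many summation operators the written-out objective has, as long as every squared term ends up as one entry of the residual vector it receives. So stacking the residuals of all options on all days into a single column vector should reproduce the double summation without switching to fminsearch. A rough, untested sketch of what I mean (DailyOptionData and DailyResiduals are hypothetical names: a cell array holding one day's option matrix per cell, and a helper returning that day's weighted residual column vector):

function ret = AggregateDifferences(input)
% Sketch only: residuals stacked over days j and options i, so that
% lsqnonlin minimises sum over j of sum over i of residual(i,j)^2.
% DailyOptionData{j} is the [r,T,S0,K,price,bid,offer] matrix for day j;
% DailyResiduals(input, data) is a hypothetical helper returning the
% weighted (market - model) residual column vector for one day.

global DailyOptionData;

ret = [];
for j = 1:length(DailyOptionData)
    % append day j's residuals below those of the previous days
    ret = [ret; DailyResiduals(input, DailyOptionData{j})];
end

and then the call would stay the same apart from the function handle:

Calibration = lsqnonlin(@AggregateDifferences, x0, lb, ub, options);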

 

willsmith
Senior Member

Posts: 281
Joined: Jan 2008

Fri Mar 05, 10 05:23 AM

fminsearch doesn't take bounds, I believe, so since you're passing in bound constraints you'd have to use fmincon. fminbnd also takes bounds, but it only works on single-parameter functions. I find all of this highly confusing too; there should be some kind of optimisation overview page in the Matlab help. I've found optimisation tends to be faster when you can supply reasonable bounds.

To compare the different methods, bring up their help pages and look at the 'Limitations' sections.

For example, fmincon assumes continuous functions with continuous first derivatives, but in return it uses a gradient method to speed up the search.

If you really want to experiment with optimisation methods in high dimensions, I've had good results with the 'patternsearch' function in the 'Genetic Algorithm and Direct Search Toolbox' (not part of standard Matlab), which recently solved a 36-parameter optimisation for me. I tried the genetic algorithm as well, thinking it would be good at high-dimensional problems, but it was terrible despite lots of tweaking.
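
To make the lsqnonlin vs. fmincon difference concrete: lsqnonlin wants the vector of residuals and does the squaring and summing itself, while fmincon (like fminsearch and patternsearch) wants a single scalar back. So if you tried fmincon on your problem you'd wrap your existing residual function in a sum of squares, something like this (untested sketch reusing your HestonDifferences, x0, lb and ub):

% Scalar objective: sum of squared residuals from the existing residual function
SumOfSquares = @(x) sum(HestonDifferences(x).^2);

% fmincon takes linear constraints A, b, Aeq, beq before the bounds;
% pass [] for the ones you don't use
options = optimset('MaxFunEvals',20000);
Calibration = fmincon(SumOfSquares, x0, [], [], [], [], lb, ub, [], options);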


-------------------------
PhD (Commodities, Helyette Geman). Looking for commodities research/structuring role. www.linkedin.com/in/willsmithorg
 

Hansi
Senior Member

Posts: 3104
Joined: Jan 2010

Fri Mar 05, 10 10:29 AM

I have a good set of slides and projects on Matlab optimization, including explanations of all the available optimization functions. PM me if you want a copy.
 

loooooo
Member

Posts: 75
Joined: Oct 2009

Sat Mar 20, 10 04:34 AM

Hi Hansi, since I can't PM you at the moment because of some restriction on viewing your profile, I'm posting a note here.
Thanks a lot for the offer; could you send it through to loooooo.jason@gmail.com?

Much appreciated,
Jason

Edited: Sat Mar 20, 10 at 05:04 AM by loooooo
 

loooooo
Member

Posts: 75
Joined: Oct 2009

Sat Mar 20, 10 05:04 AM

Hi William,

Actually, my model is more complicated than what I posted earlier, but at the moment it uses pretty much the same optimisation procedure, which has returned some errors. The model requires daily calibrated estimates for one additional parameter, while the other parameters are estimated once over the whole sample; hence my input is a 2-by-M matrix (M being, say, the number of observations in the sample), although some of its entries are not used.

I haven't used FMINCON, let alone the non-standard 'patternsearch' function, though I might try them if they reach convergence faster than anything else. Yet, what do you think is the advantage of FMINCON over LSQNONLIN? Do you reckon I'd get a similar result faster by using it instead of LSQNONLIN?

I double-checked that my valuation function does produce a value when arbitrary inputs are passed in, so the function set-up itself may not be wrong. LSQNONLIN also runs a similar function (I have one function for each of the two products in my research) properly and reaches convergence, although the output was somewhat outside the upper/lower bounds. Judging from the error saying the function returns NaN or Inf values, though, I suspect LSQNONLIN may be unable to find a global minimum, and the out-of-bounds values make me wonder whether FMINCON would give a better result. Or maybe my model is significantly wrong, but having checked it a number of times I can't spot an error there so far.
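
One thing I'm considering, just to get past the NaN/Inf error and see where the search wants to go, is to trap non-finite residuals inside the residual function and replace them with a large penalty (a workaround only, not a fix for whatever is producing the NaNs in the first place), along these lines:

% at the end of the residual function, after PriceDifference is filled in
bad = ~isfinite(PriceDifference);   % flag NaN or Inf residuals
PriceDifference(bad) = 1e6;         % large penalty pushes the search away from that region
ret = PriceDifference';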

Any comments on this are more than welcome, and thanks for the earlier suggestion of patternsearch. I will have a look at it.

Thanks heaps William,
Jason

P.S. William, one last thing I forgot to ask. In the code posted in my original post, can patternsearch replace lsqnonlin without my having to modify the function? In particular, the objective function to minimise is stacked into a column vector there, and I know patternsearch accepts a vector input.
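
My rough reading of the documentation is that patternsearch wants a scalar objective back rather than the residual vector, so I'm guessing I'd need a small wrapper like the one below; please correct me if that's off (untested):

% wrap the stacked residual vector into a scalar sum of squares for patternsearch
SumOfSquares = @(x) sum(HestonDifferences(x).^2);
Calibration = patternsearch(SumOfSquares, x0, [], [], [], [], lb, ub);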


Edited: Sat Mar 20, 10 at 08:51 AM by loooooo
 

Hansi
Senior Member

Posts: 3104
Joined: Jan 2010

Sat Mar 20, 10 12:04 PM

Quote

Originally posted by: loooooo
Hi Hansi, since I can't PM you at the moment because of some restriction on viewing your profile, I'm posting a note here.
Thanks a lot for the offer; could you send it through to loooooo.jason@gmail.com?

Much appreciated,
Jason


Sent you the notes.

Quote

Originally posted by: loooooo
Yet, what do you think is the advantage of FMINCON over LSQNONLIN? Do you reckon I'd get a similar result faster by using it instead of LSQNONLIN?


In my experience fmincon is most likely going to get you the fastest convergence of the functions available.
 

loooooo
Member

Posts: 75
Joined: Oct 2009

Sun Mar 21, 10 04:07 AM

Thanks heaps, Hans.
I got it last night.

Jason

 