Daniel J. Duffy's Blog
http://www.wilmott.com/blogs/cuchulainn/index.cfm
Last updated: Thu, 17 Apr 2014 13:52:44

Nelson Mandela's Welcome to the city of Glasgow
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2013/12/7/Nelson-Mandelas-Welcome-to-the-city-of-Glasgow
http://www.youtube.com/watch?v=oqwZrVtZva8
http://www.youtube.com/watch?v=XkXxi5OgNzo
Life | Sat, 07 Dec 2013 20:48:00

C# in Financial Markets: What are the Advantages from a Trading Perspective?
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2013/3/15/C-in-Financial-Markets-What-are-the-Advantages-the-Trading-Perspective
C# in Financial Markets: What are the Advantages from a Trading Perspective?
Andrea Germani, Daniel J. Duffy
March 2013
In this blog the authors discuss some of the advantages of C# in computational finance from the perspective of the application developer and trader. As a language, it offers the user-friendliness of VBA and Excel integration on the one hand, and the efficiency and object-oriented features of C++ on the other. This, together with the .NET libraries, suggests that developer productivity will be high and that you can write efficient and robust code. These conclusions have been borne out in practical applications.
We now discuss some of the features in C# and in the .NET Framework that support our claims. To this end, we base our discussion on the ISO 9126 standard, a classification consisting of orthogonal characteristics that describe the quality of software products. In this case we discuss how these characteristics are realised in C#.
. Functionality: this is the capability to satisfy user needs.
The .NET Framework is suitable for a wide range of applications due to its support for OOP and generic programming models. The .NET libraries have native support for containers, database access (ADO.NET) and XML interoperability, as well as LINQ (Language Integrated Query), which allows the developer to perform SQL-like queries on in-memory data structures. The framework is secure because all memory management is taken care of by the garbage collector. This last feature has the advantage that the developer does not have to worry about memory leaks, dangling pointers and other scary things.
. Interoperability: this subcharacteristic of Functionality is concerned with the ability of a system to interact with other (predefined) software.
The .NET Framework supports a multi-language programming model. This means that you can use a number of languages (such as C#, VB.NET and F#) in a given program and compile them into a single executable file. It is also very easy to create Excel worksheet and COM add-ins in C#. Finally, interoperability with native C++ code is also possible in combination with Microsoft's Managed C++ compiler. In other words, we can wrap existing C++ legacy code in the .NET development environment. In one sense, we can say that C# is a one-stop shop.
. Reliability: this is the capability that a system maintains a given level of performance over a given period of time.
In the current case, .NET uses managed code, which means that access to resources is organised by the Common Language Runtime (CLR). In particular, the CLR has a garbage collector that executes as part of your program, reclaiming memory for objects that are no longer referenced. Furthermore, C# is a type-safe language. This means that instances (objects) of types can interact only through protocols that they define. In other words, there is no danger of comparing apples and oranges, as it were.
. Usability: this characteristic refers to the effort that is needed in order to use (as developer) a software system.
C++ and Java developers tend to find C# easy to learn. Typically, you can become competent in the language within weeks. Of course, learning how to use the .NET libraries takes some time. The use of IntelliSense speeds up the development process.
. Efficiency: this characteristic refers to the level of performance and the amount of resources that are needed in order to achieve the performance.
C# code can be compiled to native code, and the just-in-time (JIT) compiler ensures that the code is fast. Furthermore, C# supports multithreading, and .NET has the Task Parallel Library (TPL), which can deliver good speedup for code that is amenable to parallelisation. The perennial discussion on whether C# is faster than C++ is not the central issue in our opinion.
. Maintainability: refers to the effort that is needed to make specific modifications to code in the system. Once a software system has started to gain a foothold with users you will find that the stability, testability and analysability of the code base become more important. OOP and generic programming (GP), if applied in an appropriate fashion, promote the maintainability of your code.
Conclusions
We have found C# and the .NET Framework to be a suitable development environment for finance. They offer tools and libraries that provide an optimal mix of number-crunching ability (similar to C and C++) and libraries for data processing and Excel interoperability (similar to VBA), resulting in measurable advantages for front-office productivity.
Summary of Advantages of C# and the .NET Framework
1) Many native .NET classes, for example containers.
2) Advanced libraries, for example LINQ, multithreading and ADO.NET.
3) Easy to interface to MS Office, in particular Excel.
4) Easy Web and database connectivity.
5) No memory management issues.
6) We delegate to Microsoft developers, who create and optimise new classes for us.
7) One-stop shop; major advances in developer productivity levels.
8) A suitable development environment for computational finance and trading applications.
We discuss C# and its applications to computational finance in the recently published book
"C# for Financial Markets" by Daniel J. Duffy and Andrea Germani, published by John Wiley and Sons, Chichester, 2013.
Web site
http://www.datasimfinancial.com/forum/viewforum.php?f=196&sid=71dedd579d000d4c807a505c0763c6a2
Books | Fri, 15 Mar 2013 15:33:00

The big 60
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2012/8/29/The-big-60
Today I turn the 60 year corner. How time flies.
My family want me to take up golf, tennis, anything except judo, but I just can't seem to get around to it.
Here is a photo taken last night!
Life | Wed, 29 Aug 2012 11:08:00

Wiring diagrams for software systems
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2012/3/29/Wiring-diagrams-for-software-systems
This is an attempt at a wiring diagram (UML component diagram) to describe loosely-coupled systems. You customise/extend the application.
Of course, this is only one form of communication.
Feedback welcome.
Software | Thu, 29 Mar 2012 11:14:00

Skating in the waterways in Noord Holland
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2012/2/4/Skating-in-the-waterways-in-Noord-Holland
Some years ago I posted Anser Anser (geese)
Now my neighbours are skating in the same fields.
Last night it was MINUS 22 Celsius.
General | Sat, 04 Feb 2012 23:05:00

Computing option price: Uncertain volatility versus Black Scholes
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2010/12/31/ssss
This is an example of the Uncertain Volatility Model (UVM, computed using ADE) against Black-Scholes for flat volatilities of 20% and 40%.
General | Fri, 31 Dec 2010 11:33:00

Higher Order PDEs and the Curse of Dimensionality
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2010/3/2/Higher-Order-PDEs-and-the-Curse-of-Dimensionality
Higher Order PDEs and the Curse of Dimensionality
When pricing multi-factor derivatives products, the general consensus is that the partial differential equation (PDE) and finite difference method (FDM) approaches become difficult to apply, and we usually resort to other methods such as Monte Carlo or numerical integration in multiple dimensions. In this blog I will attempt to describe some of the essential problems that we encounter when approximating higher-order PDEs (in the current case, two and three dimensions) by popular FD schemes; we claim that the ADE method is easy to implement and can be parallelised for shared memory systems.
The main topics are:
Formulating a PDE in an unambiguous manner
FD schemes for multi-factor PDEs
Implementation in C++, using Boost data structures and multithreaded programming
In principle, writing down the Black Scholes PDE in n dimensions is the easy part. The complications arise when we define the corresponding boundary condition information. First, we either truncate the region of interest or transform the original region to the unit cube in n dimensions. This latter technique is general and robust, and it can be used instead of, or in combination with, domain truncation. Next, the specification of boundary conditions is made easier by the application of the Fichera theory (which can be seen as a generalization of the well-known Feller condition), especially for the near field, and even for the far field when using domain transformation.
Having defined the PDE problem, we now choose to approximate it using finite differences. For multi-dimensional PDEs we see that the ADI method is popular, while in recent years the Splitting method has been used because of its flexibility in solving complex PDEs. Both methods are examples of MOS (multiplicative operator splitting) methods which implies that we solve a complex PDE as a sequence of simpler, one-dimensional PDEs.
The main disadvantages of MOSs are:
. They introduce splitting error; it is possible to end up with a first-order accurate approximation to the PDE, which is not what we want
. It is difficult to parallelise these schemes; another challenge is to parallelise the tridiagonal LU solver that each leg of the scheme uses. This is a major bottleneck.
. The ADI method in combination with the Crank-Nicolson scheme (for time discretisation) and the Craig-Sneyd method (for mixed derivatives) can produce oscillations and inaccuracies in certain situations
. The specification of conforming boundary conditions (especially for solutions at intermediate time steps) can be problematic
The approach that we have been using is based on AOS (additive operator splitting), and it allows us to write a PDE problem as the sum of partial solutions to the PDE in question (an example of an AOS is the Alternating Direction Explicit (ADE) method, which I described in a recent SSRN paper). The advantages of AOS are:
. The schemes are uniformly stable, explicit and second-order accurate
. No splitting error is introduced and no artificial boundary conditions need be defined
. The scheme is easily parallelized, for example in combination with the C++ OpenMP library
. When implementing high-dimensional PDEs we use the Boost multi_array library which reduces the cognitive burden on the developer
. The method can be used for problems involving mixed derivatives; in these cases we use the Yanenko variant for approximating these terms.
We conclude with a discussion of the accuracy of the ADE scheme and some remarks on speedup.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1552926
In the above article we introduce the theory underlying ADE and we use it to price European and American options. We have also generalized the method to two and three factor (basket) options. The method is accurate, easy to program and is amenable to coarse parallelism. To date, we have seen that the speedup with ADE is between 3 and 5 times that experienced with ADI or Splitting methods which are difficult to parallelise.
These are the initial remarks and findings. I will report on these issues in future articles.
Numerical Methods | Tue, 02 Mar 2010 13:23:00

Wild Geese (Anser anser)
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2009/12/20/Wild-Geese
It has been snowing for the last few days. When I looked out of my window this morning onto the nature reserve I saw these greylag (?) geese (Anser anser).
Just the perfect scenery for the time of year.
General | Sun, 20 Dec 2009 11:47:00

Stable, second order accurate schemes: part I, Equity pricing
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2009/9/3/Stable-second-order-accurate-schemes-part-I-Equity-pricing
One of the most frequently asked questions with PDE models is how to model boundaries and impose boundary conditions.
Some years ago I stumbled on the ADE method, but I needed the elegant Fichera theory in combination with good schemes for convection terms (most books only deal with diffusion; the essential difficulty arises when you combine them).
This working paper addresses these issues, and we compare our results with other numerical methods for a range of one-factor problems, as discussed in the article.
//
http://www.datasimfinancial.com/forum/viewtopic.php?t=289
//
ADE applied to early exercise seems to be particularly fast and accurate, and is a kind of modified Brennan-Schwartz algorithm.
Thanks very much to all those who provided feedback and input. Of course, all errors are my own.
I welcome comments and suggestions. A forthcoming paper will deal with other 1-factor and 2-factor models.
Numerical Methods | Thu, 03 Sep 2009 11:23:00

Statistical Distributions, boost Part III
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2009/4/17/Statistical-Distributions-boost-Part-III
The Boost Math Toolkit contains a number of template classes for a wide range of univariate continuous and discrete probability distributions. We can define a probability distribution by giving its defining parameters and using them in the constructor of the corresponding template class. In general, each distribution has member functions to compute mean and standard deviation while the most extensive functionality is to be found in free (that is, non-member) functions. The library supports the following categories of functions:
. Essential functions (pdf, cdf, cdf complement)
. Measures of central tendency (mean, median, mode, quantile)
. Measures of dispersion (standard deviation, variance)
. Kurtosis, kurtosis excess, hazard functions
The library contains many of the most popular discrete and continuous probability distribution functions that we can use in computational finance. It is worth mentioning that it now has support for the Student's t-distribution, Gamma distribution, Chi-Squared distribution and the Noncentral Chi-Squared distribution.
What are the advantages of this library in our opinion?
. Standardisation (the code has been peer-reviewed and conforms to the boost design standard)
. Quality: the code is efficient, robust and portable. As developer, you use the library without having to be concerned with its maintenance
. Building applications: you can use the classes in the library as part of large software systems
. No more pseudo-code needed: instead of discussing non-runnable code we can use code from Boost.Math directly, thus allowing readers to check the validity of the presented numerical results.
Software | Fri, 17 Apr 2009 17:25:00

The boost Library Part II: Linear Algebra and Matrices
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2009/3/21/The-boost-Library-Part-II-Linear-Algebra-and-Matrices
The boost.uBLAS library supports vector and matrix data structures and basic linear operations on these structures. The syntax closely reflects mathematical notation because operator overloading is used to implement these operations. Furthermore, the library uses expression templates to generate efficient code. The library has been influenced by a number of other libraries such as BLAS, Blitz++, POOMA and MTL and the main design goals are:
. Use mathematical notation whenever appropriate
. Efficiency (time and resource management)
. Functionality (provide features that appeal to a wide range of application areas)
. Compatibility (array-like indexing and use of STL allocators for storage allocation)
The two most important data structures represent vectors and matrices. A vector is a one-dimensional structure while a matrix is a two-dimensional structure. We can define various vector and (especially) matrix patterns that describe how their elements are arranged in memory; examples are dense, sparse, banded, triangular, symmetric and Hermitian. These patterned matrices are needed in many kinds of applications, and they can be used directly in code without you having to create them yourself. Furthermore, we can apply primitive operations on vectors and matrices:
Addition of vectors and matrices
Scalar multiplication
Computed assignments
Transformations
Norms of vectors and matrices
Inner and outer products
We can use these operations in code and applications. Finally, we can define subvectors and submatrices as well as ranges and slices of vectors and matrices.
Vectors and matrices are fundamental to financial and scientific applications, and having a well-developed library such as boost.uBLAS with ready-to-use modules will free up developer time. Seeing that matrix algebra consumes much of the effort in an application, we expect that the productivity gains will be appreciable in general.
The applications of this library to PDE, Monte Carlo and other problems in computational finance are immediately obvious. See
http://www.datasimfinancial.com/forum/viewtopic.php?t=111
Software | Sat, 21 Mar 2009 07:14:00

The boost library and Computational Finance, Part I: Roadmap
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2009/2/22/boost
This is joint work with Dr. Bojan Nikolic. This blog is a summary of what boost 1.38 has to offer. The chances are that what you plan to do (in C++) and spend sleepless hours on has already been done by the boost developers.
Here it comes, brace yourself :)
www.boost.org
//
1. Algorithms
Foreach (avoiding boiler-plate iterators)
BGL (Graph library and graph algorithms)
//
2. Data Types and Data Structures
Any (generic heterogeneous data types)
Tuple (modeling multiple values in one class)
//
3. Data Containers
Array (compile-time one-dimensional arrays)
Bimap (bidirectional maps, many-to-many)
Dynamic bitsets (sets of dynamic bits)
Multi-Array (n-dimensional arrays)
Multi-Index (containers with one or more indices)
Property Map (key/value pairs)
Variant (discriminated union container)
Pointer Container (heap polymorphic objects)
//
4. I/O and Networking
Asio (portable networking, sockets)
Serialization (persistence and marshalling)
//
5. Utilities
Smart Ptr (five smart pointer template classes)
//
6. Mathematics and Numerics
Accumulators (statistical and time series)
Integer (type-safe integers)
Interval (operations on mathematical intervals)
Math (GCD, LCM, complex trigonometric functions)
Statistical distributions (~25 univariates)
Numeric Conversion (policy-based)
Random (a complete system for RNG)
Rational (rational number class)
uBlas (matrix/vector; basic linear algebra routines)
//
7. Higher-Order Programming
Bind (framework for function pointers and functors)
Function (function wrappers/deferred callbacks)
Functional (templates for function object wrappers)
Lambda (small unnamed functions)
Signals (the Observer pattern using signals, slots)
Spirit (LL parser defined as EBNF grammars)
//
8. String and Text Processing
Conversion (polymorphic and lexical casts)
Format (formatting according to a format-string)
Regex (regular expression library)
String Algo (string algorithms library)
Tokenizer (break a string into a series of tokens)
Xpressive (regular expressions and functionality)
//
9. Multithreading and Concurrent/Parallel Programming
Interprocess (shared memory, mutexes, memory maps)
MPI (Message Passing Interface, distributed memory)
Thread (portable C++ multi-threading)
//
10. Template and Template Metaprogramming
Function Types (classify, decompose and synthesize)
Fusion (library for working with tuples)
//
11. Miscellaneous
Date Time (generic date-time libraries)
Exception (transporting arbitrary data)
Flyweight (large quantities of redundant objects)
Tribool (3-state Boolean type library)
Units (dimensional analysis)
//
12. Correctness and Testing
Concept Check (tools for generic programming)
Test (simple program testing, unit testing)
Software | Sun, 22 Feb 2009 13:34:00

Programming Teasers in C++ and Matlab
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2008/4/23/Programming-Teasers-in-C-and-Matlab
I came across a set of programming exercises from the 2007 World Intercollegiate Programming Contest. You could program these in Matlab or C++, for example, but the usefulness of these questions is that they test a number of skills:
. Analysing a problem (what is the problem)
. Designing (how to solve the problem)
. Matlab/C++ (implement the problem)
So, the process is "what to do, how to do, do", in that order. This approach is also good for the thought process; each activity is independent and feeds into its successor.
At the design level, the crucial element is the design of appropriate data structures and the corresponding algorithms that operate on them.
The choice between OO and generic programming is a non-issue because they both can be used.
I would recommend using STL and boost because they have the tools that we use as the plumbing. Matlab also has many libraries.
Testing student knowledge using these kinds of tests is more telling than focusing exclusively on syntax.
The thread on Software and the link to the original ACM questions is:
//
http://www.wilmott.com/messageview.cfm?catid=10&threadid=60837
//
Software | Wed, 23 Apr 2008 07:39:00

The Third Man Tour in Vienna 7-7-07, Part II
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2007/9/3/The-Third-Man-Tour-in-Vienna-7707-Part-II
Here's yours truly in the sewer. If you look at the YouTube films of my previous blog you will see what I mean. Unfortunately, I did not have the crew of Ealing Studios with me, so we had to take the photos with Mrs. Cuchulainn's camera.
This blog is Part II of 2 (the first blog is also on General)
General | Mon, 03 Sep 2007 13:37:00

The Third Man Tour in Vienna 7-7-07
http://www.wilmott.com/blogs/cuchulainn/index.cfm/2007/7/11/The-Third-Man-Tour-in-Vienna-7707
Had a few days off at the weekend in between work duties, so I decided to do the Dritte Man tour.
I describe the sites related to this famous film.
Included are:
- the museum
- the tour of the sewer
- the graveyard
- the bus tour
Here is a glimpse of the film:
http://www.youtube.com/watch?v=enH-A6gD8yw
Unfortunately, Harry did not escape, and neither did Bernard Lee, but I just about managed!
General | Wed, 11 Jul 2007 21:35:00