
Hedging and the need for speed

21 June 2012


Milliman has consistently pushed the limits of computing power in the quest to give clients more useful and actionable results. For example, more than a decade ago, we developed MG-Hedge®, a platform for helping life insurance companies manage the capital markets risk embedded within their products, particularly the hedging of variable annuity (VA) risks. VAs have extremely complex, long-term financial options embedded within them. A typical VA block of a large life insurer may have over a million contracts written to policyholders, each with its own distinct properties. In addition, the complexity of the underlying options requires valuation approaches involving Monte Carlo simulation of many variables over long time horizons. Because of the volatile nature of the capital markets and the dynamic quality of the risks in the product, the valuation models had to be run every day. The volume of computational power used to drive hedging programs was unprecedented in the life insurance industry, and Milliman’s systems led the way.

A key component of Milliman’s success came from having multidisciplinary teams of actuaries, quantitative modelers, and technology developers attack the problem. Working together to leverage their varying backgrounds and expertise led to the development of what has become one of the world’s most-used risk management tools. In addition to human initiative and creativity, MG-Hedge utilizes advanced grid computing systems harnessing hundreds or thousands of central processing units (CPUs).

In the years since the creation of MG-Hedge, technology has continued to advance. Milliman has exploited those advancements to speed the hedging calculations and produce a richer, more robust set of information, both for our clients (to help them manage their hedge programs and financial risks) and for our own internal teams that manage and execute hedging programs (on behalf of more than 30 VA blocks totaling over $75 billion in AUM). The advent of general-purpose graphics processing unit (GPGPU) computing offers a compelling opportunity to go even further.

Enter the GPU

The technology that is beginning to transform the world of high-performance computing, including actuarial modeling, comes from an unlikely place: the world of computer graphics and video games. Graphics processing units (GPUs) were developed to accelerate the rendering of computer graphics. CPUs, the chips that are the “brains” of everything from smartphones to PCs to web servers, are good at performing complex calculations in serial fashion, one after another. GPUs, on the other hand, are designed to do large numbers of relatively simple calculations in parallel, that is, many at the same time (Figure 1).
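
To make the distinction concrete, the sketch below (illustrative only, using the CUDA notation discussed later in this article) expresses the same computation, scaling an array of values, first as a serial CPU loop and then as a GPU kernel in which each thread handles a single element:

    // Serial (CPU): one core visits the elements one after another.
    void scale_cpu(const float* in, float* out, int n, float a) {
        for (int i = 0; i < n; ++i)
            out[i] = a * in[i];
    }

    // Parallel (GPU): thousands of threads each handle one element.
    __global__ void scale_gpu(const float* in, float* out, int n, float a) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
        if (i < n)                                      // guard the final partial block
            out[i] = a * in[i];
    }

    // Launched as, e.g.: scale_gpu<<<(n + 255) / 256, 256>>>(in, out, n, a);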

Massively parallel computing of the kind for which GPUs are built has the potential to transform many fields of endeavor, from petroleum extraction to biomedicine. GPUs are less expensive and more energy-efficient than CPUs and can, under favorable circumstances, outperform them by one to two orders of magnitude. The reason that GPUs have not already become commonplace in high-performance computing is that they were not originally designed for the purpose. General-purpose computing on GPUs has evolved over a period of decades.

Early graphics processors were designed to handle only specific aspects of graphics processing. NVIDIA released the first fully programmable GPU in 1999. Scientific and medical researchers began to experiment with exploiting the parallel processing capabilities of GPUs. However, using GPUs required translating a problem into a graphical domain and programming through specialized graphics interfaces such as OpenGL, both of which proved impractical for general use.

In 2003, a research team developed a programming model called Brook that allowed the use of C, a high-level programming language, to control GPUs. NVIDIA built on the work of that team to create CUDA, released in 2006, which is a complete system for GPGPU computing including a compiler, math libraries, and debugging tools.
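
As a sketch of what the CUDA model looks like in practice, the hypothetical toy program below (not Milliman code) squares a million values on the GPU: the host allocates device memory, copies data across, launches a kernel, and copies the results back. The whole file compiles with NVIDIA's nvcc compiler, e.g., nvcc square.cu -o square.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Kernel: each thread squares one element of the array.
    __global__ void square(float* x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= x[i];
    }

    int main() {
        const int n = 1 << 20;                     // one million values
        const size_t bytes = n * sizeof(float);

        float* h = (float*)malloc(bytes);          // host buffer
        for (int i = 0; i < n; ++i) h[i] = 2.0f;

        float* d;
        cudaMalloc((void**)&d, bytes);                      // device buffer
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);    // host -> device
        square<<<(n + 255) / 256, 256>>>(d, n);             // 4,096 blocks of 256 threads
        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);    // device -> host

        printf("h[0] = %f\n", h[0]);               // expect 4.0
        cudaFree(d);
        free(h);
        return 0;
    }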

Milliman takes the GPU plunge

Although Milliman had explored GPU-based computing for several years, it was not until 2009 that the company decided the technology was mature enough to justify a pilot. This proof-of-concept project produced impressive results, with relatively simple models running up to 200 times faster on GPUs. Subsequent work with realistic, complex models has produced speedups of better than 50 times, and in some cases performance gains still exceed two orders of magnitude. Additionally, continuing rapid improvements in GPU hardware are likely to drive those numbers up over time. Milliman is implementing GPU-based solutions for several clients while continuing research and development to explore how best to leverage GPGPU computing on clients’ behalf.

The technical challenges are diminishing as GPU development tools become more sophisticated and as solution architects come to understand how the GPU platform works. The most significant question going forward is not whether to leverage GPGPU computing, but how to use this huge increase in computational power most effectively. The most obvious benefits are that GPUs cost less than CPUs and are more power-efficient. Reducing processing time also alleviates operational pressure in the face of ever-shrinking deadlines. The real power of GPGPU computing, however, lies in something more than the ability to do the same calculations 100 times faster. The Milliman team is focused on developing a variety of approaches for using the “computational gift” of GPGPU computing strategically. Alongside technical development, Milliman is investing considerable resources in methodologies that make the most of the technology to produce better, timelier, and more actionable information.

In the case of hedging, companies today generally estimate their asset and liability mismatches during a given day using a relatively limited number of stresses and calculations, from dozens to hundreds. With the additional computational power of GPGPU systems, the precision and richness of liability estimation can be increased by one to two orders of magnitude, which will result in improved hedging performance, especially in volatile market environments. And, instead of running hedging analyses overnight, they could be run in a matter of minutes, allowing traders to course correct multiple times in a single day. From a strategic perspective, precise hedging simulations will help companies make better decisions about which risks to hedge or not hedge.
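
As a simplified sketch of how such a liability valuation maps onto the GPU (the guarantee design, drift, and volatility below are illustrative assumptions, not an actual model), each thread simulates one capital markets scenario of a policyholder fund along a lognormal path and records the guarantee shortfall, so tens of thousands of scenarios run concurrently:

    #include <curand_kernel.h>

    // One thread = one Monte Carlo scenario.
    __global__ void simulate_guarantee(float* shortfall, int n_paths,
                                       int n_steps, float s0, float guarantee,
                                       float mu, float sigma, float dt,
                                       unsigned long long seed) {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid >= n_paths) return;

        curandState rng;
        curand_init(seed, tid, 0, &rng);   // independent stream per thread

        float s = s0;
        for (int t = 0; t < n_steps; ++t) {
            float z = curand_normal(&rng); // standard normal draw
            s *= expf((mu - 0.5f * sigma * sigma) * dt
                      + sigma * sqrtf(dt) * z);
        }
        // Shortfall of a simple guarantee: the amount by which the fund
        // finishes below the guaranteed level (zero if it finishes above).
        shortfall[tid] = fmaxf(guarantee - s, 0.0f);
    }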

Beyond hedging, GPGPU computing will empower companies with better tools for forecasting future earnings, reserves, and capital requirements for planning and capital budgeting. Some of the most promising possibilities include:

  • Real-time analytics: Real-time hedging analytics will enable decision making based on the latest market information.
  • Strategy development: Developing and running simulations that compare hedging strategy performance over time and under varying economic scenarios is very computationally intensive. GPUs will allow hedging strategies to be modeled more effectively and will allow company management to make strategic decisions using a better and richer set of information.
  • Balance sheet projections: Calculating balance sheet items at a single point in time is complex. Projecting them forward and across scenarios is orders of magnitude more computationally intensive. Yet the information that comes from being able to project balance sheets is critically important to the financial management of a company, and GPU technology enables this function.
  • Fund management and protection: Milliman is at the forefront of the development of protection strategies, which embed hedging and risk management inside the funds offered to investors and policyholders. These strategies require significant computing power to execute. GPGPU computing will help companies understand fund performance in different market environments, as well as the risk management implications for guarantee products based on those funds.

Watch for curves

Although GPGPU computing is getting easier all the time, there are still several pitfalls to avoid. First, GPU code can be very sensitive to seemingly inconsequential changes. Adding a single variable can significantly reduce the number of processing threads capable of running concurrently. Even the order in which variables are accessed can have a measurable impact on performance.
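
Memory access order is a good illustration of this sensitivity. In the sketch below (illustrative only), the two kernels copy the same amount of useful data, yet they can perform very differently because of how the threads of a warp touch memory:

    // Coalesced: consecutive threads read consecutive addresses, so the
    // hardware can service a warp's loads with a few wide transactions.
    __global__ void copy_coalesced(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i];
    }

    // Strided: consecutive threads read addresses far apart, forcing many
    // separate memory transactions for the same amount of useful data.
    __global__ void copy_strided(const float* in, float* out,
                                 int n, int stride) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i * stride < n) out[i] = in[i * stride];
    }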

While CUDA has made GPUs accessible through commonly known languages, it is not simply a matter of recompiling existing code. The benefits of GPUs lie in parallel processing, so algorithms have to be rethought and recoded accordingly. CPUs still outperform GPUs for serial processing tasks, and GPUs are not likely to replace them; most systems use both in tandem, offloading tasks that can benefit from parallelism to the GPU.
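
The textbook example of such rethinking is a simple sum. On a CPU it is one dependent chain of additions; on a GPU each block must first produce a partial sum that the host, or a follow-up kernel, then combines. The sketch below uses the standard shared-memory tree reduction and assumes a power-of-two block size:

    // Serial sum on the CPU: a single dependent chain of additions.
    float sum_cpu(const float* x, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i) s += x[i];
        return s;
    }

    // Parallel partial sums on the GPU: each block reduces its slice in
    // shared memory by halving the number of active threads at every step.
    __global__ void sum_blocks(const float* x, float* block_sums, int n) {
        extern __shared__ float buf[];
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        buf[tid] = (i < n) ? x[i] : 0.0f;
        __syncthreads();
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // tree reduction
            if (tid < s) buf[tid] += buf[tid + s];
            __syncthreads();
        }
        if (tid == 0) block_sums[blockIdx.x] = buf[0];  // one sum per block
    }

    // Launched as, e.g.:
    //   sum_blocks<<<blocks, threads, threads * sizeof(float)>>>(x, partial, n);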

Conclusion

Using increased computational power effectively requires a strategic approach and a commitment of resources. It is important to look beyond cost reductions, as getting the most from GPUs actually requires significant investment. Effective use requires a disciplined approach to learning the strengths and weaknesses of GPGPU computing at the level at which models are constructed. Organizations that simply want to save money by using GPUs will find rapidly diminishing returns. Milliman is investing time, resources, and capital in market-ready solutions that leverage the power of GPUs to help companies improve their decision making and performance in managing financial risks.

