Tag Archives: C#


“Auto-magic” GPU Programming

We can all agree that GPUs are not as easy to program as CPUs, even though CUDA and OpenCL provide a well-established abstraction of the GPU processor design. GPU support in high-level programming languages such as Java, C# or Python still leaves a lot to be desired. Only developers familiar with C++ can directly access a rich set of APIs and tools for general-purpose GPU programming.

We believe this does not need to be so. This blog post is a sneak preview of the upcoming version 3 of Alea GPU, which sets completely new standards for GPU development on managed platforms such as .NET or the JVM. So what is so exciting about version 3?

  1. It adds higher-level abstractions such as GPU LINQ, a GPU parallel-for and a parallel aggregate, which can build and execute GPU workflows or run delegates and lambda expressions on the GPU. An additional benefit is code reuse: the same delegate can be executed on the GPU as well as on the CPU.

  2. It makes GPU programming much easier and often “auto-magic”: we manage memory allocation and transfers to and from the GPU economically. It is built for developers who grew up without pointers, malloc and free.

  3. We integrate well with the .NET type system, which means for example that .NET arrays can be used directly in GPU kernels.

  4. Of course, for all the hard-core CUDA programmers with .NET affinity, we still support CUDA in C# and F# as we did in our previous versions – even better.

Let’s look at the API from a usability point of view and at the underlying technology and implementation details.

GPU LINQ

LINQ is a technology that extends languages such as C# with powerful query capabilities for data access and transformation. It can be extended to support virtually any kind of data store, including data and collections that reside on a GPU. Alea GPU LINQ introduces new LINQ extensions to express GPU computations as LINQ expressions that are optimized and compiled to efficient GPU code. The main advantages of coding whole GPU workflows with LINQ expressions are:

  1. The execution of GPU LINQ workflows is delayed. They are composable, which facilitates code modularity and reuse.

  2. GPU LINQ workflows provide many standard operations such as parallel aggregation or parallel map, which makes them more expressive and reduces boilerplate code.

  3. We can apply various kernel optimization techniques, such as GPU kernel fusing, which results in fewer GPU kernel launches and more compact GPU code.

  4. Having the full code as an expression allows us to better optimize memory management and data transfer.

Here is a simple but interesting example, which determines the largest value of an array of values on the GPU in two steps. First we index the sequence, then we reduce the new array of indexed values with a custom binary operator that compares the values to find the maximal value and its index. A priori this would require two GPU kernels: the first is a parallel map, the second a parallel reduction. Alea GPU can fuse the two kernels into one.
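A plain CPU LINQ version of the same two steps looks as follows; the GPU workflow has the same map-then-reduce shape, with Alea GPU fusing both stages into a single kernel (the code below uses standard LINQ, not the Alea GPU operators):

```csharp
using System;
using System.Linq;

class MaxWithIndex
{
    static void Main()
    {
        var values = new[] { 3.0, 9.5, 1.2, 7.7 };

        var max = values
            // Step 1: index the sequence (the parallel map).
            .Select((v, i) => (Value: v, Index: i))
            // Step 2: reduce with a binary operator that keeps the larger value
            // (the parallel reduction).
            .Aggregate((a, b) => a.Value >= b.Value ? a : b);

        // max.Value == 9.5, max.Index == 1
        Console.WriteLine($"max {max.Value} at index {max.Index}");
    }
}
```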

A more complex example is the calculation of the fair value of an Asian option with Monte Carlo simulation based on the celebrated Black-Scholes model.

The Monte Carlo simulation runs in multiple batches, each consisting of numSamplesPerBatch samples. Workflows are composable. The outer workflow is a map-reduce: it launches the batch sampling and reduces the batch means to the mean across all batches. The inner workflow does the actual Monte Carlo simulation. It first binds storage to the workflow, which is then populated with normally distributed random numbers. The core algorithm is in the Select: for each sample index i it generates a path and prices the option along the path. The Aggregate method of the inner workflow calculates the batch sample mean with a parallel reduction.

Black Scholes Simulation
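In outline, the batched structure is the following, sketched here as plain CPU LINQ; the delegate pricePath is an illustrative placeholder for the path generation and payoff evaluation, not the actual Alea GPU workflow API:

```csharp
using System;
using System.Linq;

static class MonteCarlo
{
    // Outer workflow: map-reduce over batches.
    // Inner workflow: one batch of numSamplesPerBatch samples.
    public static double FairValue(int numBatches, int numSamplesPerBatch,
                                   Func<Random, double> pricePath)
    {
        var rng = new Random(42);
        return Enumerable.Range(0, numBatches)
            .Select(batch =>
                Enumerable.Range(0, numSamplesPerBatch)
                    .Select(i => pricePath(rng))   // generate a path, price the option
                    .Average())                    // batch sample mean (parallel reduction)
            .Average();                            // reduce batch means to the overall mean
    }
}
```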

GPU Parallel-For and Parallel Aggregate

An alternative abstraction is provided by the GPU parallel-for and parallel aggregate pattern. Together with automatic memory management, they allow us to write parallel GPU code as if we were writing serial CPU code. Usage is very simple: we select a GPU device and pass a delegate to the gpu.For method. All the variables used in the delegate are captured in a closure that is then passed to the parallel-for body. The data is automatically moved to the GPU and the results are automatically brought back to the host.

Parallel-For Closure

The element-wise sum on the GPU is now as simple as this:
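A sketch under the v3 preview API; the exact namespaces (here Alea and Alea.Parallel) and overloads may still change before the release:

```csharp
using Alea;
using Alea.Parallel;

var arg1 = new int[] { 1, 2, 3 };
var arg2 = new int[] { 10, 20, 30 };
var result = new int[arg1.Length];

var gpu = Gpu.Default;
// The captured arrays are moved to the GPU and back automatically.
gpu.For(0, result.Length, i => result[i] = arg1[i] + arg2[i]);
// result is now { 11, 22, 33 }
```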

The delegate accesses the data elements arg1 and arg2 that are defined outside of the loop body and writes the result directly to a .NET array. The runtime system takes care of all the memory management and transfers. Because the delegate does not rely on any GPU-specific features such as shared memory, it can execute on the CPU as well as on the GPU. The runtime system also takes care of selecting the thread block size based on the occupancy of the generated kernel.

The parallel aggregate works in the same way. It requires a binary associative operator, which is used to reduce the input collection to a single value. Our implementation does not require the operator to be commutative. The following code calculates the sum of the array elements on the GPU:
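A sketch under the v3 preview API; the Aggregate extension method and its overload shown here are assumptions based on the preview:

```csharp
using System.Linq;
using Alea;
using Alea.Parallel;

var data = Enumerable.Range(1, 1000).ToArray();

// Reduce the array with a binary associative operator on the GPU.
var sum = Gpu.Default.Aggregate(data, (a, b) => a + b);
// sum == 500500
```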

Automatic Memory Management

Our automatic memory management system handles memory allocation and data movement between the different memory spaces of the CPU and the GPU without the programmer having to manage this manually. It is efficient: unnecessary copy operations are avoided by analyzing the memory accesses. The implementation is based on code instrumentation, a technique that inserts additional instructions into an existing execution path. Alea GPU modifies the CPU code by inserting instructions that monitor array accesses and perform the minimum data transfers between the CPU and GPU. As these runtime checks generate a slight performance overhead, the scope of the analysis is limited to code carrying the attribute [GpuManaged]. Leaving out this attribute never means that data will not be copied; it only means that there may be unnecessary intermediate copies.

To illustrate the automatic memory management in more detail, we look at an example that iterates a parallel-for loop 100 times, incrementing the input by one in each iteration. First we consider the situation without the [GpuManaged] attribute. In this case the data is still copied automatically, although more frequently than necessary due to the limited scope of the analysis.
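A sketch of the unmanaged variant (API names follow the v3 preview and may change):

```csharp
using Alea;
using Alea.Parallel;

static class Example
{
    // Without [GpuManaged]: the data is copied to the GPU and back
    // on every one of the 100 launches.
    public static void Increment(int[] data)
    {
        var gpu = Gpu.Default;
        for (var k = 0; k < 100; k++)
            gpu.For(0, data.Length, i => data[i] += 1);
    }
}
```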

We check the memory copy operations with the NVIDIA Nsight profiler. As expected, the low-level CUDA driver functions cuLaunchKernel, cuMemcpyHtoD_v2 and cuMemcpyDtoH_v2, which launch the kernel and perform the memory copies, are called 100 times each. This means that the data is copied in and out for each of the 100 sequential parallel-for launches. Let us add the attribute [GpuManaged] to turn on automatic memory management.
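The managed variant only differs by the attribute (again a sketch of the preview API):

```csharp
using Alea;
using Alea.Parallel;

static class Example
{
    // With [GpuManaged]: intermediate results stay on the GPU between
    // the sequential launches; only the first input and the final output
    // are copied.
    [GpuManaged]
    public static void Increment(int[] data)
    {
        var gpu = Gpu.Default;
        for (var k = 0; k < 100; k++)
            gpu.For(0, data.Length, i => data[i] += 1);
    }
}
```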

We see that cuMemcpyHtoD_v2 and cuMemcpyDtoH_v2 are now called just once. The reason is that the result data of a preceding GPU parallel-for loop can stay on the GPU for the succeeding parallel-for loop, without copying the intermediate data back and forth to the CPU. Copying is only required for the input of the first GPU execution and for the output of the last GPU computation.

Using .NET Arrays and Value Types in Kernels

For a C# developer it would be very convenient to use .NET arrays and other standard .NET types directly in a GPU kernel, with all the memory management and data movement handled automatically. .NET types are either reference types or value types. A value type holds its data directly, whereas a reference type holds a pointer to the memory location of its data. Structs are value types and classes are reference types. Blittable types are types that have a common representation in both managed and unmanaged memory; in particular, reference types are always non-blittable. Copying non-blittable types from one memory space to another requires marshalling, which is usually slow.

For efficiency reasons we decided to support only .NET arrays with blittable element types, as well as jagged arrays thereof. This is a good compromise between usability and performance. To illustrate the benefits, let's look at how to write an optimized matrix transpose. With Alea GPU version 2 you have to work with device pointers, and all the matrix index calculations have to be done by hand.

Alea GPU version 2 requires that kernels and other GPU resources are in a class that inherits from ILGPUModule. Apart from this the kernel implementation resembles the CUDA C implementation very closely.

With Alea GPU V3 you no longer need to inherit from a base module class. You can work with .NET arrays directly in the kernel, also for the shared memory tile. This saves us the error-prone matrix element index calculations; we only need to map the thread block to the matrix tile.
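A sketch of such a kernel, modeled on the classic CUDA transpose sample. The intrinsic names used here (blockIdx, threadIdx, DeviceFunction.SyncThreads and the __shared__ helper) are written as we remember them from the preview and may differ in the release:

```csharp
using Alea;
using Alea.CSharp; // blockIdx, threadIdx, __shared__, DeviceFunction (preview names)

static class Transpose
{
    private const int TileDim = 32;
    private const int BlockRows = 8;

    private static void TransposeKernel(float[,] at, float[,] a)
    {
        // Shared memory tile as a .NET-style array; the extra column
        // avoids shared memory bank conflicts.
        var tile = __shared__.Array2D<float>(TileDim, TileDim + 1);

        var x = blockIdx.x * TileDim + threadIdx.x;
        var y = blockIdx.y * TileDim + threadIdx.y;
        for (var j = 0; j < TileDim; j += BlockRows)
            tile[threadIdx.y + j, threadIdx.x] = a[y + j, x];

        DeviceFunction.SyncThreads();

        x = blockIdx.y * TileDim + threadIdx.x;
        y = blockIdx.x * TileDim + threadIdx.y;
        for (var j = 0; j < TileDim; j += BlockRows)
            at[y + j, x] = tile[threadIdx.x, threadIdx.y + j];
    }
}
```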

Alea GPU version 2 requires explicit memory allocation, data copying, and calling the kernel with device pointers. An additional inconvenience is that matrices stored in two-dimensional arrays first have to be flattened.

Here is the kernel launch code that relies on automatic memory management. The developer allocates a .NET array for the result and passes it, together with the input matrix, directly to the kernel.
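A sketch of the launch, assuming a transpose kernel TransposeKernel(at, a) as described above (the kernel name, LaunchParam and dim3 constructors are illustrative):

```csharp
using Alea;

var gpu = Gpu.Default;
var rows = 1024;
var cols = 1024;
var a = new float[rows, cols];        // input, an ordinary .NET array
var at = new float[cols, rows];       // result, allocated on the host

// Grid sized to the 32x32 matrix tiles, with 32x8 threads per block.
var lp = new LaunchParam(new dim3(cols / 32, rows / 32), new dim3(32, 8));
gpu.Launch(TransposeKernel, lp, at, a);   // data movement handled automatically
```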

Without compromising usability, the programmer can also work with explicit memory management.
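An illustrative sketch, again assuming a transpose kernel TransposeKernel(at, a); the Allocate overloads and the Free calls are written as we understand the preview API and may differ:

```csharp
using Alea;

var gpu = Gpu.Default;
var hostMatrix = new float[1024, 1024];

var a = gpu.Allocate(hostMatrix);            // fake array living in device memory
var at = gpu.Allocate<float>(1024, 1024);    // fake array for the result

var lp = new LaunchParam(new dim3(32, 32), new dim3(32, 8));
gpu.Launch(TransposeKernel, lp, at, a);

var result = Gpu.CopyToHost(at);             // explicit copy back to a real .NET array
Gpu.Free(a);
Gpu.Free(at);
```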

Here the arrays a and at are fake arrays representing arrays on the GPU device, and they can be used in a GPU kernel in the same way as ordinary .NET arrays. The only difference is that the programmer is now responsible for copying the result back explicitly with CopyToHost. Of course the deviceptr&lt;T&gt; API is still available and often useful for low-level primitives or for writing highly optimized code.

Improved Support for Delegates and Generic Structs

Alea GPU version 3 also has better support for delegates and lambda expressions. Here is a simple generic transform that takes a binary function object as argument and applies it to arrays of input data:
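Such a transform might look like this (a sketch; the class name TransformModule and the exact gpu.For extension are assumptions based on the preview):

```csharp
using System;
using Alea;
using Alea.Parallel;

static class TransformModule
{
    // Applies a binary function object element-wise on the GPU.
    public static void Transform<T>(Gpu gpu, Func<T, T, T> op,
                                    T[] arg1, T[] arg2, T[] result)
    {
        gpu.For(0, result.Length, i => result[i] = op(arg1[i], arg2[i]));
    }
}
```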

We can launch it with a lambda expression as follows:
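For instance (assuming a generic transform like the one described above, here called TransformModule.Transform; the name is illustrative):

```csharp
using Alea;

var x = new double[] { 1, 2, 3 };
var y = new double[] { 4, 5, 6 };
var z = new double[3];

// The lambda is compiled to GPU code at runtime.
TransformModule.Transform(Gpu.Default, (a, b) => a + b, x, y, z);
// z is now { 5, 7, 9 }
```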

The next example defines a struct representing a complex number which becomes a blittable value type.
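A sketch of such a struct (field names are illustrative); because it only contains blittable fields, the struct itself is blittable:

```csharp
// A blittable value type representing a complex number.
public struct Complex
{
    public double Real;
    public double Imag;
}
```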

We define a delegate that adds two complex numbers. It creates the result directly with the default constructor. Note that this delegate is free of any GPU specific code and can be executed on the CPU and GPU alike.
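A sketch, assuming a Complex struct with fields Real and Imag as described above; the object initializer uses the default constructor:

```csharp
using System;

Func<Complex, Complex, Complex> add = (a, b) =>
    new Complex { Real = a.Real + b.Real, Imag = a.Imag + b.Imag };
```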

It can be used in the parallel Gpu.For to perform element-wise complex addition
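For instance (assuming Complex arrays arg1, arg2 and result, and the add delegate described above; the Gpu.For extension is the v3 preview API):

```csharp
using Alea;
using Alea.Parallel;

var arg1 = new Complex[1000];
var arg2 = new Complex[1000];
var result = new Complex[1000];

Gpu.Default.For(0, result.Length, i => result[i] = add(arg1[i], arg2[i]));
```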

or in the above generic transform kernel.

JIT Compiling Delegates to GPU Code

From an implementation point of view, a challenge is that delegates are runtime objects. This means we have to JIT compile the delegate code at runtime. Fortunately, our compiler has had this feature since its initial version. For a delegate such as
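an illustrative example (any lambda that captures local variables behaves the same way):

```csharp
using System;

var n = 1000;
var arg1 = new int[n];
var arg2 = new int[n];
var result = new int[n];

// The lambda captures result, arg1 and arg2 from the enclosing scope.
Action<int> op = i => result[i] = arg1[i] + arg2[i];
```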

the C# compiler will generate a closure class with fields and an Invoke method:
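Conceptually, the generated class has the following shape (the actual class name is compiler-mangled, e.g. &lt;&gt;c__DisplayClass0_0; we call it CompilerGenerated here):

```csharp
// Compiler-generated closure class: one field per captured variable,
// and an Invoke method holding the lambda body.
class CompilerGenerated
{
    public int[] arg1;
    public int[] arg2;
    public int[] result;

    public void Invoke(int i)
    {
        result[i] = arg1[i] + arg2[i];
    }
}
```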

To instantiate the delegate instance, the C# compiler generates code to instantiate the closure class, set its fields, and to create the delegate instance with the closure instance and the method’s function pointer:
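In pseudo-C# this looks roughly as follows:

```csharp
// Pseudo-code: ldftn and methodof stand for IL instructions, not C#.
var closure = new CompilerGenerated();
closure.arg1 = arg1;
closure.arg2 = arg2;
closure.result = result;

var del = new Action<int>(closure, ldftn(methodof(CompilerGenerated.Invoke)));
```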

The above code is just illustrative and not legal C#: ldftn and methodof stand for the actual IL instructions that the C# compiler generates.

Whenever the Alea GPU compiler finds such a delegate, it translates the closure class into a kernel struct and JIT compiles the GPU kernel code from the Invoke method of the compiler-generated class. Alea GPU caches the results of JIT compilations in a dictionary keyed by methodof(CompilerGenerated.Invoke), so it will not compile delegates with the same method multiple times.

One thing needs to be noted: since we translate the closure class into a struct and pass it to the GPU as a kernel argument, it is not possible to change the values of its fields. For example, a delegate like i => result = arg1 does not work.

Code Instrumentation for Automatic Memory Management

The core component of our automatic memory management system is a memory tracker. It tracks .NET arrays and their counterparts residing in GPU device memory. Every array has a flag that indicates if it is out of date. The tracking of an array starts the first time it is used (implicitly in a closure or explicitly as an argument) in a GPU kernel launch. A weak reference table stores for every tracked array the host-out-of-date flag and for every GPU the corresponding device memory, together with the device-out-of-date flag.

The memory tracker has the following methods:

  1. Start tracking a host array
  2. Make an array up to date on a specific GPU
  3. Make an array up to date on host
  4. Make all arrays up to date on a specific GPU
  5. Make all arrays up to date on host

The default procedure is as follows: if an array is used in a kernel launch on a GPU, the tracker makes the array up to date on that GPU by copying it to device memory just before the kernel is launched. After the kernel launch, the tracker makes the array up to date on the host again by copying it back to CPU memory. This very simple strategy always works but often leads to unnecessary memory transfers. The basic idea of our automatic memory management system is to defer the synchronization of a host array with its device counterpart to the point when the host array is actually accessed again. We implement this deferred synchronization strategy with code instrumentation, which inserts additional checks and memory tracker method calls in the right places.

Because instrumentation adds overhead, we narrow down the instrumented ranges. A function can be either GpuManaged or GpuUnmanaged. By default, a function is GpuUnmanaged, which means that it does not defer memory synchronization and its code is not instrumented. If a function has the GpuManaged attribute, we insert code and method calls to track array accesses and defer synchronization. The functions Alea.Gpu.Launch and Alea.Gpu.For are themselves always GpuManaged.

Methods with the attribute GpuManaged are inspected in a post-build process. We check if a function contains IL instructions such as ldelem, ldelema, stelem, call Array.GetItem(), call Array.SetItem(), etc. that access a specific array. If so, we extract the array operand and insert code to defer its synchronization. A standard use case is a loop over all the array elements to set or modify them. In such a case we can optimize the tracking by creating local cache flags. Here is an example:
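An illustrative sketch; the array access pattern is what matters, not the concrete computation:

```csharp
using Alea;
using Alea.Parallel;

static class Managed
{
    [GpuManaged]
    public static void Process(int[] data)
    {
        Gpu.Default.For(0, data.Length, i => data[i] += 1);

        // Host-side loop over the array elements: data must be
        // up to date on the host before the first read.
        for (var i = 0; i < data.Length; i++)
            data[i] *= 2;
    }
}
```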

Instrumentation produces code that is functionally equivalent to the following source code:
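A sketch, assuming a GpuManaged method that first launches a kernel and then reads the array on the host; the cache-flag handling is simplified:

```csharp
using Alea;
using Alea.Parallel;

static class Managed
{
    [GpuManaged]
    public static void Process(int[] data)
    {
        Gpu.Default.For(0, data.Length, i => data[i] += 1);

        var dataIsFresh = false;                     // local cache flag
        for (var i = 0; i < data.Length; i++)
        {
            if (!dataIsFresh)
            {
                MemoryTracker.HostUpToDateFor(data); // copy back only if out of date
                dataIsFresh = true;                  // bypass further checks
            }
            data[i] *= 2;
        }
    }
}
```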

Calling a method like MemoryTracker.HostUpToDateFor() many times to check if an array has to be synchronized would generate a huge overhead. We use the flag to bypass the call once we know the array is synchronized, and reset the flag after kernel launches. At the end of a GpuManaged method, we insert code to bring all out-of-date implicitly tracked arrays back to the host. A frequent case is calling other functions from a GpuManaged function. These callees can be either GpuManaged or GpuUnmanaged. A GpuManaged callee must be notified to keep deferring memory synchronization: we pass the managed session on to the callee, so that it does not bring all out-of-date arrays back to the host, because the GpuManaged session has not yet ended.

The implementation relies on Mono.Cecil and Fody. Here is a sketch of the full code instrumentation that is executed in a post-build step:

  1. Load the compiled assemblies with Mono.Cecil through Fody
  2. For each GpuManaged function
    1. Add memory barrier code
      • for every array element access add cache flag and call to HostUpToDateFor()
      • for GpuManaged functions call SetFlagOnCurrentThread() before, reset all cache flags after
      • for GpuUnmanaged functions call HostUpToDateForAllArrays() before calling them
    2. Add a try/finally block and, in the finally clause, call HostUpToDateForAll() if the caller is GpuUnmanaged
  3. Weave the modified assembly via Fody

Your Feedback

We hope that after reading this post you share our excitement for the upcoming version 3 of the Alea GPU compiler for .NET.

Of course we are interested to hear all of your feedback and suggestions for Alea GPU. Write to us at info@quantalea.com or @QuantAlea on Twitter.

The features that we presented here are still in preview and might slightly change until we finally release version 3.

If you would like to play around with Alea GPU V3 already, come and join us on April 4 at GTC 2016 and attend our tutorial on simplified GPU programming with C#.


GPUs and Domain Specific Languages for Life Insurance Modeling

The Solvency II EU Directive came into effect at the beginning of the year. It harmonizes insurance regulation in the EU with an economic and risk based approach, which considers the full balance sheet of insurers and re-insurers. In the case of life insurers and pension funds, this requires the calculation of the economic value of the liabilities – the contractual commitments the company has to meet – for long term contracts.

Calculating the economic value of the liabilities and capturing the dependence of the liabilities to different scenarios such as movements of the interest rate or changes of mortality cannot be achieved without detailed models of the underlying contracts and requires a significant computational effort.

A Perfect GPU Use Case

The calculations have to be executed for millions of pension and life insurance contracts and have to be performed for thousands of interest rate and mortality scenarios. This is an excellent case for the application of GPUs and GPU clusters.

In addition, variations in the products have to be captured. While implementing separate code for many products is possible, a lot can be gained from abstractions at a higher level.

To solve these problems, we use the following technologies:

  1. The Actulus Modeling Language (AML), a domain specific language for actuarial modeling;
  2. Alea GPU, QuantAlea’s high performance GPU compiler for .NET C# and F#;
  3. The modern functional-first programming language F#.

Armed with these technologies we can significantly improve the level of abstraction, and increase generality. Our system will allow actuaries to be more productive and to harness the power of GPUs without any GPU coding. The performance gain of GPU computing makes it much more practical and attractive to use proper stochastic models and to experiment with a large and diverse set of risk scenarios.

The Actulus Modeling Language

The Actulus Modeling Language (AML) is a domain specific language for rapid prototyping in which actuaries can describe life-based pension and life insurance products, and computations on them. The idea is to write declarative AML product descriptions and from these automatically generate high-performance calculation kernels to compute reserves and cash flows under given interest rate curves and mortality curves and shocks to these.

AML allows a formalized and declarative description of life insurance and pension products. Its notation is based on actuarial theory and reflects a high-level view of products and reserve calculations. This has multiple benefits:

  • The specification of life insurance products and risk models can be handled by actuaries without programming background.
  • A uniform language for product description can guarantee coherence across the entire life insurance company.
  • Rapid experiments with product designs allow faster and less expensive development of new pension products.
  • The same product description can be used for prototyping and subsequent administration, reporting to tax authorities and auditors, solvency computations, etc.
  • The DSL facilitates the construction of tools for automated detection of errors and inconsistencies in the design of insurance products.
  • Reserve calculations can be optimized automatically for given target hardware, such as GPUs, via code generation.
  • Auditors and regulatory bodies such as financial services authorities can benefit from a formalization of products that is independent of the low-level software concerns of administration, efficient computations, and so on.
  • Products and risk models are independent of the technology on which computations are executed. Since pension contracts are extremely long-lived – a contract entered into with a 25-year-old woman today is very likely to still be in force in 2080 – this isolation from technology is very useful.

Actuarial Modeling

The AML system is based on continuous-time Markov models for life insurance and pension products. A continuous-time Markov model consists of a finite number of states and transition intensities between these states. The transition intensity $\mu_{ij}(t)$ from state $i$ to state $j$ at time $t$, when integrated over a time interval, gives the transition probability from state $i$ to state $j$ during the time interval. The Markov property states that future transitions depend on the past only through the current state.

Life insurance products are modeled by identifying states in a Markov model and by attaching payment intensities $b_i(t)$ to the states and lump-sum payments $b_{ij}(t)$ to the transitions.

As an example we consider a product that offers disability insurance. The product can be modeled with three states: active labor market participation, disability, and death. There are transitions from active participation to disability and to death, and from disability to death. Another example is a collective spouse annuity product with future expected cashflows represented by a seven-state Markov model as follows:

markov_model_7_state_policy

Additionally, some products may allow for reactivation, where a previously disabled customer begins active labor again. The product pays a temporary life annuity with repeated payments to the policy holder until some expiration date $n$, provided that he or she is alive. The disability sum pays a lump sum when the policy holder is declared unable to work prior to some expiration $m$.

Reserves and Thiele’s Differential Equations

The state-wise reserve $V_j(t)$ is the reserve at time $t$ given that the insured is in state $j$ at that time. It is the expected net present value at time $t$ of future payments of the product, given that the insured is in state $j$ at time $t$. The principle of equivalence states that the reserves at the beginning of the product should be zero, or the expected premiums should equal the expected benefits over the lifetime of the contract.

The state-wise reserves can be computed using Thiele’s differential equation

$$ \frac{d}{dt} V_j(t) = \left(r(t) + \sum_{k, \, k\neq j} \mu_{jk}(t) \right) V_j(t) - \sum_{k, \, k\neq j} \mu_{jk}(t) V_k(t) - b_j(t) - \sum_{k, \, k\neq j} b_{jk}(t) \mu_{jk}(t) $$

where $r(t)$ is the interest rate at time $t$. Note that the parameters can be divided into three categories: those that come from a product ($b_j$ and $b_{jk}$), those that come from a risk model ($\mu_{jk}$) and the market variables ($r$).

Traditionally, it has often been possible to obtain closed-form solutions to Thiele’s differential equations and then use tabulations of the results. With the more flexible products expressible in AML, closed-form solutions are in general not possible. In particular, by allowing reactivation from disability to active labor market participation mentioned above, one obtains a Markov model with a cycle, and in general this precludes closed-form solutions.

Solving Thiele’s Differential Equations

Good numerical solutions of Thiele’s differential equations can be obtained using a Runge-Kutta 4 solver. A reserve computation typically starts with the boundary condition that the reserve is zero (no payments or benefits) after the insured’s age is 120 years, when he or she is assumed to be dead. Then the differential equations are solved, and the reserves computed, backwards from age 120 to the insured’s current age in fixed time steps.

Here is a code fragment of the inner loops of a simplistic RK4 solver expressed in C#.
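The original fragment is reconstructed here as a sketch consistent with the description below; the convention for storing the annual results is illustrative:

```csharp
using System;

static class Rk4
{
    // Solve V'(t) = dV(t, V) backwards from t = a (e.g. 120) to t = b (e.g. 35),
    // starting from the boundary condition V(a) = 0, and store the state-wise
    // reserves at each whole year. h is the within-year step size, e.g. 0.01.
    public static double[][] SolveThiele(Func<double, double[], double[]> dV,
                                         int states, int a, int b, double h)
    {
        var stepsPerYear = (int)Math.Round(1.0 / h);
        var V = new double[states];                  // boundary condition: zero reserve
        var result = new double[a - b + 1][];
        result[a - b] = (double[])V.Clone();

        for (var y = a; y > b; y--)                  // year by year, backwards
        {
            for (var s = 0; s < stepsPerYear; s++)   // RK4 steps within the year
            {
                var t = y - s * h;
                var k1 = dV(t, V);
                var k2 = dV(t - h / 2, Axpy(V, k1, -h / 2));
                var k3 = dV(t - h / 2, Axpy(V, k2, -h / 2));
                var k4 = dV(t - h, Axpy(V, k3, -h));
                for (var j = 0; j < states; j++)
                    V[j] -= h / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j]);
            }
            result[y - 1 - b] = (double[])V.Clone(); // reserve at year y - 1
        }
        return result;
    }

    private static double[] Axpy(double[] v, double[] k, double c)
    {
        var r = new double[v.Length];
        for (var j = 0; j < v.Length; j++) r[j] = v[j] + c * k[j];
        return r;
    }
}
```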

It computes and stores the annual reserve backwards from y = a = 120 to y = b = 35, where dV is an array-valued function expressing the right-hand sides of Thiele’s differential equations, and h is the within-year step size, typically between 1 and 0.01.

Solving Thiele’s Equations on GPUs

The computations in the Runge-Kutta code have to be performed sequentially for each contract, consisting of the products relating to a single insured life. However, it is easily parallelized over a portfolio of contracts, of which there are typically hundreds of thousands, one for each customer. Thus, reserve and cash flow computations present an excellent use case for GPUs. Using GPUs for reserve or cash-flow computations is highly relevant in practice, because such computations can take dozens or hundreds of CPU hours for a reasonable portfolio size. Even with cloud computing this results in slow turnaround times; GPU computing could make it much more practical and attractive to use proper stochastic models and to experiment with risk scenarios.

The Runge-Kutta 4 solver fits the GPU architecture very well because it uses fixed step sizes and therefore causes little thread divergence, provided the contracts are sorted suitably before the computations are started. By contrast, adaptive-step solvers such as Runge-Kutta-Fehlberg 4/5 or Dormand-Prince are often faster on CPUs. They are more likely to cause thread divergence on GPUs because different input data will lead to different iteration counts in the inner loop. Moreover, the adaptive-step solvers deal poorly with the frequent discontinuities in the derivatives that appear in typical pension products, which require repeatedly stopping and then restarting the solver to avoid a deterioration of the convergence rate.

In preliminary experiments, we have obtained very good performance on the GPU over a CPU-based implementation. For instance, we can compute ten thousand reserves for even the most complex insurance products in a few minutes. The hardware we use is an NVIDIA Tesla K40 GPU Computing Module (Kepler GK110 architecture). The software is a rather straightforward implementation of the Runge-Kutta fixed-step solver, using double precision (64-bit) floating-point arithmetic. The kernels are written, or in some experiments automatically generated, in the functional language F#, and compiled and run on the GPU using the Alea GPU framework.

The F# language is widely used in the financial industry, along with other functional languages. We use it for several reasons:

  1. It provides added type safety and convenience in GPU programming compared to C.

  2. F# is an ideal language for writing program generators, such as generating problem-specific GPU kernels from AML product descriptions.

  3. The project’s commercial partner uses the .NET platform for all development work, and F# fits well with that ecosystem.

For these reasons the Actulus project selected QuantAlea's Alea GPU platform to develop our GPU code. We find that the Alea GPU platform offers excellent performance and robustness. An additional benefit of Alea GPU is its cross-platform capability: the same assemblies can execute on Windows, Linux and Mac OS X.

The chief performance-related problems in GPU programming are the usual ones: How to lay out data (for instance, time-dependent interest rate curves and mortality rates) in GPU memory for coalesced memory access; whether to pre-interpolate or not in such time-indexed tables; how to balance occupancy, thread count and GPU register usage per thread; and so on. Alea GPU is feature complete so that we can implement all the required optimizations to tune the code for maximal performance.

The following graphic shows the number of products processed per second as a function of the batch size, i.e. the number of products computed at the same time:

GPU Throughput and Speedup

The product in question is a collective spouse annuity product with future expected cashflows calculated for a 30-year old insured represented by the seven-state Markov model depicted above. This product is among the most complex to work with. Depending on the modelling details, the current CPU-based production code, running on a single core at 3.4 GHz, can process between 0.75 and 1.03 collective spouse annuity insurance products per second. If we compare this with the GPU throughput of 30 to 50 insurance products per second we arrive at a speed-up factor in the range of 30 to 65.

Benefits of using F#

The computation kernels are implemented in F# using workflows (also known as computation expressions or monads) and code quotations, a feature-complete and flexible way of using the Alea GPU framework. In our experience the resulting performance is clearly competitive with that of raw CUDA C code.

Using F# through Alea GPU permits much higher programmer productivity, both because F#’s concise mathematical notation suits the application area, and because F# has a better compiler-checked type system than C. For instance, the confusion of device pointers and host pointers that may arise in C is avoided entirely in F#. Hence much less time is spent chasing subtle bugs and mistakes, which is especially important for experimentation and exploration of different implementation strategies. The core Runge-Kutta 4 solver looks like this, using code quotations and imperative F# constructs:

At the same time, F#’s code quotations, or more precisely the splicing operators, provide a simple and obvious way to inline a function such as GMMale into multiple kernels without source code duplication:

While similar effects can be achieved using C macros, F# code quotations and splice operators do this in a much cleaner way, with better type checking and IDE support. What is more, F# code quotations allow kernels to be parametrized with both “early” (or kernel compile-time) arguments such as map, and late (or kernel run-time) arguments such as n and isPremium:

Future benefits of using F#

An additional reason for using F# is that in the longer term we want to automatically generate the GPU kernels that solve Thiele’s differential equations. The input to the code generator is a description of the underlying state models (describing life, death, disability and so on) and the functions and tables that express age-dependent mortalities, time-dependent future interest rates, and so on. As a strongly typed functional language with abstract data types, pattern matching, and higher-order functions, the F# language is supremely suited for such code generation processes. The state models and auxiliary functions are described by recursive data structures (so-called abstract syntax), and code generation proceeds by traversing these data structures using recursive functions.

Also, the F# language supports both functional programming, used to express the process of generating code on the host CPU, and imperative programming, used to express the computations that will be performed on the GPU. In other words, high-level functional code generates low-level imperative code, both within the same language, which even supports scripting of the entire generate-compile-load-run cycle:

The code generation approach will help support a wide range of life insurance and pension products. There are of course alternatives to code generation: First, one might hand-write the differential equations for each product, but this is laborious and error-prone and slows down innovation and implementation, or severely limits the range of insurance products supported. Secondly, one might take an interpretive approach, by letting the (GPU) code analyze the abstract syntax of the product description, but this involves executing many conditional statements, for which the GPU hardware is ill-suited as it may lead to branch divergence. Hence code generation is the only way to support generality while maintaining high performance.

Thanks

This work was done in the context of the Actulus project, a collaboration between Copenhagen University, the company Edlund A/S, and the IT University of Copenhagen, funded in part by the Danish Advanced Technology Foundation contract 017-2010-3. Thanks are due to the many project participants who contributed to AML and in particular due to Christian Gehrs Kuhre and Jonas Kastberg Hinrichsen for their many competent experiments with GPU computations for advanced insurance products. Quantalea graciously provided experimental licenses for Alea GPU and supported us in various GPU related aspects.

About the Authors

Dr. Peter Sestoft is professor of software development at the IT University of Copenhagen. His research focuses on programming language technology, functional programming (since 1985), and parallel programming, in particular via declarative and generative approaches.

Dr. Daniel Egloff is partner at InCube Group and Managing Director of QuantAlea, a Swiss software engineering company specialized in GPU software development. He studied mathematics, theoretical physics and computer science and worked for more than 15 years as a quant in the financial service industry.

Follow @EgloffDaniel and @QuantAlea on Twitter.

[Alea GPU] http://www.quantalea.com

[Christiansen 2014] Christiansen, Grue, Niss, Sestoft and Sigtryggsson: An Actuarial Programming Language for Life Insurance and Pensions. International Congress for Actuaries 2014, Washington DC.


Webinar on how to Accelerate .NET Applications with GPUs

Software companies use frameworks such as .NET to target multiple platforms from desktops to mobile phones with a single code base in order to reduce costs by leveraging existing libraries and to cope with changing trends. While developers can easily write scalable parallel code for multi-core CPUs on .NET, they face a bigger challenge using GPUs to tackle compute intensive tasks.

Alea GPU closes this gap by bringing GPU computing directly into the .NET ecosystem.

Register here for a free webinar where you can learn how to write great cross-platform GPU-accelerated .NET applications in any .NET language, far more easily than ever before.


Alea GPU 2.1 Released

Not long ago we shipped Alea GPU 2.0, which was a major step forward for GPU computing on .NET. Today we can announce Alea GPU 2.1. It is a maintenance release, but it also brings some interesting new features.

First of all, Alea GPU 2.1 integrates cuDNN, a GPU-accelerated library of primitives for deep neural networks, which are very much in vogue these days.

The new version also supports printing from GPU kernels, either with printf/printfn in F# or Console.Write/Console.WriteLine in C# based GPU kernels. This is a very handy tool for quickly debugging GPU kernels or for understanding them more thoroughly.
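A sketch of what device-side printing could look like in an F# kernel (kernel body only; the launch plumbing is omitted and the kernel itself is our invention):

```fsharp
// Sketch: printing from an F# GPU kernel.
[<ReflectedDefinition>]
let debugKernel (data : deviceptr<float>) (n : int) =
    let i = blockIdx.x * blockDim.x + threadIdx.x
    if i < n then
        // printfn is translated to a device-side printf
        printfn "thread %d sees %f" i data.[i]
```

Since every active thread prints, this is best used on small launches or guarded by a thread-index condition.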

Also important is support for IntPtr in malloc and int64 indexing, which makes it possible to address device memory beyond the 4 GB boundary.

Finally, some experts requested support for atomicCAS and __shfl_xor. Unfortunately atomicCAS has an issue on Linux which could not be resolved in time. We hope it will be fixed with the upgrade to CUDA 7.5, which will be released this summer.

The Alea Tutorial will be updated soon as well, with examples showing how to use cuDNN directly with Alea GPU.


Play with Particles III – Visualize using OpenGL

In the last two posts we implemented three different solvers for the n-body problem, one on the CPU and two different GPU versions. In this post we visualize the n-body simulation graphically with OpenGL to illustrate the cross platform capabilities of Alea GPU and its interoperability with OpenGL.

A Simple OpenGL Example with OpenTK

There are essentially two 3D visualization technologies: Microsoft’s DirectX and OpenGL. DirectX targets the Windows platform; examples showing the interoperability of Alea GPU and DirectX can be found in the Alea GPU sample gallery. The benefit of OpenGL is platform independence. A good starting point is the OpenGL tutorial.

In this blog article we use OpenGL through OpenTK which wraps OpenGL for .NET. You might find the OpenTK documentation, the OpenTK tutorial, how to set up OpenTK, and this OpenTK example useful resources.

We introduce OpenTK with a minimal example which renders a triangle. Add the OpenTK NuGet package to your project and reference System.Drawing. Then open the following namespaces:
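The required namespaces (assuming the classic OpenTK API surface) are:

```fsharp
open System.Drawing          // Color
open OpenTK                  // GameWindow, Matrix4, Vector3, MathHelper
open OpenTK.Graphics.OpenGL  // GL and the OpenGL enums
```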

Create a new class OpenTKExample which inherits from the OpenTK class GameWindow:
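A minimal version of this class could look as follows (the window size is our choice, not prescribed by the original sample):

```fsharp
// A GameWindow subclass; 800x600 is an arbitrary initial client size.
type OpenTKExample() =
    inherit GameWindow(800, 600)
```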

Then overwrite the following methods:

On load set the Background to DarkBlue:
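A sketch of the OnLoad override:

```fsharp
    override this.OnLoad e =
        base.OnLoad e
        // clear color used as the window background
        GL.ClearColor Color.DarkBlue
```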

On resize of the window use the whole area to paint and set the projection:
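The OnResize override could look like this (the field of view and clip planes are our choice):

```fsharp
    override this.OnResize e =
        base.OnResize e
        // use the whole client area to paint
        GL.Viewport(0, 0, this.Width, this.Height)
        // a simple perspective projection
        let aspect = float32 this.Width / float32 this.Height
        let mutable proj =
            Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
                                                 aspect, 0.1f, 100.0f)
        GL.MatrixMode MatrixMode.Projection
        GL.LoadMatrix(&proj)
```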

OnRenderFrame is where all the drawing happens. Clear the buffer, then draw a triangle and three points with different colors. Instead of a triangle we could also draw many other figures, such as points, lines, etc. End the triangle mode and swap the buffers.
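A sketch of the OnRenderFrame override, using legacy immediate-mode OpenGL (the vertex coordinates and colors are arbitrary):

```fsharp
    override this.OnRenderFrame e =
        base.OnRenderFrame e
        GL.Clear(ClearBufferMask.ColorBufferBit ||| ClearBufferMask.DepthBufferBit)
        GL.MatrixMode MatrixMode.Modelview
        GL.LoadIdentity()
        // a triangle with one color per vertex
        GL.Begin PrimitiveType.Triangles
        GL.Color3 Color.Red;   GL.Vertex3(-1.0f, -1.0f, -3.0f)
        GL.Color3 Color.Green; GL.Vertex3( 1.0f, -1.0f, -3.0f)
        GL.Color3 Color.Blue;  GL.Vertex3( 0.0f,  1.0f, -3.0f)
        GL.End()
        // three points in different colors
        GL.Begin PrimitiveType.Points
        GL.Color3 Color.Yellow;  GL.Vertex3(-0.5f, 0.0f, -3.0f)
        GL.Color3 Color.Cyan;    GL.Vertex3( 0.0f, 0.5f, -3.0f)
        GL.Color3 Color.Magenta; GL.Vertex3( 0.5f, 0.0f, -3.0f)
        GL.End()
        // present the back buffer
        this.SwapBuffers()
```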

Add a function creating an instance of our OpenTKExample class and running it:
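For example (the 60 frames-per-second target is our choice):

```fsharp
let runOpenTKExample () =
    use window = new OpenTKExample()
    window.Run(60.0)   // target 60 updates per second
```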

The result is a colored triangle on a dark blue background:

OpenTKTriangle

We will use the same structure in order to display our particles.

Displaying Particles Directly on the GPU

The main difference between the simple example above and the n-body simulation is that in the n-body simulation the data already resides in GPU memory, and we do not want to copy it from the GPU to the CPU and back into an OpenGL buffer just to display the particles. This requires some infrastructure code. First we show how to create two buffers accessible from both OpenGL and Alea GPU, in which we save our positions:

  1. Generate an array consisting of GLuint.
  2. Create a buffer using GL.GenBuffers.
  3. For every element of the array:
    1. bind the buffer;
    2. allocate the memory;
    3. get the buffer-size;
    4. unbind the buffer;
    5. register the buffer with cuGLRegisterBufferObject available from Alea GPU.
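The steps above could be sketched as follows. The buffer count, the float4 position layout, and the exact signature of the cuGLRegisterBufferObject wrapper are assumptions:

```fsharp
// Sketch of the buffer setup following the five steps above.
let createPosBuffers (numBodies : int) =
    let buffers = Array.zeroCreate<uint32> 2             // 1. GLuint handles
    GL.GenBuffers(buffers.Length, buffers)               // 2. create buffers
    for handle in buffers do                             // 3. for every buffer:
        GL.BindBuffer(BufferTarget.ArrayBuffer, handle)          // bind
        // positions stored as float4, i.e. 4 floats per body
        let size = nativeint (numBodies * 4 * sizeof<float32>)
        GL.BufferData(BufferTarget.ArrayBuffer, size,
                      nativeint 0, BufferUsageHint.DynamicDraw)  // allocate
        let mutable actualSize = 0
        GL.GetBufferParameter(BufferTarget.ArrayBuffer,
                              BufferParameterName.BufferSize,
                              &actualSize)                       // buffer size
        GL.BindBuffer(BufferTarget.ArrayBuffer, 0u)              // unbind
        cuSafeCall (cuGLRegisterBufferObject handle)             // register
    buffers
```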

We now have two buffers which can be accessed from OpenTK and Alea GPU. We also need some resource pointers corresponding to the buffers. We obtain them by calling the Alea GPU function cuGraphicsGLRegisterBuffer inside a cuSafeCall:

If we work with the buffers outside of OpenTK we need to lock their positions. We therefore write a function lockPos which locks the positions, calls a function f on the positions and unlocks the positions again:
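A pseudocode-level sketch of lockPos, with simplified wrapper signatures and an assumed helper `mappedPointerOf` for retrieving the mapped device pointer of a resource:

```fsharp
// Map the registered resources, hand the mapped device pointers to f,
// then unmap again. resources holds the two graphics resource pointers.
let lockPos (f : deviceptr<float4> -> deviceptr<float4> -> unit) =
    cuSafeCall (cuGraphicsMapResources(2u, resources, 0n))    // lock
    let oldPos = mappedPointerOf resources.[0]                // assumed helper
    let newPos = mappedPointerOf resources.[1]
    f oldPos newPos                                           // work on positions
    cuSafeCall (cuGraphicsUnmapResources(2u, resources, 0n))  // unlock
```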

To share the buffers we require an Alea.Worker on the same context as used by OpenTK. The following function creates a CUDA context on the machine’s default device:

We use this function to create an Alea.Worker on the same context using the same CUDA device.

We can now initialize the positions using the lockPos function and our newly generated worker:

Recall that we read from oldPos and write to newPos in the GPU implementation. We need to swap the buffers before each integration step using the function swapPos:
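Conceptually, swapPos just exchanges the roles of the two position buffers, so that each step reads the positions written by the previous one. A sketch with assumed index names:

```fsharp
// Indices into the two-element buffer array created earlier.
let mutable oldPosIdx = 0
let mutable newPosIdx = 1

let swapPos () =
    let tmp = oldPosIdx
    oldPosIdx <- newPosIdx
    newPosIdx <- tmp
```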

In the OnRenderFrame method we swap the buffers and perform an integration step:

We bind the buffer and draw the positions:

Implementation Details

We point out some implementation details. To use the different GPU implementations and test them for different block sizes we introduce a queue of ISimulator objects. During simulation we walk through the queue with an “S” key down event.

We create the simulators and put them into the queue. Note that we also return a dispose function to clean up the simulators at the end:

Here is the logic to switch between simulators:
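The switching logic can be sketched like this (the queue rotation is our illustration; names are hypothetical):

```fsharp
open System.Collections.Generic

// On an "S" key-down event, take the next simulator from the front of the
// queue and re-enqueue it at the back, so repeated presses cycle through all.
let switchSimulator (simulators : Queue<ISimulator>) =
    let next = simulators.Dequeue()
    simulators.Enqueue next
    next   // becomes the current simulator
```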

We use Matrix4.LookAt to inform OpenTK that our viewing position is (0,0,50) and that the viewing direction is along the z axis:
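A sketch of the view setup:

```fsharp
let mutable view =
    Matrix4.LookAt(Vector3(0.0f, 0.0f, 50.0f),  // viewing position (0,0,50)
                   Vector3.Zero,                // look at the origin
                   Vector3.UnitY)               // "up" direction
GL.MatrixMode MatrixMode.Modelview
GL.LoadMatrix(&view)
```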

These additional comments should be helpful to understand how the positions are displayed using OpenTK and how the data is directly read from GPU memory. The previous blog posts explain the physics, the CPU implementation, the two GPU implementations and their differences. All that remains is to run the example.