Programming experiments

Metro App and Effect

Today I thought it was time I got better with the shader language (HLSL)!

One problem I have had with shaders so far is that most samples on the web use Effects. Sadly, not only are Effects not part of the DirectX SDK (or the Windows SDK, into which it has been integrated on Windows 8), they are simply not allowed in Metro apps, where you can’t use the runtime shader compiler, (allegedly ^^) for security reasons.

Thankfully Frank Luna came to the rescue! (Frank Luna is the author of the DirectX book I am currently learning from.)

He posted a PDF on how to port an example from his book to Metro here; there is a section on Effects towards the bottom.

Just in case Frank Luna moves his files around, I also downloaded it and attached it below!


Also, in shaders the in / out vertex structures are tagged with semantic names, which come from a well-known predetermined set. A bit of Googling found the list of semantic names; good to know, here is the link on MSDN!


Lastly, in most samples I have seen or read so far (it’s not that many, granted! ^^), shader writers seem to have something against conditional statements.. :~

They use Effects to compile multiple shaders at once, depending on some condition (i.e. one Effect = multiple shaders!!).

But in WinRT there are no Effects, and dynamic linkage is not supported either.

I decided I will write my shaders with conditional statements!! We’ll see how it goes! ^^

Tags: ,
Categories: WinRT | DirectX
Permalink | Comments (0) | Post RSS | RSS comment feed

Hit Testing


This time, for my DirectX self-training, I implemented hit testing, frustum culling and octrees.

In the figure above, the red triangle is the one the user hit with the mouse. All the code is on CodePlex at http://directwinrt.codeplex.com/, and the screenshot is from Sample2_D2D.



As I described in my previous blog entry, calling into C++ is costly, even with WinRT C++/CX. By that I mean that crossing the ABI divide between .NET and C++ is (slightly) expensive, not that C++ itself is slow!

That was all the motivation it took for me to port all my math code to C#.

Furthermore, hit testing needs to be done in code, i.e. there is no help from DirectX here.

The math being rather long to explain, I will just describe what hit testing and frustum culling are, outline the basic steps involved, and let the reader refer to the full source code on CodePlex (and to Google, when an explanation of the inner mathematical workings is needed).


But first, a word of warning.


Note on coordinate system

When one peruses the code, one might believe there are some elementary sign errors in my math, and while there could be, there are 2 initial sources of confusion to be aware of!


In 3D, DirectX uses a left-handed coordinate system (as described on MSDN), whereas in normal math courses a right-handed coordinate system is used!



Also, another source of confusion: while the Y axis goes up in 3D (as seen above), it goes down in 2D!



The Ray struct represents a half-line with a point of origin and a direction:

public struct Ray
{
    public Vector3F Origin;
    public Vector3F Direction;

    public static Ray operator *(Matrix4x4F m, Ray r) { /* */ }
}

A space transformation matrix can also be applied to a ray, to change its coordinate system to that of a particular object of interest.


When the user clicks on the screen, the code can ask the camera to create a shooting ray from the camera location in the direction of the point clicked on screen:

public class Camera : ModelBase
{
    public Ray RayAt(DXContext ctxt, float x, float y) { /* */ }
}

And then it can check whether this ray intersects any geometry, and how far that geometry is from the ray’s source. This is the process of “hit testing”: testing whether, when the user clicks the screen, a geometry under the mouse was “hit”.


Rays have various methods to calculate whether they intersect some objects, and the distance to a given mesh (a list of triangles):

public struct Ray
{
    public bool IntersectBox(Box3D box) { /* */ }
    public bool IntersectSphere(Vector3F p, float radius) { /* */ }
    public bool IntersectTriangle(Triangle3D triangle, out float dist) { /* */ }

    public HitMeshResult IntersectMesh(IEnumerable<Triangle3D> mesh) { /* */ }

    public struct HitMeshResult
    {
        public bool Hit;
        public float Distance;
        public int Triangle;
    }
}

Of course one should make sure that the ray and the mesh are in the same coordinate system!
Often the mesh is loaded from a model and not edited after that; instead, a world transformation matrix is applied to it. In that case one should multiply the ray by the inverse of the object’s world transformation (provided the space is not scaled) before calculating the intersection distance.
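For reference, the triangle test at the heart of a method like IntersectTriangle is commonly the Möller–Trumbore algorithm. Here is a self-contained C++ sketch of it; the Vec3 type and helpers are mine, not the library’s, but the C# version works the same way:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller–Trumbore: does the half-line (orig, dir) hit triangle (v0, v1, v2)?
// On a hit, *dist receives the distance along the ray.
bool IntersectTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* dist)
{
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;  // ray parallel to the triangle plane
    float inv = 1.0f / det;
    Vec3 t = sub(orig, v0);
    float u = dot(t, p) * inv;               // first barycentric coordinate
    if (u < 0 || u > 1) return false;
    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * inv;             // second barycentric coordinate
    if (v < 0 || u + v > 1) return false;
    float d = dot(e2, q) * inv;              // distance along the ray
    if (d < 0) return false;                 // behind the origin: it is a half-line
    *dist = d;
    return true;
}
```

The barycentric coordinates u and v double as the inside-the-triangle test, which is why no separate point-in-triangle check is needed.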


Doing an IntersectMesh on a big mesh (i.e. a mesh with lots of triangles) is expensive (the method computes an intersection distance for each and every triangle of the mesh). To improve on that, various techniques can be employed:

  • First check that the bounding box of the mesh is hit (a quick eliminating test).
  • Place the meshes in some kind of spatial index and only test the meshes that are likely to be hit; I detail this idea below in my section about octrees.
  • Index the triangles of the mesh itself in a spatial index, often an AABB tree (AABB: Axis-Aligned Bounding Box), to quickly find the triangles that should be hit tested instead of enumerating them all. This I didn’t implement at this time. Remark: those particular indexes might be what makes AABB tree / octree examples on the internet rather complicated: the extra per-mesh vertex info.
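The quick bounding-box rejection from the first bullet is typically the classic “slab” test; here is a minimal C++ sketch of it, with my own assumed types:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Slab test: intersect the half-line (orig, dir) with the axis-aligned box [bmin, bmax].
// For each axis, clip the ray against the two parallel planes (a "slab");
// the ray hits the box iff the three clipped t-intervals still overlap.
bool IntersectBox(Vec3 orig, Vec3 dir, Vec3 bmin, Vec3 bmax)
{
    float tmin = 0.0f, tmax = 1e30f;          // half-line: t >= 0
    const float o[3]  = { orig.x, orig.y, orig.z };
    const float d[3]  = { dir.x,  dir.y,  dir.z  };
    const float lo[3] = { bmin.x, bmin.y, bmin.z };
    const float hi[3] = { bmax.x, bmax.y, bmax.z };
    for (int i = 0; i < 3; ++i)
    {
        if (d[i] == 0.0f)                     // parallel to this slab
        {
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
            continue;
        }
        float t1 = (lo[i] - o[i]) / d[i];
        float t2 = (hi[i] - o[i]) / d[i];
        tmin = std::max(tmin, std::min(t1, t2));
        tmax = std::min(tmax, std::max(t1, t2));
        if (tmin > tmax) return false;        // intervals no longer overlap
    }
    return true;
}
```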



AABB trees, octrees, quadtrees: they are all spatial search trees; 3D space for the octree, 2D space for the quadtree, and any dimension for the AABB tree. The only difference between them is the way a tree node divides the space amongst its children.

While we are on the topic, I just want to mention that BSP trees are different: they build an ordered tree of objects (instead of partitioning space).

A quadtree node divides its 2D space amongst its children into 4 equal-size rectangles. An octree divides its 3D space into 8 equal-size octants. An AABB tree divides it in 2, along the node’s longest axis. Below is an example quadtree:


Seeing that they all behave more or less the same, I have a SpatialIndex<TItem, TBounds> base class; subclasses only need to define how they partition the space. I have only implemented QuadTree and OctTree so far, but I did take a page from the AABB tree: I don’t always divide space in 4 or 8 (respectively) but apply some heuristic. Consequently, inserts might be more costly… but it should improve memory use, keep the same or better query time, and also improve some inserts by producing more meaningful boxes (i.e. equally sized, if possible).

Here is the relevant part of the implementation

public abstract class SpatialIndex<TItem, TBounds> : ICollection<TItem>
    where TItem : IHasBounds<TBounds>
    where TBounds : IBounds
{
    public IEnumerable<TItem> Query(TBounds r) { /**/ }
    public IEnumerable<TItem> Query(Predicate<TBounds> intersects) { /**/ }
}

public class OctTree<T> : SpatialIndex<T, Box3D>
    where T : IHasBounds<Box3D>
{ /* */ }

public class QuadTree<T> : SpatialIndex<T, Box2D>
    where T : IHasBounds<Box2D>
{ /* */ }

public interface IHasBounds<T>
    where T : IBounds
{
    T Bounds { get; }
}

public interface IBounds
{
    bool Contains(IBounds b);
    bool Intersects(IBounds b);
    float MaxLength { get; }
}

As you can see, both OctTree and QuadTree are collections of items with bounds.

And they have (efficient) methods to query the objects they contain with an intersection box or an intersection predicate.
Remark: the tree is thread safe for reading and querying.


Remark: this seems simple, because it is!
Most octree/AABB tree implementations on the internet are complicated because they are used to index a mesh’s triangles, to further optimize hit testing by testing only some of a mesh’s triangles instead of all of them. This is something I left for later.
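To back the “simple because it is” claim, here is a toy quadtree in that same spirit, sketched in C++ rather than the C# of the library (point items only, a fixed split into 4 equal rectangles, and none of the heuristics mentioned above):

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Rect
{
    float x, y, w, h;
    bool Contains(float px, float py) const
    { return px >= x && px < x + w && py >= y && py < y + h; }
    bool Intersects(const Rect& r) const
    { return x < r.x + r.w && r.x < x + w && y < r.y + r.h && r.y < y + h; }
};

// A node holds up to Capacity points, then splits into 4 equal children.
class QuadTree
{
    static const std::size_t Capacity = 4;
    Rect bounds;
    std::vector<std::pair<float, float>> points;
    std::unique_ptr<QuadTree> child[4];

public:
    explicit QuadTree(Rect b) : bounds(b) {}

    bool Insert(float px, float py)
    {
        if (!bounds.Contains(px, py)) return false;
        if (!child[0] && points.size() < Capacity) { points.push_back({px, py}); return true; }
        if (!child[0]) Split();
        for (auto& c : child)
            if (c->Insert(px, py)) return true;
        return false; // unreachable: some child always contains the point
    }

    // Collect every point inside r, skipping whole subtrees that cannot match.
    void Query(const Rect& r, std::vector<std::pair<float, float>>& out) const
    {
        if (!bounds.Intersects(r)) return;
        for (auto& p : points)
            if (r.Contains(p.first, p.second)) out.push_back(p);
        if (child[0])
            for (auto& c : child) c->Query(r, out);
    }

private:
    void Split()
    {
        float hw = bounds.w / 2, hh = bounds.h / 2;
        child[0].reset(new QuadTree({ bounds.x,      bounds.y,      hw, hh }));
        child[1].reset(new QuadTree({ bounds.x + hw, bounds.y,      hw, hh }));
        child[2].reset(new QuadTree({ bounds.x,      bounds.y + hh, hw, hh }));
        child[3].reset(new QuadTree({ bounds.x + hw, bounds.y + hh, hw, hh }));
    }
};
```

Query prunes any subtree whose bounds do not intersect the query rectangle; that pruning is where the speed-up over a flat list comes from.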


Finally, the octree brought me to the next enhancement. When one renders a scene, one wants to render only what’s needed (for performance! DirectX can sort out what’s not visible, but it costs some time).


Frustum culling

A frustum is a portion of a solid that lies between 2 parallel planes cutting it.


In 3D graphics, the frustum refers to the volume that is viewed by the camera. And frustum culling means drawing only what is in that volume, culling the rest.

Again, one can query the camera for the frustum. One can even do rectangle selection on the screen by passing a 2D box.

public class Camera
{
    public Frustum GetViewFrustum(DXContext ctxt, Box2D area) { /***/ }
    public Frustum GetViewFrustum() { /***/ }
}

Because Frustum defines intersection methods, it can be used to query an octree for the visible objects and render only what’s needed, as in:

public override void Render(D3DGraphic g)
{
    // set up the graphic...
    var frustum = CameraController.Camera.GetViewFrustum();
    foreach (var item in octTree.Query(b => frustum.Intersects(b)))
    {
        // draw the item ...
    }
}
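Under the hood, a test like frustum.Intersects usually reduces to checking a bounding volume against the frustum’s 6 planes. Here is a common conservative sphere-vs-frustum sketch in C++ (the types and the plane convention are my assumptions, not the library’s):

```cpp
struct Vec3 { float x, y, z; };
// Plane n·p + d = 0, with the normal pointing INSIDE the frustum.
struct Plane { Vec3 n; float d; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Conservative test: a sphere is culled only when it is fully outside
// one of the 6 frustum planes; otherwise it is kept and drawn.
bool SphereInFrustum(const Plane planes[6], Vec3 center, float radius)
{
    for (int i = 0; i < 6; ++i)
    {
        float dist = dot(planes[i].n, center) + planes[i].d; // signed distance
        if (dist < -radius)
            return false; // completely on the outside of this plane
    }
    return true; // inside or straddling: keep it
}
```

“Conservative” means a sphere near an edge or corner may occasionally be kept even though it is invisible; that only wastes a little GPU time, it never drops a visible object.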


Hit Testing a scene in … Parallel

Now that we have an octree to index items by location and some methods to test for hits, there is one last optimization that comes to mind when doing hit testing: how about parallelizing it?

The idea is to hit test each mesh against the hit ray in its own thread and aggregate the results at the end. There is one special trick: we need to return some data!
There is a Parallel.ForEach overload for that! Basically, it passes a value around at each iteration to aggregate a thread-local result, and the code provides a final aggregation method to merge all those partial results when all the elements have been tested.

Here is a relevant code fragment:

void HitTest()
{
    // ...
    // the ray of interest
    var ray = RayAt(ctxt, x, y);

    // will be used to coordinate threads when aggregating the final results
    object hitlock = new object();
    // values that are calculated
    float min = float.MaxValue;
    hitMesh = null;
    // let's hit, REMARK the octree query!
    Parallel.ForEach(
        octTree.Query(b => ray.IntersectBox(b)),
        () => (Tuple<HitItem, Ray.HitMeshResult>)null, // initial thread-local result
        (it, state, local) =>
        {
            // convert the ray to mesh coordinates,
            // instead of converting ALL vertices to world transform!!!
            var mray = it.Transform.Invert() * ray;
            var result = mray.IntersectMesh(EnumSphereTriangle());
            // no hit? return the current result
            if (!result.Hit)
                return local;

            // hit! now let's do the (thread-local) aggregation

            // no previous result
            if (local == null)
                return Tuple.Create(it, result);
            // merge with the previous result: keep the nearest hit
            if (result.Distance < local.Item2.Distance)
                return Tuple.Create(it, result);
            return local;
        },
        r =>
        {
            if (r == null)
                return;
            lock (hitlock)
            {
                // now all threads have completed, merge the results
                if (hitMesh == null || r.Item2.Distance < min)
                {
                    min = r.Item2.Distance;
                    hitMesh = r.Item1;
                    iHitTriangle = r.Item2.Triangle;
                }
            }
        });
}




This time I described hit testing, frustum culling and octrees, which help make those operations faster by indexing space. I provided some code fragments to show how they are used.

There is also a fully functional hit test sample in the CodePlex repository, http://directwinrt.codeplex.com/: Sample2_HitTesting.

See you next time! :)

Categories: C# | DirectX

D2D Progress

For my DirectX WinRT wrapper (now on CodePlex, http://directwinrt.codeplex.com/) I took a break from D3D, as I had some problems with it (more on that later), and implemented most of D2D instead; it was easy! ^^

And I also made the initialization even easier and more flexible.

Screenshot of Sample_D2D

So today I will write a quick how-to on getting started (with D2D), and about my latest D3D breakthrough (which is just a C++/CX – C# technicality).


Initializing DirectX Graphics

To start DirectX graphics (with my API), one just initializes a DXContext and sets a target, like so:

var ctxt = new DXContext();
ctxt.Target = (IRenderTarget) ...;

There are 3 types of target to choose from (so far):


The target can even be changed while drawing (for example, draw first on a Texture2D, which can then be used in the final scene).

DXTargetSwapPanelOrWindow wraps a SwapChainBackgroundPanel in a XAML application.
Of interest: DXUtils.Scenes.ScenePanel initializes DirectX, creates a target and calls render on every frame.

Then one can draw, for each frame, with pseudo-code like this:

void Draw(DXContext ctxt)
{
    ctxt.Render = true;

    var g3 = new D3DGraphic(ctxt);
    // ... 3D drawing ...

    var g2 = new D2DGraphic(ctxt);
    // ... 2D drawing ...

    ctxt.Render = false;
}



Remark: at this stage one can download the source (http://directwinrt.codeplex.com/) and have a look at the D2D sample, Sample2_D2D.


This weekend and week I wrapped most of the D2D API. To be precise, I wrapped the following:
- geometries, brushes, bitmaps, stroke styles, text formats, text layouts, transforms

(Missing, off the top of my head, would be 2D effects, glyph runs and lots of DWrite.)



When drawing a scene (whether D2D or D3D), for performance reasons one should initialize all the items first and only call the draw primitives while rendering.

For example, let’s create a few geometries and brushes:

public MyD2DScene()
{
    bImage = new BitmapBrush { Bitmap = new Bitmap(), };

    bYRB = new LinearGradientBrush
    {
        StartPoint = new Point(100, 100),
        EndPoint = new Point(800, 800),
        GradientStops = new[]
        {
            new GradientStop { position = 0, color = Colors.Yellow },
            new GradientStop { position = 0.4f, color = Colors.Red },
            new GradientStop { position = 1, color = Colors.Blue },
        },
    };

    gPath = new PathGeometry();
    using (var sink = gPath.Open())
    {
        sink.BeginFigure(new Point(), FIGURE_BEGIN.FILLED);
        sink.AddLines(new[] {
            new Point(25, 200),
            new Point(275, 175),
            new Point(50, 30),
        });
    }
}


In that scene snippet I created an image brush from a resource image, a gradient brush and a simple path geometry.



Now I can render all my items with just a few drawing commands, like so (for the geometry):

public void Render(DXContext ctxt)
{
    var g = new D2DGraphics(ctxt);

    g.Transform = DXMath.translate2d(pSkewedRect);
    g.FillGeometry(gPath, bImage);
    g.DrawGeometry(gPath, bYRB, 12, null);
}



Now, that was easy, so I got back to trying to solve my dissatisfaction with my current implementation of D3D: mostly that the C# developer is (currently) limited to the vertex buffers that I hard-coded in the API.

And then I had a breakthrough: let’s create a native pointer wrapper and write some code on the C# side to make it strongly typed.

My first attempt looked like this:

public ref class NativePointer sealed
{
    uint32 sizeoft, capacity;
    void* data;

internal:
    property void* Ptr { void* get(); }

public:
    NativePointer(uint32 sizeOfT);
    virtual ~NativePointer();

    property Platform::IntPtr Data { Platform::IntPtr get(); }
    property uint32 SizeOfT { uint32 get(); }
    property uint32 Capacity;
    void Insert(uint32 offset, uint32 count);
    void Remove(uint32 offset, uint32 count);
};

That looked promising; it even compiled! But then… WTF!!! Platform::IntPtr doesn’t cross the ABI!!
Damn you Microsoft!

Then I had another breakthrough: what about… size_t?!

It’s a perfectly ordinary type, except for the little twist that it projects to int32 when compiling for x86 and int64 when compiling for x64! It worked just fine, sweet!

So this is the change:

property size_t Data { size_t get(); }
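Outside C++/CX, the same trick is just a cast: the address travels as a pointer-sized integer and is cast back to a typed pointer on arrival. Here is a plain C++ sketch of the round trip (GetData and GetInt are hypothetical stand-ins for the wrapper side and the C# consumer side):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical "native pointer" side: hand the buffer out as an integer.
std::size_t GetData(std::vector<int>& buffer)
{
    return reinterpret_cast<std::size_t>(buffer.data());
}

// Hypothetical consumer side (the role the C# unsafe code plays below):
// cast the integer back to a typed pointer and index it.
int GetInt(std::size_t data, int pos)
{
    const int* p = reinterpret_cast<const int*>(data);
    return p[pos];
}
```

The integer and the pointer refer to the same memory, so no copy crosses the boundary; that is the whole point of the wrapper.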

On the C# side I was originally hoping to have a generic Pointer<T> class, and to finally use some unsafe C#.

Well, I did use unsafe C#, but I couldn’t compile code like this:

public unsafe static T Get<T>(IntPtr p, int pos)
{
    T* pp = (T*)p;
    return pp[pos];
}

The compiler returns the error

Cannot take the address of, get the size of, or declare a pointer to a managed type ('T')

But it did accept:

public unsafe static int GetInt(IntPtr p, int pos)
{
    int* pp = (int*)p;
    return pp[pos];
}


In the end I created an abstract BasePointer<T> and a .tt template that generates all the typed variants I need!

Now I just have to implement a class like XNA’s VertexDeclaration and I will have a (reasonably?) solid base to build on…

And also rewrite all my buffers to use this native pointer class.

That’s it for today!
And remember: http://directwinrt.codeplex.com/

Categories: .NET | DirectX | WinRT | C#

DirectX and WinRT continued



Here are my latest developments in writing a clean WinRT component exposing a clean yet complete DirectX D3D (and maybe D2D as well) API to C#.

There will be a (not so) small part about WinRT C++/CX and generics / templates, and the rest will be about the D3D API and my component so far.
I can already tell that DirectX initialization and drawing have become even simpler than in my previous iteration, while being more flexible and closer to the native DirectX API!

Finally, I want to mention that my main learning material (apart from Google, the intertubes, etc.) is Introduction to 3D Game Programming with DirectX 11 by Frank D. Luna. His source code can be found at D3Dcoder. My samples are loosely inspired by his; I wrote my own math lib, for example.


1. Exposing C++ array to C#

In DirectX there are many buffers; a shape’s vertices is a typical one. One wants to expose a strongly typed array which can be updated from C#, with unrestricted access to the underlying data pointer for the native code as well, of course.

Platform::Collections::Vector<T> wouldn’t do. Maybe it’s just me, but I didn’t see how to access the buffer’s pointer. Plus it has no resize() method. Platform::Array<T> is fixed-size as far as C# code is concerned.

I decided to roll my own templated class.

1.1. Microsoft templated collections

The first problem is that it’s not possible to create a generic definition in C++/CX. One can use templates, but they can’t be public ref classes, i.e. they can’t be exposed to non-C++ code. There is a twist, though: it is possible to expose concrete implementations of Microsoft’s C++/CX templated interfaces.
There are a few special templated interfaces with a particular meaning in WinRT: the namespace Windows::Foundation::Collections contains a list of templates that are automatically mapped to generic collection types in the .NET runtime.

For instance, by defining this template:

template <class T>
ref class DataArray sealed : Windows::Foundation::Collections::IVector<T>
{
    // implementation of IVector<T>

    std::vector<T> data;
};

I have a class that I can return in lieu of IVector<T> (which will be wrapped into an IList<T> by the .NET runtime), and I can directly manipulate its internal data, even get a pointer to it (with &data[0]).

1.2. Concrete implementation

This is a first step, but I need to provide concrete implementations. Let’s say I want to expose the following templated class:

template <class T>
ref class TList
{
internal:
    TList() : data(ref new DataArray<T>()) {}
    property IVector<T>^ List { IVector<T>^ get() { return data; } }
    void Resize(UINT n) { data->data.resize(n); }

    DataArray<T>^ data;
};

I can’t make it public, but I can write a concrete implementation manually as I need it, simply wrapping the underlying template, as in:

public ref class IntList sealed
{
public:
    IntList() {}
    property IVector<int>^ List { IVector<int>^ get() { return list.List; } }
    void Resize(UINT n) { list.Resize(n); }

private:
    TList<int> list;
};

But this quickly gets old!
What if I use an old dirty C trick, like… a MACRO! I know, I know, but bear with me and behold!

#define TYPED_LIST(CLASSNAME, TYPE) \
public ref class CLASSNAME sealed \
{ \
public: \
    CLASSNAME() {} \
    property IVector<TYPE>^ List { IVector<TYPE>^ get() { return list.List; } } \
    void Resize(UINT n) { list.Resize(n); } \
private: \
    TList<TYPE> list; \
};
TYPED_LIST(Int32List, int)
TYPED_LIST(FloatList, float)

At the end of this snippet I have declared strongly typed “list” classes at a rate of one line each!
All the code is just a no-brainer simple wrapper, and it is also easy to debug, as the debugger will step straight into the template implementation!

That’s how I implemented all the strongly typed structures I need for this API. And I can easily add new ones as I need them, in just a single line, as you can see!! ^^
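The same macro-stamping trick works outside C++/CX too. As a compilable illustration, here is a plain C++ toy version (all names are mine) that stamps typed wrappers around a std::vector the way TYPED_LIST stamps public ref classes:

```cpp
#include <vector>

// One template carries the real implementation...
template <class T>
class TList
{
    std::vector<T> data;
public:
    void Resize(unsigned n)    { data.resize(n); }
    std::vector<T>& List()     { return data; }
};

// ...and a macro stamps out one named, non-template wrapper per type,
// the way TYPED_LIST stamps out ref classes in the snippet above.
#define TYPED_LIST(CLASSNAME, TYPE)                         \
    class CLASSNAME                                         \
    {                                                       \
        TList<TYPE> list;                                   \
    public:                                                 \
        void Resize(unsigned n)   { list.Resize(n); }       \
        std::vector<TYPE>& List() { return list.List(); }   \
    };

TYPED_LIST(Int32List, int)
TYPED_LIST(FloatList, float)
```

Each expansion is an ordinary named class, which is exactly what a metadata-driven system (or here, the compiler) can see, while the logic lives in one place.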


2. The DXGraphic class

The BasicScene item from my previous blog post was quickly becoming a point of contention as I tried to extend my samples’ functionality. In the end I had a breakthrough: I dropped it and created a class called DXGraphic, which is really a wrapper around ID3D11DeviceContext1 and exposes drawing primitives, in a simpler (yet just as complete) fashion where I could.

All the other classes are meant to be consumed by it while drawing. Here is what the current state of my native API looks like so far:


One just creates a DXGraphic and feeds it drawing primitives. For those who are new to DirectX, it’s a good time to introduce the DirectX rendering pipeline, as described on MSDN.


The pipeline is run by the video card and processes a stream of pixels. Most of the DirectX API is used to set up data for this pipeline: vertices, textures, shader variables (constant buffers), etc. These must be copied from CPU memory to video card memory, and are then processed by the shaders, which are simple yet massively parallelized programs that process individual vertices and turn them into pixels. In a way, the shaders are the real drawing programs; the rest is set-up.

At least 2 of these shaders must be provided by the program: the vertex shader and the pixel shader. The vertex shader converts all the vertices into the same coordinate system, in a box of size 1 (using the model, view and projection matrices), and the pixel shader outputs the color for a given pixel.

2.1. The classes in the API (so far)

Shaders (pixel and vertex so far) are loaded by the PixelShader and VertexShader classes. I have used shaders found in Frank Luna’s samples so far and haven’t written my own. Here is the MSDN HLSL programming guide, and here is an HLSL tutorial web site.

The PixelShader also takes a VertexLayout class argument, which describes the C++ structure in the buffer to the shader. I’m only using the BasicVertex class so far. In the (strongly typed buffer class) CBBasicVertex, CBBasicVertex.Layout returns the layout for BasicVertex.

I have some vanilla state classes; RasterizerState can turn wireframe on/off and set up face culling.

BasicTexture can load a picture.

Finally, shapes are defined by one (or, optionally, many) vertex data buffers and (optionally) an index buffer. I used strongly typed ones: VBBasicVertex, IBUint16/32. They can be created manually, or with my helper class MeshLoader.

One of the samples updates the vertex buffer from C# on every rendered frame!

MeshLoader also returns whether the shape is in a right- or left-handed coordinate system. DirectX uses left-handed coordinates, but some models are right-handed. The ModelTransform class takes care of that, as well as scaling, rotation and translation.

To draw, one sets up the shaders and states, then enumerates all the shapes and, for each one, sets its texture and its geometry and calls draw.

Also, one can pass variables to shaders (i.e. computation parameters) by using strongly typed constant buffers. A few are defined: CBPerFrame (contains lighting info) and CBPerObject (contains the model, view and projection matrices).

2.2. The context watcher

There is a private class used by almost all classes in this API: ContextWatcher.

Most classes in this API have buffers or data that are bound to the DirectX context and need to be released when the context is destroyed, recreated when it comes back, etc. This class takes care of that synchronization. It is important to understand it before hacking on this library.


3. Input and Transform

3.1. Input

To handle input I use a couple of methods / events from the CoreWindow class, which are wrapped in my InputController class.

GetKeyStates(params VirtualKeys[]) will use CoreWindow.GetAsyncKeyState().

GetTrails() will return the latest pointer-down events. On Windows 8, mouse, pen, etc. have been superseded by the more generic concept of a “pointer” device, as explained on MSDN.

The CameraController will use the InputController to move the camera and/or the model around.

The HOME key resets the rotation, LEFT CONTROL switches to moving the model, and the MOUSE WHEEL moves the camera along the Z axis. A mouse drag rotates the camera, or the model (if LEFT CONTROL is held), using the following rotation:


i.e. if M1 (x1, y1, 0) is the mouse-down point, M2 (x2, y2, 0) is the next drag point, and O (x1, y1, –screenSize) is a virtual point above M1, the camera controller calculates the rotation that transforms OM1 into OM2 and applies its opposite to the camera. The opposite, because dragging the world right is like moving the camera left.
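That rotation can be recovered from the two vectors directly: the axis is their cross product, and the angle comes from their dot product. Here is a C++ sketch of the computation (the types and function names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b)
{ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(Vec3 a)         { return std::sqrt(dot(a, a)); }

// Rotation mapping direction a onto direction b:
// axis = a x b (normalized), angle = acos(a·b / (|a||b|)).
// The controller would then apply the OPPOSITE angle to the camera.
void RotationBetween(Vec3 a, Vec3 b, Vec3* axis, float* angle)
{
    Vec3 c = cross(a, b);
    float l = len(c);
    *axis = { c.x / l, c.y / l, c.z / l };   // assumes a and b are not parallel
    *angle = std::acos(dot(a, b) / (len(a) * len(b)));
}
```

Note the parallel case (l == 0) must be handled separately in real code: no drag, no rotation.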


3.2. Coordinate System

Initially I was keeping the camera and model transforms as matrices (along those lines on MSDN). Unfortunately, when I introduced mouse handling to drag the model, continuously multiplying the model matrix by mouse transform matrices introduced unsightly numerical errors, particularly shear transformations.


After much tinkering I settled on representing the model transformation as follows:

ModelTransform = Translation * Rotation (as quaternion) * Scaling

One can multiply quaternions together, and while some small numerical error will creep in, the result remains a rotation!

Quaternion can be created with the DXMath class:

public static quaternion toQuaternion(float x, float y, float z, float degree);

(x,y,z) being the axis of rotation.
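Here is a hedged C++ sketch of the math behind such a helper: build the quaternion from an axis and an angle in degrees, and rotate a vector by conjugation q·v·q⁻¹. The minimal types are my own, and the example uses the right-handed convention, so signs flip in DirectX’s left-handed setup:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };

// Unit quaternion for a rotation of `degree` around the normalized axis (x, y, z).
Quat ToQuaternion(float x, float y, float z, float degree)
{
    float half = degree * 3.14159265f / 180.0f / 2.0f;
    float s = std::sin(half);
    return { std::cos(half), x * s, y * s, z * s };
}

Quat Mul(Quat a, Quat b)  // Hamilton product; composes two rotations
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotate vector v by unit quaternion q: q * (0, v) * conjugate(q).
void Rotate(Quat q, float v[3])
{
    Quat p = { 0, v[0], v[1], v[2] };
    Quat c = { q.w, -q.x, -q.y, -q.z };
    Quat r = Mul(Mul(q, p), c);
    v[0] = r.x; v[1] = r.y; v[2] = r.z;
}
```

For example, rotating +X by 90 degrees around +Y with these conventions lands on -Z; composing two such quaternions with Mul is the drift-free accumulation described above.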

About quaternion math (which I didn’t learn at school :~), I found the following links:

In the end, all transformations are nicely wrapped in classes in Utils\DirectX.


Camera is the typical DirectX camera.

Model is the typical DirectX model transform, decomposed into Translation, Rotation and Scaling. There is also a LeftHanded property, as the model should be handled differently depending on whether its coordinates are in a left-handed or right-handed space.

The Transforms class is a utility class to create transform matrices.

CenteredRotationTransform is used to rotate the model around a point, which can be moved.


4. Wrapping it all together

To show what the final code looks like, here is the slightly simplified code that sets up the scene with the columns (2nd screenshot).

Even if it’s long, it’s much simpler than the C++ version, and just as versatile!

public static Universe CreateUniverse4(DXContext ctxt = null, SharedData data = null)
{
    ctxt = ctxt ?? new DXContext();
    data = data ?? new SharedData(ctxt);

    var box = new BasicShape(ctxt, MeshLoader.CreateBox(new float3 { x = 1, y = 1, z = 1 }));
    var grid = new BasicShape(ctxt, MeshLoader.CreateGrid(20, 30, 20, 20));
    var gsphere = new BasicShape(ctxt, MeshLoader.CreateGeosphere(1, 2));
    var cylinder = new BasicShape(ctxt, MeshLoader.CreateCylinder(0.5f, 0.3f, 3, 20, 20));

    var floor = data.Floor;
    var bricks = data.Bricks;
    var stone = data.Stone;

    float3 O = new float3 { z = 30 };

    var u = new Universe(ctxt)
    {
        Name = "Temple",
        Background = Colors.DarkBlue,
        Camera =
        {
            EyeLocation = DXMath.vector3(0, 0.0f, 0.0f),
            LookVector = DXMath.vector3(0, 0, 100),
            UpVector = DXMath.vector3(0, 1, 0),
            FarPlane = 200,
        },
        CameraController =
        {
            ModelTransform = { Origin = O },
        },
        PixelShader = data.TexPixelShader,
        VertexShader = data.BasicVertexShader,
        Bodies =
        {
            new SpaceBody
            {
                Location = O,
                Satellites =
                {
                    new SpaceBody
                    {
                        Shape = grid,
                        Texture = floor,
                    },
                    new SpaceBody
                    {
                        Scale = DXMath.vector3(3, 1, 3),
                        Location = new float3 { y = 0.5f },
                        Shape = box,
                        Texture = stone,
                    },
                },
            },
        },
    };
    var root = u.Bodies[0];
    for (int i = 0; i < 5; i++)
    {
        root.Satellites.Add(new SpaceBody
        {
            Location = DXMath.vector3(-5, 4, -10 + i * 5),
            Shape = gsphere,
            Texture = stone,
        });
        root.Satellites.Add(new SpaceBody
        {
            Location = DXMath.vector3(+5, 4, -10 + i * 5),
            Shape = gsphere,
            Texture = stone,
        });
        root.Satellites.Add(new SpaceBody
        {
            Location = DXMath.vector3(-5, 1.5f, -10 + i * 5),
            Shape = cylinder,
            Texture = bricks,
        });
        root.Satellites.Add(new SpaceBody
        {
            Location = DXMath.vector3(+5, 1.5f, -10 + i * 5),
            Shape = cylinder,
            Texture = bricks,
        });
    }

    return u;
}

And here is the render method that renders all the samples so far:

public void Render(DXGraphic g)
{
    // ...

    g.SetConstantBuffers(0, ShaderType.Pixel | ShaderType.Vertex, cbPerObject, cbPerFrame);

    foreach (var item in GetBodies())
    {
        if (item.Shape == null)
            continue;

        cbPerObject.Data[0] = new PerObjectData
        {
            projection = CameraController.Camera.Projection,
            view = CameraController.Camera.View,
            model = DXMath.mul(CameraController.ModelTransform.Transform, item.FinalTransform.Transform),
            material = item.Material,
        };

        var p0 = new float3().TransformPoint(item.FinalTransform.Transform);
        var p1 = p0.TransformPoint(CameraController.Camera.View);
        var p2 = p1.TransformPoint(CameraController.Camera.Projection);

        // ...

        g.SetShape(item.Shape.Topology, item.Shape.Vertices, item.Shape.Indices);
        // ... draw ...
    }
}


5. Performance remarks

On my machine the app spends about 6 seconds loading textures at start-up. However, if I target x64 when compiling (my machine is an x64 machine, but the project targets x86 by default), the start-up drops to about 0.2 seconds!!!

Also, in 32-bit mode the app will freeze every now and then while catching a C++ exception deep down in the .NET runtime–WinRT binding code (apparently something to do with the DirectArray), but on x64 it runs smoothly.

Categories: WinRT | DirectX | .NET

DirectX made simple

With Windows 8, WinRT and C++/CX, I think the time to write an elegant C# / XAML app that uses some DirectX rendering in C++ has finally come! Thanks, WinRT! :-)

Here I just plan to describe my attempt at learning DirectX and C++, and at integrating them nicely into a C# XAML app.

My first exercise was to create a simple DirectX “Context” as a WinRT C++/CX component that can target multiple DirectX hosts (SwapPanel, CoreWindow, ImageSource), render an independent scene, and be initialized and used from C#.

Note this is a Metro app. It requires VS2012 and Windows 8.


First, the appetizer; here is my simple scene:


And it is created with the code below, mostly one giant C# (5.0, async inside!) object initializer:

public class Universes
{
    public async static Task<Universe> CreateUniverse1(DXContext ctxt = null)
    {
        ctxt = ctxt ?? new DXContext();

        var cubetex = await CreateSceneTexture(ctxt);

        var earth = new BasicTexture(ctxt);
        await earth.Load("earth600.jpg");

        var cube = new BasicShape(ctxt);
        var sphere = new BasicShape(ctxt);

        var u = new Universe(ctxt)
        {
            Scene =
            {
                Background = Colors.Aquamarine,
                Camera =
                {
                    EyeLocation = dx.vector3(0, 0.0f, 0.0f),
                    LookDirection = dx.vector3(0, 0, 100),
                    UpDirection = dx.vector3(0, 1, 0),
                },
            },
            Items =
            {
                new SpaceBody(ctxt)
                {
                    FTransform = t => dx.identity().Scale(10, 10, 10).RotationY(36 * t),
                    FLocation = t => dx.vector3(0, 0, 50),
                    SceneItem =
                    {
                        Shape = cube,
                        Texture = cubetex,
                    },
                },
                new SpaceBody(ctxt)
                {
                    FTransform = t => dx.identity().Scale(8, 6, 8).RotationY(96 * t),
                    FLocation = t => new float3().Translate(15, 0, 0).RotationY(24 * t).Translate(0, 15, 50),
                    SceneItem =
                    {
                        Shape = sphere,
                        Texture = earth,
                    },
                    Items =
                    {
                        new SpaceBody(ctxt)
                        {
                            FTransform = t => dx.identity().RotationY(84 * t),
                            FLocation = t => new float3().Translate(12, 0, 0).RotationY(24 * t),
                            SceneItem =
                            {
                                Shape = sphere,
                                Texture = earth,
                            },
                        },
                    },
                },
                new SpaceBody(ctxt)
                {
                    FTransform = t => dx.identity().Scale(6, 5, 6).RotationY(48 * t),
                    FLocation = t => new float3().Translate(-15, -15, 55),
                    SceneItem =
                    {
                        Shape = sphere,
                    },
                },
            },
        };

        return u;
    }

    public async static Task<BasicTexture> CreateSceneTexture(DXContext ctxt)
    {
        var tex = new BasicTexture(ctxt);
        tex.Create(300, 300);

        var scene = new Scene(ctxt);
        scene.Background = Windows.UI.Colors.DarkGoldenrod;
        scene.Add(new DXBase.Scenes.CubeRenderer());
        scene.Add(new DXBase.Scenes.HelloDWrite());
        await scene.LoadAsync().AsTask();
        return tex;
    }
}
There is much to say about this sample, but I won’t go into the details of DirectX too much (this is a very basic sample as far as DirectX is concerned, and the source code is available at the bottom); instead I will mostly speak about C++/Cx – C# communication.


1. The main DirectX C++/Cx components

1.1. DXContext

First there is the DirectX context; here is an extract of its important methods and properties:

  public ref class DXContext sealed : Windows::UI::Xaml::Data::INotifyPropertyChanged
  {
  public:
      // Target the top level CoreWindow
      void SetTarget();
      // Target the argument top level SwapChainBackgroundPanel
      void SetTarget(Windows::UI::Xaml::Controls::SwapChainBackgroundPanel^ swapChainPanel);
      // Target the argument ImageSource
      void SetTarget(Windows::UI::Xaml::Media::Imaging::SurfaceImageSource^ image, int w, int h);
      // Target a texture
      void SetTarget(DXBase::Utils::BasicTexture^ texture);

      property float Dpi;
      property Windows::Foundation::Size Size;
      property Windows::Foundation::Rect Viewport;

      DXBase::Utils::BasicTexture^ Snapshot();

  internal:
      // internal (shared) DirectX variables
      // device independent resources
      Microsoft::WRL::ComPtr<ID2D1Factory1> m_d2dFactory;
      Microsoft::WRL::ComPtr<IDWriteFactory1> m_dwriteFactory;
      Microsoft::WRL::ComPtr<IWICImagingFactory2> m_wicFactory;

      // device resources
      D3D_FEATURE_LEVEL m_featureLevel;
      Microsoft::WRL::ComPtr<ID3D11Device1> m_d3dDevice;
      Microsoft::WRL::ComPtr<ID3D11DeviceContext1> m_d3dContext;
      Microsoft::WRL::ComPtr<ID2D1Device> m_d2dDevice;
      Microsoft::WRL::ComPtr<ID2D1DeviceContext> m_d2dContext;

      // target and size dependent resources
      DirectX::XMFLOAT4X4 mDisplayOrientation;
      Microsoft::WRL::ComPtr<ID3D11RenderTargetView> m_renderTargetView;
      Microsoft::WRL::ComPtr<ID3D11DepthStencilView> m_depthStencilView;
  };

DXContext is a ‘public ref class’, meaning it’s a shared component (it can be used from C#). It must be sealed (unfortunately… except those inheriting from DependencyObject, all public C++ ref classes must be sealed, as explained here, in the inheritance section).

All the public members are accessible from C#; the most important are the overloaded “SetTarget()” methods that set the DirectX rendering target. The target can be changed at any time (although it seems to be an expensive operation; I suspect rendering to a texture should probably be done another way, once I know better).

Finally it holds all the DirectX device information as internal variables. These can’t be public or protected as they are not WinRT components. But, being internal, they can be accessed by the other components in the library; it’s how the scene can render. I tried to trim the fat down to the minimum number of DirectX variables that such an object should contain.

Note plain C++ doesn’t have ‘internal’ visibility; this is a C++/Cx extension and it means the same thing as in C#, i.e. members are accessible by all code in the same library.

ComPtr<T> is a shared COM pointer. It takes care of all the reference counting for you.

DXContext implements INotifyPropertyChanged and can be observed by XAML components through data binding!
I also created a macro for the INotifyPropertyChanged implementation, as it is repetitive and I had to write a long winded implementation due to some mysterious bug in the pure C++ sample.

It has a Snapshot() method to take a screen capture! And BasicTexture has a method to save to a file.


1.2 Scene

My first attempt at using this DXContext was to create a Scene object which contains ISceneData objects.

An ISceneData can be ripped off, more or less verbatim, from the various DirectX samples around the web. The Scene object will take care of initializing it and rendering it when the time is right. I have 2 ISceneData implementations: CubeRenderer and HelloDWrite.


1.3 BasicScene, BasicShape, BasicTexture

Unfortunately the samples on the web often have a lot of variables, all mixed up, and sorting out what does what takes some thinking.

So I created a BasicScene which takes a list of shapes with textures and locations (transforms) and renders them:

public ref class BasicSceneItem sealed
{
public:
    BasicSceneItem(DXContext^ ctxt);
    property DXContext^ Context;
    property DXBase::Utils::BasicShape^ Shape;
    property DXBase::Utils::BasicTexture^ Texture;
    property DXBase::Utils::float4x4 WorldTransform;
};

public ref class BasicScene sealed
{
public:
    BasicScene(DXContext^ ctxt);

    property PerformanceTimer^ Timer;
    property Windows::UI::Color Background;
    property DXBase::DXContext^ Context;
    property DXBase::Utils::BasicCamera^ Camera;

    property Windows::Foundation::Collections::IVectorView<BasicSceneItem^>^ Shapes;
    void Add(BasicSceneItem^ item);
    void Remove(BasicSceneItem^ item);
    void RemoveAt(int index);

    property bool IsLoaded;

    void RenderFrame();
    void RenderFrame(SceneRenderArgs^ args);
};

It also has a Background and a Camera, all WinRT components that can be controlled from C#.

The BasicShape contains the vertex and index buffers for the triangles, and has various create methods that populate the buffers.

The BasicTexture can load a file or be created directly in memory (and rendered to by using Context.SetTarget(texture)); it contains the texture and texture view used by the rendering process.

Each of these classes has very few DirectX specific variables, making it relatively easy to understand what’s going on.


2. C++/Cx to C# mapping

When C++/Cx components are called from C#, the .NET runtime does some type mapping for you. There is the obvious: the basic types (int, float, etc.) and value types (structs) are used as is. But there is more: mappings for exceptions and for important interfaces (such as IEnumerable).

It’s worth having a look at this MSDN page which details the various mappings happening.

Also, to refresh my C++ skills I found this interesting web site, where most Google queries led anytime I had a C++ syntax or STL issue!


3. Exception across ABI

You can’t pass custom exceptions or exception messages across the ABI (the C++ / C# / JavaScript boundary). All that can pass is an HRESULT, basically a number. Some special values map to specific exceptions, as explained on this MSDN page.

If you want to pass a specific exception you have to use some unreserved HRESULT value (as described here) and have a helper class to turn the HRESULT into a meaningful error code.

Here comes the ExHelper class, just for this purpose:

// this range is free: 0x0200-0xFFFF
public enum class ErrorCodes;

// You can't throw custom exceptions with custom messages across the ABI.
// This will help throw custom exceptions with known HRESULT values.
public ref class ExHelper sealed
{
public:
    static void Throw(ErrorCodes c);
    static ErrorCodes GetCode(Windows::Foundation::HResult ex);
    static Windows::Foundation::HResult CreateWinRTException(ErrorCodes c);
};

Note you can’t expose Platform::Exception publicly either (well, maybe you can, but it was troublesome). But you can expose an HRESULT, and the runtime will automatically turn it into a System.Exception when called from C#.


4. Reference counting and weak pointer

C++/Cx is pure C++. There is no garbage collection happening in a pure C++ app, even if it uses the C++/Cx extensions. The hat (^) pointer is a ref counted pointer that can automatically be turned into a C# reference.

That can lead to a problem when 2 C++/Cx components reference each other, as in the following (simplified) scenario:

public ref class A sealed
{
public:
    property A^ Other
    {
        A^ get() { return other; }
        void set(A^ value) { other = value; }
    }
private:
    A^ other;
};

{
    auto a1 = ref new A();
} // a1 is automatically destroyed

{
    auto a1 = ref new A();
    auto a2 = ref new A();
    a1->Other = a2;
    a2->Other = a1;
} // no automatic destruction takes place!

To solve such problems WinRT comes with WeakReference. The class A can be modified as follows so that it does not hold a strong reference:

public ref class A sealed
{
public:
    property A^ Other
    {
        A^ get() { return other.Resolve<A>(); }
        void set(A^ value)
        {
            if (value)
                other = WeakReference(value);
            else
                other = WeakReference();
        }
    }
private:
    WeakReference other;
};


5. debugging / logging

Sometimes logging is helpful for debugging. For example I log the creation and deletion of some items to be sure I don’t have any memory leaks. However printf, std::cout and System::Console::WriteLine won’t work in a metro app.

One has to use OutputDebugString; the output will appear in the Visual Studio output window.


6. IEnumerable, IList

If you use C# you must love IEnumerable, IEnumerator, IList and LINQ. When writing a C++ component you should make sure it plays nicely with all that.

The .NET runtime does some automatic mapping when calling into C++/Cx components, as explained here.

6.1 IEnumerable

In C++ one shall expose a Windows::Foundation::Collections::IIterable<T> to be consumed in C# as a System.Collections.Generic.IEnumerable<T>.

IIterable has a single method, First(), that returns an IIterator. That will be mapped to an IEnumerator.

However there is a little gotcha: unlike the C# IEnumerator, which starts before the first element (one has to call bool MoveNext() first), IIterator starts on the first element.

6.2 IList

One can return a Windows::Foundation::Collections::IVector<T> to be mapped to an IList<T>. There is already a class implementing it: Platform::Collections::Vector<T>.

Or one can use vector->GetView() to return a Windows::Foundation::Collections::IVectorView<T> that will be mapped to an IReadOnlyList<T>.


7. Function pointers and lambda

C++11 (formerly known as C++0x) introduced lambda expressions to create inline functions, much like in C#.

There is a long description of them on MSDN.

Basically it has the following syntax

[captured variables](parameters) -> optional-return-type { body }

It’s all quite intuitive except for the capture part. You have to specify which variables you want to capture (this, local variables), either by value or by reference (using the ‘&’ prefix); ‘[=]’ captures all the used local variables and this, by value.


In some instances I had problems assigning a lambda to a function pointer; for example the code below didn’t compile for me (maybe I missed something?):

IAsyncOperation<bool>^ (*func)(Platform::Object^) = [] (Object^ ome) -> IAsyncOperation<bool>^ { ... };

Fortunately C++11 introduces “function objects”, which work fine:

#include <functional>
std::function<IAsyncOperation<bool>^(Object^)> func = [] (Object^ ome) -> IAsyncOperation<bool>^ { ... };

Remark: the function object will keep the captured variables alive for as long as it exists! Be careful with circular references and WinRT components (hat pointers ‘^’).


8. Async

With Metro, async programming is an inescapable reality!

Of course from C# you can use the classes from System.Threading.Tasks, but there is also a native C++ API just for that: concurrency::task<T>.

One can create a task from a WinRT IAsyncOperation or from a plain function:

#include <ppltasks.h>
#include <ppl.h>

using namespace concurrency;
using namespace Windows::Foundation;

bool (*func)() = []()->bool { return true; };
task<bool> t1 = create_task(func);

IAsyncOperation<bool>^ loader = ...;
task<bool> t2 = create_task(loader);

Conversely, one can create a WinRT IAsyncOperation from a task or a plain function with create_async, as in:

#include <ppltasks.h>
#include <ppl.h>

using namespace concurrency;
using namespace Windows::Foundation;

bool (*func)() = []()->bool { return true; };
task<bool> t1 = create_task(func);
IAsyncOperation<bool>^ ao1 = create_async([t1] { return t1; });
IAsyncOperation<bool>^ ao2 = create_async(func);

Tasks can be chained with ‘then’, and one can wait on multiple tasks by combining them with ‘&&’, such as in:

task<void> t1 = ...
task<void> t2 = ...

auto t3 = (t1 && t2).then([]() -> void
{
    OutputDebugString(L"It is done");
});

Remark: tasks are value types and start executing immediately once created (on another thread).


When chaining tasks with ‘then’ you can capture an exception from the previous task by taking a task<T> argument instead of T, and putting a try/catch around task.get(). If you do not catch the exception it will eventually bring the program down.

task<bool> theTask = ....
task<void> done = theTask.then([](concurrency::task<bool> t) -> void
{
    try
    {
        bool result = t.get();
        // success...
    }
    catch (Exception^ ex)
    {
        // failure
        auto msg = "Exception: " + ex->ToString() + "\r\n";
    }
});


9. Conclusion

It proved surprisingly easy to have C++ and C# work together with WinRT. Smooth and painless. C++11 was easier to use than my memory of C++ suggested. And in the end I mixed and matched them all with great fun. To boot, my C# app starts real quick (like a plain C++ app)! It’s way better than C++/CLI!

A few frustrating points with C++/Cx still stand out though:

  • Microsoft value types (Windows::Foundation::Size for example) have custom constructors, methods and operators; yours cannot.
  • You can’t create a type hierarchy! (It can be worked around, tediously, with an interface hierarchy, but still!)


10. Source code

download it from here!




Tags: , , , ,
Permalink | Comments (0) | Post RSSRSS comment feed

The Content Stream sample

Today I mostly finished porting the Content Stream DirectX sample from MSDN to WPF. About 4300 lines of C++ to C#.

Get it on CodePlex!

I kind of lost track of the big picture while translating C++ to C# method after method, each of them being far removed from doing any actual DirectX work. Anyhow I did learn a few things…

But first a screen shot of the result: a huge (and empty) free roaming world:



And here are a few things that I learned while porting the sample.


Interop lessons

The PackedFile class is reading / writing a lot of structures (directly, as opposed to parsing the bytes) from a terrain file.

Writing a structure to a file in C++ is quite straightforward: define your structure and use WriteFile, as in:

struct CHUNK_HEADER
{
    UINT64 ChunkOffset;
    UINT64 ChunkSize;
};

CHUNK_HEADER* pChunkHeader = TempHeaderList.GetAt( i );
if( !WriteFile( hFile, pChunkHeader, sizeof( CHUNK_HEADER ), &dwWritten, NULL ) )
    goto Error;


The same operation can, in fact, be done in C#. Here is a method that reads any (properly tagged) structure back from a byte[] array (i.e. from any stream):

public static unsafe T ToValue<T>(byte[] buf, ref int offset)
    where T : struct
{
    int n = Marshal.SizeOf(typeof(T));
    if (offset < 0 || offset + n > buf.Length)
        throw new ArgumentException();

    fixed (byte* pbuf = &buf[offset])
    {
        var result = (T)Marshal.PtrToStructure((IntPtr)pbuf, typeof(T));
        offset += n;
        return result;
    }
}
(This method can be found in Week02Samples\BufferEx.cs)

The structure should be properly tagged though. Here is one which contains a fixed-size string!

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode/*, Pack = 4*/)]
public struct FILE_INDEX
{
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = WinAPI.MAX_PATH)]
    public string szFileName;
    public long FileSize;
    public long ChunkIndex;
    public long OffsetIntoChunk;
    public Vector3 vCenter;
}

If all the types are supported by interop, it will end up with the same content as the C++ reader / writer.


Sharing DirectX texture with CPU memory

The textures and buffers of DirectX most of the time live in GPU memory. Meaning:

  • They are a precious resource! There is only so much video memory, and if the application needs more, the GPU will be considerably slowed down swapping some of it with the main memory!
  • There are a few key ways of passing the memory around:
    • With D3D.Buffer, a call to UpdateSubresource() will do it
    • With D3D.Texture2D, a mix of Map(), writing to a staging resource, Unmap(), CopyResource()

In the sample, 2 files help do that: ResourceReuseCache.cs and ContentLoader.cs.


In ResourceReuseCache.cs there is a cache of index buffers, vertex buffers and textures.

Of particular interest, each texture cache item contains a pair of objects (because updating a texture is trickier than a buffer): a ShaderResourceView and a (staging) Texture2D.

Here is how they are created:

var desc = new Texture2DDescription
{
    Width = Width,
    Height = Height,
    MipLevels = MipLevels,
    ArraySize = 1,
    Format = FormatEx.ToDXGI(( D3DFORMAT )Format),
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
};

using (Texture2D pTex2D = new Texture2D(m_Device, desc))
{
    var SRVDesc = new ShaderResourceViewDescription
    {
        Format = desc.Format,
        Dimension = SharpDX.Direct3D.ShaderResourceViewDimension.Texture2D,
        Texture2D = { MipLevels = desc.MipLevels, }
    };
    tex.pRV10 = new ShaderResourceView(m_Device, pTex2D, SRVDesc);
}

desc.Usage = ResourceUsage.Staging;
desc.BindFlags = BindFlags.None;
desc.CpuAccessFlags = CpuAccessFlags.Write;
tex.pStaging10 = new Texture2D(m_Device, desc);

The two paired objects created by this code snippet are tex.pRV10 and tex.pStaging10 (note the desc.Usage = ResourceUsage.Staging for the second one).


Later, data is written to those buffers and communicated to the DirectX memory through the IDataLoader(s) and IDataProcessor(s) found in ContentLoader.cs. The loading / updating code being split into 5 methods, it might be tricky to follow.


For buffers, this method (from DXUtils) shows how to write a Stream to a Buffer:

public static void UpdateSubresource(this Direct3D10.Device device, Stream source, Direct3D10.Resource resource, int subresource)
{
    byte[] buf = new byte[source.Length];
    source.Position = 0;
    source.Read(buf, 0, buf.Length);

    using (var ds = new DataStream(buf, true, true))
    {
        var db = new DataBox(0, 0, ds);
        device.UpdateSubresource(db, resource, subresource);
    }
}
(not sure what the subresource argument is for, though…)


For textures it’s a bit more involved:

void CopyTexture(Stream textdata)
{
    Device device = ...;
    ShaderResourceView texture = ...;
    Texture2D staging = ...;

    var sdata = staging.Map(0, MapMode.Write, MapFlags.None);

    // WARNING: the copy should pay attention to the row pitch,
    // i.e. a row length (in bytes) might be more than num pixels * pixel size
    int NumBytes, RowBytes, NumRows;
    FormatEx.GetSurfaceInfo(250, 250, D3DFORMAT.A8R8G8B8, out NumBytes, out RowBytes, out NumRows);

    var buff = new BufferEx();
    long srcpos = textdata.Position, dstpos = sdata.Data.Position;
    for (int h = 0; h < NumRows; h++)
    {
        textdata.Position = srcpos;
        sdata.Data.Position = dstpos;
        buff.CopyMemory(sdata.Data, textdata, RowBytes, buff.CurrentLength);
        dstpos += sdata.Pitch;
        srcpos += RowBytes;
    }

    // send the data to the GPU memory
    staging.Unmap(0);
    using (Resource pDest = texture.Resource)
        device.CopyResource(staging, pDest);
}

(again, not sure what the 1st argument of Map() / Unmap() is …)


Rendering pipeline and shader bytecode signature

In Direct3D the input data, i.e. the vertices with their (optional) texture coordinates, normals and colors, goes through what’s called a rendering pipeline. Since I had trouble finding a good explanation about it, here is an extract from the Wikipedia article:


The Microsoft Direct3D 10 API defines a process to convert a group of vertices, textures, buffers, and state into an image on the screen. This process is described as a rendering pipeline with several distinct stages. The different stages of the Direct3D 10 pipeline are:

  1. Input Assembler: Reads in vertex data from an application supplied vertex buffer and feeds them down the pipeline.
  2. Vertex Shader: Performs operations on a single vertex at a time, such as transformations, skinning, or lighting.
  3. Geometry Shader: Processes entire primitives such as triangles, points, or lines. Given a primitive, this stage discards it, or generates one or more new primitives.
  4. Stream Output: Can write out the previous stage's results to memory. This is useful to recirculate data back into the pipeline.
  5. Rasterizer: Converts primitives into pixels, feeding these pixels into the pixel shader. The Rasterizer may also perform other tasks such as clipping what is not visible, or interpolating vertex data into per-pixel data.
  6. Pixel Shader: Determines the final pixel colour to be written to the render target and can also calculate a depth value to be written to the depth buffer.
  7. Output Merger: Merges various types of output data (pixel shader values, alpha blending, depth/stencil...) to build the final result.

The pipeline stages illustrated with a round box are fully programmable. The application provides a shader program that describes the exact operations to be completed for that stage. Many stages are optional and can be disabled altogether.


Another thing I understood is what this signature thing is all about!

When drawing, you should set the input layout of the data. This input layout needs some sort of bytecode signature, as in:


// initialization
var inputSignature = ShaderSignature.GetInputSignature(pVSBlob);
var layout = new InputLayout(Device, inputSignature, new[] {
    new InputElement("VERTEX", 0, Format.R32G32B32_Float, 0),
});

// rendering

Here, signature is not about signing your code for security. It’s about checking that the InputLayout defined in code matches the input of the vertex shader (i.e. the entry point of the rendering pipeline). It’s why the signature always comes from the vertex shader definition.



Somehow I found the declaration of the various shaders involved in the rendering pipeline quite cumbersome. Now apparently there is a way to do it all in the HLSL file by using effects. An effect (in your HLSL file) looks like this:

technique10 RenderTileDiff10
{
    pass p0
    {
        SetVertexShader( CompileShader( vs_4_0, VSBasic() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PSTerrain(false) ) );
        SetDepthStencilState( EnableDepth, 0 );
        SetBlendState( NoBlending, float4( 0.0f, 0.0f, 0.0f, 0.0f ), 0xFFFFFFFF );
        SetRasterizerState( CullBack );
    }
}

It looks like the C++ / C# code for setting up your pipeline, just much more compact!

Once you have created a few effects, to use them you have to compile your shader file and get a pointer to the technique of choice.

To create the layout you will need to get the effect’s vertex shader (for the signature)

Here is some pseudo code that uses the above effect and does the initialization and rendering:

//====== Initialization =============
// compile the shader and get the effect
var sbytecode = ShaderBytecode.CompileFromFile(
    ..., sFlags, EffectFlags.None, null, null);
var myEffect = new Effect(Device, sbytecode);

// get the technique(s) of interest
var myTechnique = myEffect.GetTechniqueByName("RenderTileDiff10");

// define input data layout
var inputdesc = new InputElement[]
{
    // Lloyd: watch out! trap! offset and slot are swapped between C++ and C#
    new InputElement ( "POSITION", 0, Format.R32G32B32_Float,  0, 0, InputClassification.PerVertexData, 0 ),
    new InputElement ( "NORMAL",   0, Format.R32G32B32_Float, 12, 0, InputClassification.PerVertexData, 0 ),
    new InputElement ( "TEXCOORD", 0, Format.R32G32_Float,    24, 0, InputClassification.PerVertexData, 0 ),
};
var PassDesc = myTechnique.GetPassByIndex(0).Description;
var vertexsignature = PassDesc.Signature;
var inputlayout = new InputLayout(Device, vertexsignature, inputdesc);

// ======== Rendering ===
// set a shader variable
var mWorld = myEffect.GetVariableByName("g_mView").AsMatrix();

// render with a technique
var Desc = myTechnique.Description;
for (int iPass = 0; iPass < Desc.PassCount; iPass++)
{
    myTechnique.GetPassByIndex(iPass).Apply();
    Device.DrawIndexed(g_Terrain.NumIndices, 0, 0);
}

Still not sure what the passes are about though.



Direct3D 9, 10, 11

There are 2 sides to Direct3D: there is the runtime API installed on your computer, and there is the feature level (as it is called since D3D 10.1) supported by the video card. So while you might have DirectX 11 installed on your system, your video card might only support Direct3D 10.0.

One thing with the D3D 10.1 runtime and up (if it’s installed, by your installer for example) is that you can use whatever version of the D3D API you like, but target (or use) a given feature level. The differences between the feature levels are summarized there.


Anyhow I had various problems and successes with each version of D3D.

I’m working on those samples at home and everything works fine. At work it doesn’t though, due to my work video card only supporting D3D10 (and maybe some incorrect initialization / hardware testing on my part).

Also, first, to be rendered in a D3DImage the render targets should be compatible with a D3D9 surface. In the case of D3D 10 and 11 that means they should be created with ResourceOptionFlags.Shared. But this is not supported by D3D10 (only D3D10.1)! It’s hard for me to test as my computer has a D3D11 compatible card; I still have some initialization issues on low end computers for lack of a testing machine.

Secondly, while D3D11 includes some amazing new features, such as compute shaders (talk about parallel processing!), geometry shaders with which you can do realistic fur, and a high performance software renderer (the WARP device), it has no support for text and fonts at all! Although (I have to test it) supposedly one can render part of the scene with D3D10 (the text for example) and use the resulting texture in D3D11 directly, as the surfaces have a compatible format.



I learned I needed a camera class to describe and manipulate the world, view and projection matrices! I was inspired by DXUTCamera.h and wrote classes very similar to the sample.

Camera has the following interesting methods

public abstract partial class BaseCamera
{
    public BaseCamera();

    public void SetViewParams(Vector3 eye, Vector3 lookAt);
    public virtual void SetViewParams(Vector3 eye, Vector3 lookAt, Vector3 vUp);
    public void Reset();
    public Vector3 Position;
    public Vector3 LookAt;
    public Vector3 Up;
    public Matrix View { get { return mView; } }

    public void SetProjParams(float fFOV, float fAspect, float fNearPlane, float fFarPlane);
    public float NearPlane;
    public float FarPlane;
    public float AspectRatio;
    public float FieldOfView;
    public Matrix Projection { get { return mProj; } }

    public void FrameMove(TimeSpan elapsed);
}

public partial class FirstPersonCamera : BaseCamera
public partial class ModelViewerCamera : BaseCamera

Remark: lookAt is the point the camera is looking at, not the direction it’s gazing in!


It contains the current view and projection matrices, and handles key and mouse input by changing the view matrix. It can also update the view matrix from the elapsed time (to change the view between frames while keys are down).

It’s imperfect though (I think I will write a better one once I start porting Babylon from XNA to DirectX + WPF).

Ha, well, when experimenting with the camera I had to read about… quaternions! Which I only feared by name until now.

I won’t say I master quaternions yet! Oh no!

But I understand enough to be dangerous. Here are some good introductory links on quaternions.

Tags: , ,
Categories: .NET | DirectX | WPF
Permalink | Comments (0) | Post RSSRSS comment feed

Introducing DirectX to WPF

I started to learn DirectX. I wanted, of course, to use it in a WPF environment. I don’t hope to write a game (yet?), but I thought it would be a good API for high performance data visualization, or simply for capturing and tweaking web cam output.

I discovered SharpDX by Alexandre Mutel, which is a 100% managed wrapper. Better yet, it seems to perform better than all the other managed wrappers! At least according to this.

To start with DirectX you need to download the DirectX SDK which is good because it contains tons of tutorials and samples.

I started by rewriting all 7 MSDN tutorials in C#. I tried to write code very close to the original C++ (for comparison purposes) yet beautified (thanks to C#!); I shall say it came out well!



Speaking of which, when you work with DirectX you have to write (pixel, vertex, hull, etc…) shaders (introduction). Basically they are little programs that convert the data from one graphics processing stage to another. The shader code looks remotely like a simple C file with some extras.

Once again thanks to Alexandre Mutel, I found an extension for VS2010 which provides syntax colouring for shaders: NShader.

With that, shader programs are much easier to read; behold:

struct VS_INPUT
{
    float4 Pos : POSITION;
    float3 Norm : NORMAL;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float3 Norm : TEXCOORD0;
};

// Vertex Shader
PS_INPUT VS( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = mul( input.Pos, World );
    output.Pos = mul( output.Pos, View );
    output.Pos = mul( output.Pos, Projection );
    output.Norm = mul( input.Norm, World );
    return output;
}


So, how does this all work?


From WPF’s point of view, the DirectX code is to be rendered into a Texture2D and the Texture2D to be displayed with a D3DImage.

It starts with:

public class DXImageSource : D3DImage, IDisposable
{
    public void Invalidate();

    public void SetBackBuffer(SharpDX.Direct3D10.Texture2D texture);
    public void SetBackBuffer(SharpDX.Direct3D11.Texture2D texture);
    public void SetBackBuffer(SharpDX.Direct3D9.Texture texture);
}

With this subclass of D3DImage you can directly set a SharpDX / DirectX Texture2D as the back buffer of the image. (Remark that the textures should be created with ResourceOptionFlags.Shared, as they will be accessed by the D3DImage through a shared D3D9 interface.)

This ImageSource could very well be used in an Image class. But to provide continuous updating, resizing, etc., I created the following FrameworkElement:

public interface IDirect3D
{
    void Reset(ResetArgs args);
    void Render(RenderArgs args);
}

public class DXElement : FrameworkElement
{
    public DXImageSource Surface { get; }
    public IDirect3D Renderer { get; set; }
    public bool IsLoopRendering { get; set; }
}

The DXElement does very little by itself. It handles resize events. If IsLoopRendering is true it renders its Renderer every frame. It captures the mouse and forwards events to the Renderer if it implements IInteractiveRenderer (which D3D does).


And that’s it for the UI.


From the DirectX point of view I provide a few classes (the D3D tree) that just create an appropriate Device and have virtual methods to override for rendering.


What’s next?

  • Direct2D only works with Direct3D 10; I’d like to make it work with Direct3D 11
  • There are still many things I’d like to know to be reasonably confident, so I (just) started to implement various samples which will show some interesting aspects of DirectX. (1 completed so far)

Can you get the code?
Well… it’s all on CodePlex!

Tags: ,
Categories: DirectX | WPF
Permalink | Comments (0) | Post RSSRSS comment feed