Programming experiments

Quick Visual D update

A few final clarifications before launching into D coding.


Visual D configuration

It turns out that, if you have a few projects which depend on each other, as in:



You don’t need to set the library path, include path, etc. Visual D will take care of that for you!



Ah well, there is one last thing needed before you start coding D like crazy: good documentation!

The Phobos documentation can be found here.

Well, you also need to read about the D language syntax! An exhaustive description can be found on the web there. I should say, for such a task, I like to have a book. I bought this one (there are not that many! ^_^).

I bought the Kindle version, then I thought better of it (Kindle books are not good for reading back and forth and jumping between chapters like crazy, as is needed when learning!) and bought the paperback version! I left a bad review because it didn’t answer the question of how to compile the damn thing (which, if you have read my D posts so far, we have figured out by now!), but I like it; sorry about the review.


Oh, I almost missed that: when you “installed” D (unzipped it!) you also installed the offline documentation.
Check it out (on your disk):



Statement of the day

Thanks to the paper version of the D language book I was able to browse through all 350 pages of it in an hour (it’s just a book about syntax, hey!). I discovered a really interesting and innovative statement, a “using” on steroids: scope().

Excerpt from the web site:

Scope Guard Statement

	scope(exit) NonEmptyOrScopeBlockStatement
	scope(success) NonEmptyOrScopeBlockStatement
	scope(failure) NonEmptyOrScopeBlockStatement
The ScopeGuardStatement executes NonEmptyOrScopeBlockStatement at the close of the current scope, rather than at the point where the ScopeGuardStatement appears. scope(exit) executes NonEmptyOrScopeBlockStatement when the scope exits normally or when it exits due to exception unwinding. scope(failure) executes NonEmptyOrScopeBlockStatement when the scope exits due to exception unwinding. scope(success) executes NonEmptyOrScopeBlockStatement when the scope exits normally.

If there are multiple ScopeGuardStatements in a scope, they are executed in the reverse lexical order in which they appear. If any scope instances are to be destructed upon the close of the scope, they also are interleaved with the ScopeGuardStatements in the reverse lexical order in which they appear.

    scope(exit) writef("3");
    scope(exit) writef("4"); // executed in reverse lexical order at scope exit: "4" then "3"


Of course there is much more interesting stuff in D; I’m just picking one! Have a Go Smile with tongue out! Pun intended!

Categories: D | Visual Studio
Permalink | Comments (0) | Post RSSRSS comment feed

Bulk SQL Update and Delete with LINQ to EF

The more I use LINQ to EF, the more I like it and get rid of stored procedures. It really eases maintenance and development!

There is one major area where EF is lacking, though, and where stored procs are still absolutely required: bulk update and delete. Wouldn’t it be nice to be able to do that with just some nice strongly typed C# code?

Come to think of it, it might be possible… and with Google’s help I found a blog which provides an implementation of just that!

Just to be safe I downloaded his C# source code file and provide it here as well (untested and unread yet… I will do that tomorrow at work…).

Tags: ,
Categories: .NET | Database
Permalink | Comments (0) | Post RSSRSS comment feed

The Content Stream sample

Today I mostly finished porting the Content Stream DirectX sample from MSDN to WPF. About 4300 lines of C++ to C#.

Get it on CodePlex!

I kind of lost track of the big picture while translating C++ to C# method after method, each of them being far removed from doing any DirectX work. Anyhow, I did learn a few things…

But first, a screenshot of the result: a huge (and empty) free-roaming world:



And here are a few things that I learned while porting the sample.


Interop lessons

The PackedFile class is reading / writing a lot of structures (directly, as opposed to parsing the bytes) from a terrain file.

Writing a structure to a file in C++ is quite straightforward: define your structure and use WriteFile, as in:

struct CHUNK_HEADER
{
    UINT64 ChunkOffset;
    UINT64 ChunkSize;
};

CHUNK_HEADER* pChunkHeader = TempHeaderList.GetAt( i );
if( !WriteFile( hFile, pChunkHeader, sizeof( CHUNK_HEADER ), &dwWritten, NULL ) )
    goto Error;


The same operation can, in fact, be done in C#. Here is a method that reads any (properly tagged) structure from a byte[] array (i.e. from any stream); the writing direction works symmetrically.

public static unsafe T ToValue<T>(byte[] buf, ref int offset)
    where T : struct
{
    int n = Marshal.SizeOf(typeof(T));
    if (offset < 0 || offset + n > buf.Length)
        throw new ArgumentException();

    fixed (byte* pbuf = &buf[offset])
    {
        var result = (T)Marshal.PtrToStructure((IntPtr)pbuf, typeof(T));
        offset += n;
        return result;
    }
}
(This method can be found in Week02Samples\BufferEx.cs)

The structure should be properly tagged, though. Here is one which contains a fixed-size string!

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode/*, Pack = 4*/)]
public struct FILE_INDEX
{
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = WinAPI.MAX_PATH)]
    public string szFileName;
    public long FileSize;
    public long ChunkIndex;
    public long OffsetIntoChunk;
    public Vector3 vCenter;
}
If all the types are supported by interop, it will end up with the same content as a C++ reader / writer.


Sharing DirectX texture with CPU memory

DirectX textures and buffers lie, most of the time, in GPU memory. Meaning:

  • They are a precious resource! There is only so much video memory, and if the application needs more, the GPU will be considerably slowed swapping some of it with main memory!
  • There are a few key ways of passing the memory around:
      • With D3D.Buffer, a call to UpdateSubresource() will do it.
      • With D3D.Texture2D, a mix and match of Map(), write to a staging resource, Unmap(), CopyResource().

In the sample, two files help to do that: ResourceReuseCache.cs and ContentLoader.cs.


In ResourceReuseCache.cs there is a cache of index buffers, vertex buffers and textures.

Of particular interest, each texture cache item contains a pair of objects (because updating a texture is trickier than a buffer): a ShaderResourceView and a (staging) Texture2D.

Here is how they are created:

var desc = new Texture2DDescription
{
    Width = Width,
    Height = Height,
    MipLevels = MipLevels,
    ArraySize = 1,
    Format = FormatEx.ToDXGI((D3DFORMAT)Format),
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
};

using (Texture2D pTex2D = new Texture2D(m_Device, desc))
{
    var SRVDesc = new ShaderResourceViewDescription
    {
        Format = desc.Format,
        Dimension = SharpDX.Direct3D.ShaderResourceViewDimension.Texture2D,
        Texture2D = { MipLevels = desc.MipLevels, },
    };
    tex.pRV10 = new ShaderResourceView(m_Device, pTex2D, SRVDesc);
}

desc.Usage = ResourceUsage.Staging;
desc.BindFlags = BindFlags.None;
desc.CpuAccessFlags = CpuAccessFlags.Write;
tex.pStaging10 = new Texture2D(m_Device, desc);

tex.pRV10 and tex.pStaging10 are the two paired objects created by this code snippet (note the desc.Usage = ResourceUsage.Staging).


Later, data is written to those buffers and communicated to DirectX memory through IDataLoader(s) and IDataProcessor(s) found in ContentLoader.cs. The loading / updating code being split into five methods, it might be tricky to follow.


For buffers, this method (from DXUtils) shows how to write a Stream to a Buffer:

public static void UpdateSubresource(this Direct3D10.Device device, Stream source, Direct3D10.Resource resource, int subresource)
{
    byte[] buf = new byte[source.Length];
    source.Position = 0;
    source.Read(buf, 0, buf.Length);

    using (var ds = new DataStream(buf, true, true))
    {
        var db = new DataBox(0, 0, ds);
        device.UpdateSubresource(db, resource, subresource);
    }
}

(not sure what the subresource is for, though…)


For Texture it’s a bit more involved:

void CopyTexture(Stream textdata)
{
    Device device = ...;
    ShaderResourceView texture = ...;
    Texture2D staging = ...;

    var sdata = staging.Map(0, MapMode.Write, MapFlags.None);

    // WARNING: the copy should pay attention to the row pitch,
    // i.e. a row length (in bytes) might be more than num pixels * pixel size
    int NumBytes, RowBytes, NumRows;
    FormatEx.GetSurfaceInfo(250, 250, D3DFORMAT.A8R8G8B8, out NumBytes, out RowBytes, out NumRows);

    var buff = new BufferEx();
    long srcpos = textdata.Position, dstpos = sdata.Data.Position;
    for (int h = 0; h < NumRows; h++)
    {
        textdata.Position = srcpos;
        sdata.Data.Position = dstpos;
        buff.CopyMemory(sdata.Data, textdata, RowBytes, buff.CurrentLength);
        dstpos += m_pLockedRects10[i].Pitch;
        srcpos += RowBytes;
    }

    // send the data to the GPU memory
    staging.Unmap(0);
    using (Resource pDest = texture.Resource)
        device.CopyResource(staging, pDest);
}

(again, not sure what the 1st argument of Map() / Unmap() is …)


Rendering pipeline and shader bytecode signature

In Direct3D, input data, i.e. the vertices with their (optional) texture coordinates, normals and colors, goes through what’s called a rendering pipeline. In case you have trouble finding an explanation about it, here is an excerpt from a Wikipedia article about it:


The Microsoft Direct3D 10 API defines a process to convert a group of vertices, textures, buffers, and state into an image on the screen. This process is described as a rendering pipeline with several distinct stages. The different stages of the Direct3D 10 pipeline are:

  1. Input Assembler: Reads in vertex data from an application supplied vertex buffer and feeds them down the pipeline.
  2. Vertex Shader: Performs operations on a single vertex at a time, such as transformations, skinning, or lighting.
  3. Geometry Shader: Processes entire primitives such as triangles, points, or lines. Given a primitive, this stage discards it, or generates one or more new primitives.
  4. Stream Output: Can write out the previous stage's results to memory. This is useful to recirculate data back into the pipeline.
  5. Rasterizer: Converts primitives into pixels, feeding these pixels into the pixel shader. The Rasterizer may also perform other tasks such as clipping what is not visible, or interpolating vertex data into per-pixel data.
  6. Pixel Shader: Determines the final pixel colour to be written to the render target and can also calculate a depth value to be written to the depth buffer.
  7. Output Merger: Merges various types of output data (pixel shader values, alpha blending, depth/stencil...) to build the final result.

The pipeline stages illustrated with a round box are fully programmable. The application provides a shader program that describes the exact operations to be completed for that stage. Many stages are optional and can be disabled altogether.


Another thing I understood is what this signature thing is all about!

When drawing, you should set the input layout of the data. This input layout needs some sort of bytecode signature, as in:


// initialization
var inputSignature = ShaderSignature.GetInputSignature(pVSBlob);
var layout = new InputLayout(Device, inputSignature, new[] {
    new InputElement("VERTEX", 0, Format.R32G32B32_Float, 0),
});

// rendering

In here, signature is not about signing your code / security. It’s about checking that the InputLayout defined in code matches the input of the vertex shader (i.e. the entry point of the rendering pipeline). That’s why the signature always comes from the vertex shader definition.



Somehow I found the declaration of the various shaders involved in your rendering pipelines quite cumbersome. Now, apparently, there is a way to do it all in the HLSL file by using effects. An effect (in your HLSL file) looks like this:

technique10 RenderTileDiff10
{
    pass p0
    {
        SetVertexShader( CompileShader( vs_4_0, VSBasic() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PSTerrain(false) ) );
        SetDepthStencilState( EnableDepth, 0 );
        SetBlendState( NoBlending, float4( 0.0f, 0.0f, 0.0f, 0.0f ), 0xFFFFFFFF );
        SetRasterizerState( CullBack );
    }
}

It looks like the C++ / C# code for setting up your pipeline, just much more compact!

Once you have created a few effects, to use them you have to compile your shader file and get a pointer to the technique of your choice.

To create the layout you will need to get the effect’s vertex shader (for the signature)

Here is some pseudo code that uses the above effect and does initialization and rendering:

//====== Initialization =============
// compile the shader and get the effect
var sbytecode = ShaderBytecode.CompileFromFile(
    sFlags, EffectFlags.None, null, null);
var myEffect = new Effect(Device, sbytecode);

// get the technique(s) of interest
var myTechnique = myEffect.GetTechniqueByName("RenderTileDiff10");

// define input data layout
var inputdesc = new InputElement[]
{
    // Lloyd: watch out! trap! offset and slot are swapped between C++ and C#
    new InputElement ( "POSITION", 0, Format.R32G32B32_Float,  0, 0, InputClassification.PerVertexData, 0 ),
    new InputElement ( "NORMAL",   0, Format.R32G32B32_Float, 12, 0, InputClassification.PerVertexData, 0 ),
    new InputElement ( "TEXCOORD", 0, Format.R32G32_Float,    24, 0, InputClassification.PerVertexData, 0 ),
};
var PassDesc = myTechnique.GetPassByIndex(0).Description;
var vertexsignature = PassDesc.Signature;
var inputlayout = new InputLayout(Device, vertexsignature, inputdesc);

// ======== Rendering ===
// set a shader variable
var mWorld = myEffect.GetVariableByName("g_mView").AsMatrix();

// render with a technique
var Desc = myTechnique.Description;
for (int iPass = 0; iPass < Desc.PassCount; iPass++)
{
    // apply the pass state to the device before drawing
    myTechnique.GetPassByIndex(iPass).Apply();
    Device.DrawIndexed(g_Terrain.NumIndices, 0, 0);
}

Still not sure what the passes are about though.



Direct3D 9, 10, 11

There are two sides to Direct3D: the runtime API installed on your computer, and the feature level (as it has been called since D3D 10.1) supported by the video card. So while you might have DirectX 11 installed on your system, your video card might only support Direct3D 10.0.

One thing with the D3D 10.1 runtime and up (if it’s installed, by your installer for example) is that you can use whatever version of D3D you like, but target (or use) a given feature level. The differences between feature levels are summarized there.


Anyhow, I had various problems and successes with each version of D3D.

I’m working on those samples at home and everything works fine. At work it doesn’t, though, due to my work video card only supporting D3D10 (and maybe some incorrect initialization / hardware testing on my part).

First, to be rendered in a D3DImage the render targets should be compatible with D3D9 surfaces. In the case of D3D 10 and 11, that means they should be created with ResourceOptionFlags.Shared. But this is not supported by D3D10 (only D3D10.1)! It’s hard for me to test, as my computer has a D3D11 compatible card; I still have some initialization issues on low end computers, for lack of a testing machine.

Secondly, while D3D11 includes some amazing new features, such as compute shaders (talk about parallel processing!), geometry shaders (with which you can do realistic fur) and a high performance software renderer (the WARP device), it has no support for text and fonts at all! Although (I have to test it) supposedly one can render part of the scene with D3D10 (the text, for example) and use the resulting texture in D3D11 directly, as the surfaces have a compatible format.



I learned I needed a camera class to describe and manipulate the world, view and projection matrices! I was inspired by DXUTCamera.h and wrote classes very similar to the sample’s.

The camera has the following interesting members:

public abstract partial class BaseCamera
{
    public BaseCamera()

    public void SetViewParams(Vector3 eye, Vector3 lookAt)
    public virtual void SetViewParams(Vector3 eye, Vector3 lookAt, Vector3 vUp)
    public void Reset()
    public Vector3 Position
    public Vector3 LookAt
    public Vector3 Up
    public Matrix View { get { return mView; } }

    public void SetProjParams(float fFOV, float fAspect, float fNearPlane, float fFarPlane)
    public float NearPlane
    public float FarPlane
    public float AspectRatio
    public float FieldOfView
    public Matrix Projection { get { return mProj; } }

    public void FrameMove(TimeSpan elapsed)
}

public partial class FirstPersonCamera : BaseCamera
public partial class ModelViewerCamera : BaseCamera

Note that lookAt is the point the camera is looking at, not the direction it’s gazing in!


It contains the current view and projection matrices. It handles key and mouse input by changing the view matrix. It can also update the view matrix given an elapsed time (for changing the view between frames, when keys are down).

It’s imperfect (I think I will write a better one once I start porting Babylon from XNA to DirectX+WPF) though.

Ah, well, when experimenting with cameras I had to read about… quaternions! Which I only knew by name, and feared, until now.

I won’t say I master quaternions yet! Oh no!

But I understand enough to be dangerous. Here are some good introductory links on quaternions:

Tags: , ,
Categories: .NET | DirectX | WPF
Permalink | Comments (0) | Post RSSRSS comment feed

Introduction to COM

Found this excellent introduction to COM by Jeremiah Morill on his blog there.

Here is the introduction (follow the link for the full post!)

To some .NET developers, COM is a dirty little turd that no matter how hard they try, it won’t flush.   Microsoft and other vendors keep on pumping out new COM based SDKs year after year.  Sometimes these APIs are directly consumable by .NET and in the case they are not, you can visibly see developers’ anxiety rise at the thought of working directly with COM.  There’s a lot of beef with a lot of developers about COM.  Where does it all come from?


“I did XYZ in COM and I couldn’t make it work, so COM sucks”, “I used ABC COM technology and it was way too overcomplicated” and the variations are things that have been heard and are wide spread.  While the fact that so many developers have these grievances about COM is generally relevant, it is also not very fair.  Imagine looking at .NET for the first time and diving right into WCF or Workflow and saying “I got burned by it, so .NET blows”.

… (continued on Jeremiah’s blog)…

Tags: ,
Categories: .NET | link
Permalink | Comments (0) | Post RSSRSS comment feed

Discovering D and Visual Studio (continued…)

Thanks to the feedback on the previous article, I now know that:

  • DFL should work with the BCL in D2, but it just doesn’t at the moment, due to some repository snafu…
  • Visual D had build problems due to… tool chain issues! This page about Visual D known issues explains what’s going on and how to fix it!

Hence, by modifying C:\D\dmd2\windows\bin\sc.ini to be like the following (the DFLAGS line is the one that changed):

version=7.51 Build 020

DFLAGS="-I%@P%\..\..\src\phobos" "-I%@P%\..\..\src\druntime\import"


I was able to end up with a much more satisfactory config:



Community support

Maybe I should mention that D, despite being a marginal language (as in: little known), has surprisingly vibrant community support.

Also, the Digitalmars page on D is full of interesting links (particularly Tools => More Tools), but I’d like to draw attention to two communities:

  • D Source forums
  • Digitalmars newsgroups (news server: news.digitalmars.com). Can be accessed with Windows Live Mail or Thunderbird for example. Just create a new newsgroup account



Initial sample reloaded

I thought it might be a good idea to have DGui in my projects (i.e. as a source code project in my solution) and link my test project against it. This way I can look at, play with and learn from some D code written by a more knowledgeable D programmer than I!

It turns out there are key differences from what would happen when doing that with C#. Perhaps it behaves like Visual C++ would; I couldn’t tell, not having used it enough…

At any rate here is what happened

  • I created a new Visual D static library
  • Created the directory hierarchy of the DGui source (it’s important for D: just like in Java, it reflects the package organisation)
  • Added the existing files from DGui
  • And built, successfully on the first try! (Nicely enough, DGui has no dependency other than Phobos, the BCL.)


Now, when looking at the file system, there was no D file in my project’s directory! All the files in the solution explorer are links to the one and only original files.

Hence, I didn’t need to change the import file paths of my test project. For the record, they remain pointing to




Updating the dependencies

But I still needed to point to my built version of DGui, hence change the library path of my test project. And I also needed to set my test project to depend on my (local) dgui project.

Right click on project => Project dependencies => select dgui



Set the library search path to the project build location: C:\Dev\DTest\dgui\Debug.


F5: it builds and runs!

Now for the surprisingly good part: in my test app, if I press F12 on one of the classes I use, VS actually goes there!

But no method list (yet?).

Just for fun I added a “std.stdio.writeln("hello");” in an initialization-looking method (I still know nothing of D, hey!) and… it printed!


Getting rid of the console

I’m trying a GUI sample, yet I have a nagging MS-DOS console appearing every time I run it. To get rid of it I should pass a special flag to the linker (in Additional options) so it will mark the executable as a Windows application (i.e. no console attached!).





A brief look at the source code

So far I haven’t coded anything yet! I just wanted to feel comfortable with the development experience… But I will post about source code next time!

Anyhow, below is the source code of my test app (straight from events.d in DGui’s samples):

module events;

import dgui.all;

class MainForm: Form
{
    private Button _btnOk;

    public this()
    {
        this.text = "DGui Events";
        this.size = Size(300, 250);
        this.startPosition = FormStartPosition.CENTER_SCREEN; // Set Form Position

        this._btnOk = new Button();
        this._btnOk.text = "Click Me!";
        this._btnOk.dock = DockStyle.FILL; // Fill the whole form area
        this._btnOk.parent = this;
        this._btnOk.click.attach(&this.onBtnOkClick); // Attach the click event to the selected procedure
    }

    private void onBtnOkClick(Control sender, EventArgs e)
    {
        // Display a message box
        MsgBox.show("OnClick", "Button.onClick()");
    }
}

int main(string[] args)
{
    return Application.run(new MainForm()); // Start the application
}

The exciting things I can see from a quick look at the source are that D supports properties and events, and that DGui looks quite similar to WinForms!

Except for the lack (at the moment) of designer Sad smile 

Oh well, I just plan to make simple wizards anyway…

Categories: D | Source Code
Permalink | Comments (0) | Post RSSRSS comment feed

D for .NET programmer

Recently it appeared to me that, with D, I could finally solve a long-standing problem: writing a good advanced installer.

What appealed to me were the following features:

  • Statically linked. Produces an exe with no dependencies! (Save for Win32, that is; fair enough!)
  • Elegant syntax, on par with C# and Java
  • Compile-time executable code, making it easy to include “resource” files (zip and include the files to install inside the installer)
  • Finally with a nice IDE and GUI library (almost as good as a slim WinForms)
  • Can directly call into C (I plan to call into the MsiXXX functions) at no cost!

So I started to try to use it.

Oh boy, just getting one program using one custom library to compile was quite an odyssey! Hopefully this blog entry can save future language explorers some time!


1. Installation

First things first, you’ll need to download and install the D SDK (the compiler, linker, runtime library, etc… the whole shebang!). It can be found there:


Unzip the latest version of DMD somewhere, you are done! Now that was easy.

And if you heard about DM / DMC, don’t worry about it. It was for D1, just ignore it.


2. IDE

Well, arguably you can go with the command line. The compiler flags and linker flags are simple enough. Yet I’m too spoiled by VS, I need my IDE!
In fact, on the D2 page, following the Editor link on the left (in Tools), I found 3 which were attractive: D-IDE, Entice and Visual D.

D-IDE is cool because it’s written in C#, but that’s about it (it’s still basic and buggy, sadly).

Entice is a GUI designer. It was exciting at first, but I deleted it in the end because it is used with DFL or DWT, which both require overwriting the base runtime library (aka Phobos), which I don’t quite feel like doing yet.

And, finally, Visual D, a plugin for Visual Studio. Create D projects in Visual Studio, yoohoo! Download and install it now.



3. My first program

You can create a new module with Visual D (new D project), press F5, and voila, hello D world!


Now I wanted to create a GUI Hello world!

I had a look at DFL and DWT. DFL looks lighter / smaller and good enough for my needs. Yet they both require Tango, which seems to be a popular replacement for the D BCL (aka Phobos). Well, that ended it for me. Tango might be popular, but I’m not going to replace my BCL when I can’t even compile a(n advanced) program yet (you’ll see my library troubles next).

Finally I found DGui: it looks like WinForms and depends just on Phobos (the BCL).

I downloaded, unzipped, copied a sample (in the sample directory) into my “hello.d” file and tried to compile.

Now the problems started…

First error would be:

Error    1    Error: module all is in file 'dgui\all.d' which cannot be read    C:\Dev\DTest\DTest1\hello.d    3


This is due to the import statement at the top:

import dgui.all;


3.a. file and hierarchy structure

D has two ‘code units’, if I may call them that: modules and packages. It’s a bit like classes and packages in Java. A module is everything in one file. I’m not yet sure whether the module must be named after the file it’s in, but that, at least, seems to be the convention.

The package is a folder of source code.

When I wrote “import dgui.all” the compiler looked for the module “all” (i.e. the file “all.d”) in the package (i.e. directory) “dgui”.

I need to specify the search path for those modules / packages for this to work. Go to project properties => Configuration => DMD and add the path to the source of the library.
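Concretely, the search path, package directory and module file line up like this (using the DGui paths from this post; the root directory is what goes in the DMD import path setting):

```
C:\D\dgui\          <- added to the import search path
    dgui\           <- the package (directory)
        all.d       <- the module dgui.all (file)
        core\       <- sub-packages are sub-directories
```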


Press F5; now we get another error (i.e. we progressed!)…


3.b. declaration file

So, what’s going on? And why does it look for a ‘.d’ file? (A source file! I want to use the library, not recompile it!)

Here is what happens: the compiler needs some declarations to describe the functions that are going to be called. They could be defined inline in the program (if calling into Win32 from D, for example, much like DllImport in C#), but more generally it will look at declaration files provided by the library. There are two options there:

  1. Unlike C, where the developer should maintain and synchronise a definition (header / .h) and an implementation (.c / .cpp) file, D can use a single implementation file for both purposes. Hence it will look in the original D source file.
  2. If there is a need to protect some intellectual property, the developer can generate an ‘interface file’ when compiling (an ordinary D file with the ‘.di’ extension and only declarations inside) and deploy these instead of the source code.


3.c. linking

Much like C, C++ and other native environments, compiling D is a two-stage process: first compiling the source files into object files, then linking all those files into various targets (library (.lib or .dll), executable (.exe)).

But it’s all done on the keystroke of F5, hence the catch-all usage of ‘compile’ for both compiling (making object files) and linking (creating the final output).
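Under the hood, the two stages look something like this (illustrative dmd invocations; the exact paths and flags come from your own build log):

```shell
# stage 1: compile the source to an object file
# (-c = compile only, -I = import search path for modules / packages)
dmd -c hello.d -I"C:\D\dgui"

# stage 2: link the object file and the library into the executable
dmd hello.obj "C:\D\dgui\lib\dgui.lib" -ofhello.exe
```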

When I ‘compile’ now, I get these kinds of errors:

Error    1    Error 42: Symbol Undefined _D4dgui4form4Form13startPositionMFNdE4dgui4core5enums17FormStartPositionZv    C:\Dev\DTest\DTest1\   
Error    2    Error 42: Symbol Undefined _D4dgui7control7Control4sizeMFNdS4dgui4core8geometry4SizeZv    C:\Dev\DTest\DTest1\   
Error    3    Error 42: Symbol Undefined _D4dgui6button6Button6__ctorMFZC4dgui6button6Button    C:\Dev\DTest\DTest1\   

And so on.

‘Symbol Undefined alien_looking_name’ generally means a linker problem. It just so happens that Visual D creates a “buildlog.html” as well as a “$(project).build.cmd” file in the output folder, so you can see what commands it ran to compile and link the program.

Now I guess I need to include dgui.lib in my project. So go to project properties => Linker and set the libraries and the search path.


Now it still doesn’t compile, and gives me this warning:

Warning    1    Warning 2: File Not Found dgui.lib    C:\Dev\DTest\DTest1\


I can see in the cmd file, or build log, that it builds with the following commands:

Command line:

    set PATH=C:\D\dmd2\windows\bin;C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\\bin;%PATH%
    set DMD_LIB=;C:\D\dgui\lib
    dmd …

Now, I can see in the D linker’s documentation that DMD_LIB is not used; it is LIB that is used. I guess it’s a little bug in Visual D (maybe it was made for D1?). At any rate, I solved it by setting the whole path to the DGui library!


F5… build succeeded!


4. Bonus, compile DGui

Well, one could add all the files of DGui to Visual D, I guess. But DGui comes with a “dgui_build_rel.bat”; I should use it, don’t you think?

Well all I had to do was to add the path to dmd.exe (i.e. C:\D\dmd2\windows\bin\) to the PATH environment variable.

As found in System Properties => Advanced System Settings => Advanced => Environment Variables…


Now I can just click on the bat file and… DGui builds successfully; that was easy! Smile


5. Done

That’s it: I showed how to install D, install a library, and compile a program using it!

Categories: D | Source Code | Visual Studio
Permalink | Comments (0) | Post RSSRSS comment feed

Introducing DirectX to WPF

I started to learn DirectX. I wanted, of course, to use it in a WPF environment. I don’t hope to write a game (yet?) but I thought it would be a good API for high performance data visualization, or simply for capturing and tweaking webcam output.

I discovered SharpDX by Alexandre Mutel, which is a 100% managed wrapper. Better yet, it performs better than all other managed wrappers, it seems! At least according to this.

To start with DirectX you need to download the DirectX SDK which is good because it contains tons of tutorials and samples.

I started by rewriting all 7 MSDN tutorials in C#. I tried to write code very close to the original C++ (for comparison purposes) yet beautified (thanks to C#!); I shall say it came out well! Smile



Speaking of which, when you work with DirectX you have to write (pixel, vertex, hull, etc…) shaders (introduction). Basically they are little programs that convert the data from one graphics processing stage to another. The shader code looks remotely like a simple C file with some extras.

Once again thanks to Alexandre Mutel, I found an extension for VS2010 which provides syntax colouring for shaders: NShader.

With that, shader programs are much easier to read; behold:

struct VS_INPUT
{
    float4 Pos : POSITION;
    float3 Norm : NORMAL;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float3 Norm : TEXCOORD0;
};

// Vertex Shader
PS_INPUT VS( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = mul( input.Pos, World );
    output.Pos = mul( output.Pos, View );
    output.Pos = mul( output.Pos, Projection );
    output.Norm = mul( input.Norm, World );
    return output;
}


So, how does this all work?


From WPF’s point of view, the DirectX code is to be rendered into a Texture2D and the Texture2D to be displayed with a D3DImage.

It starts with:

public class DXImageSource : D3DImage, IDisposable
{
    public void Invalidate();

    public void SetBackBuffer(SharpDX.Direct3D10.Texture2D texture);
    public void SetBackBuffer(SharpDX.Direct3D11.Texture2D texture);
    public void SetBackBuffer(SharpDX.Direct3D9.Texture texture);
}

With this subclass of D3DImage you can directly set a SharpDX / DirectX Texture2D as the back buffer of the image. (Remark that the textures should be created with ResourceOptionFlags.Shared, as they will be accessed by the D3DImage through a shared D3D9 interface.)
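As a rough sketch of how such a shared texture might be created with SharpDX and handed to the image source (the size, format and flag values here are illustrative assumptions, not taken from the article):

```csharp
// Hypothetical sketch: create a D3D11 texture that can be shared with D3D9,
// then set it as the back buffer of the DXImageSource.
using SharpDX.Direct3D11;
using SharpDX.DXGI;

var desc = new Texture2DDescription
{
    Width = 800,                              // assumed size
    Height = 600,
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.B8G8R8A8_UNorm,           // a D3D9-compatible format
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    OptionFlags = ResourceOptionFlags.Shared, // required for D3DImage interop
};
var texture = new Texture2D(device, desc);    // device: an existing D3D11 device

var source = new DXImageSource();
source.SetBackBuffer(texture);                // displayed by WPF via a shared D3D9 surface
```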

This ImageSource could very well be used in an Image control. But to provide continuous updating, resizing, etc. I created the following FrameworkElement:

public interface IDirect3D
{
    void Reset(ResetArgs args);
    void Render(RenderArgs args);
}

public class DXElement : FrameworkElement
{
    public DXImageSource Surface { get; }
    public IDirect3D Renderer { get; set; }
    public bool IsLoopRendering { get; set; }
}

The DXElement does very little by itself. It handles resize events. If IsLoopRendering is true it renders its Renderer every frame. It captures the mouse and forwards the events to the Renderer if it implements IInteractiveRenderer (which D3D does).
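Wiring this up could then look like the following sketch (MyRenderer and its method bodies are made-up placeholders; the real implementations live in the CodePlex samples):

```csharp
// Hypothetical sketch: a renderer plugged into a DXElement.
public class MyRenderer : IDirect3D
{
    public void Reset(ResetArgs args)
    {
        // recreate size-dependent resources here (render target, depth buffer...)
    }

    public void Render(RenderArgs args)
    {
        // issue the DirectX draw calls for this frame
    }
}

// somewhere in the window's code-behind:
var dxElement = new DXElement
{
    Renderer = new MyRenderer(),
    IsLoopRendering = true, // render every frame
};
```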


And that’s it for the UI.


From the DirectX point of view I provide a few classes (the D3D tree) that just create an appropriate Device and have virtual methods to override for rendering.


What’s next?

  • Direct2D1 only works with Direct3D10; I’d like to make it work with Direct3D11.
  • There are still many things I’d like to know to be reasonably confident, so I (just) started to implement various samples which will show some interesting aspects of DirectX (1 completed so far).

Where can you get the code?
Well… it’s all on CodePlex!

Categories: DirectX | WPF

Composite Application Reloaded



As I was starting a new project I was looking for a library that would help me write a composite application, i.e. an application with a main shell (window) and pluggable extensions (DLLs / modules) added to it dynamically; a library like Prism, but hopefully much simpler.

Many pieces of the puzzle could already be found elsewhere. The application had to have a clear separation between data and view, i.e. an MVVM approach. Services had to be linked automatically with something like MEF. Data validation should be automatic (thanks to ValidationAttribute(s)).

But improvements had to be made regarding disconnected messaging and view resolution. About view resolution, i.e. the process of finding the appropriate view to represent a given business data object, I wanted to just tag the view with an attribute, such as DataView(typeof(BusinessData1)), and let the library take care of the rest. This is where this library came from.

Table of content

About MEF and MVVM
What does a composite application look like
Main Features
    Composition GetView
    Validation and ViewModelBase
    Disconnected Messaging
Other Features



To test whether my library was up to its goal I ported three samples to it. In all cases I was able to reduce the application size while maintaining functionality.

  • Josh Smith’s MVVM Demo. This is the best sample, as it is small and simple yet it covers almost all features of the library (after some modifications) and is a real composite application. I was able to get rid of the hand-written validation code and use ValidationAttribute instead. And I tweaked the MainWindow and App class to make it a composite application, and used the DataControl in the TabItem to bind multiple controls to the same model with different views.
  • Prism’s main sample, the StockTraderApp project (a huge sample). I removed the presenters (code which was used to bind views and view models, now replaced with calls to Composition.GetView() and DataControl), the EventAggregator and the custom Prism events (replaced by Notifications static methods). The most challenging and interesting part was to get rid of the RegionManager and replace it with the IShellView, which explicitly exposes the areas of the shell that can be used, doing away with the RegionManager’s magic-string approach.
  • MEFedMVVM library demo. The application is relatively simple but it makes extensive use of design-time support, and the design-time experience is a joy to behold.
The unit tests illustrate how the most important features work (i.e. Composition, Notification, ViewModelBase and Command).

About MEF and MVVM

Josh Smith talked about MVVM extensively on MSDN already. But, to summarize, MVVM is a View Model approach where all the logic and information is in the models. And by all I mean all, to the extent of including the selected element of a list, or the position of the caret, if need be.

In MVVM the view is nothing more than some declarative XAML (and possibly some UI-specific code if need be, with no business code at all, just pure UI logic). And because business data might not express all the information present in a view (such as a selected area, an incorrect value in a text box, etc.), business models might be wrapped in view models. View models are business model wrappers with a few extra, view-friendly properties. This offers various advantages, including increased testability, better separation of concerns, and the possibility to have independent teams for business data and UI.

MEF, or Managed Extensibility Framework, solves the problem of passing services and other shared objects around in a very neat way. It also makes it easy to find implementations of an interface.
Basically “consumer” objects declare what they need with the import attribute, like so:

[Import]
public ISomeInterface MySome { get; set; }
Somewhere else in the code, exporting objects are declared with the export attribute:
[Export(typeof(ISomeInterface))]
public class SomeInterfaceImplementation : ISomeInterface
{ /* ... */ }
Remark both properties and methods can be exported. Imports can be a single object (Import) or many (ImportMany). I strongly recommend that you read the MEF documentation.
To find the implementations for all the imports needed by your objects, two actions must be done:
  1. At the start of the program, a “catalog” of types for MEF should be initialized from a list of types, assemblies and / or directories, which is where MEF will look to locate exports. It’s where you opt-in for the modules of interest. With this library you’ll call the method Composition.Register(), as shown below.
  2. You “compose” the objects that need to be resolved (i.e. which contains imports). With this library you’ll use the method Composition.Compose().
MEF contains various tags to control whether instances are shared or not, and whether multiple implementations of an export are valid or not. Again, this is covered in the documentation.
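For reference, the two steps above can be sketched with plain MEF (no helper library); the interface and implementation are the hypothetical ones from the snippet earlier:

```csharp
// Hypothetical sketch using the standard System.ComponentModel.Composition API.
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface ISomeInterface { }

[Export(typeof(ISomeInterface))]
public class SomeInterfaceImplementation : ISomeInterface { }

public class Consumer
{
    [Import]
    public ISomeInterface MySome { get; set; }
}

class Program
{
    static void Main()
    {
        // 1. the catalog tells MEF where to look for exports
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        using (var container = new CompositionContainer(catalog))
        {
            // 2. compose the object that declares imports
            var consumer = new Consumer();
            container.ComposeParts(consumer);
            // consumer.MySome is now a SomeInterfaceImplementation instance
        }
    }
}
```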

What does a composite application look like

A composite application is an application where there is a well-known top level UI element, typically a window for a desktop application or a Page for Silverlight. This top level UI element is called the “Shell”.

The shell contains multiple areas where pluggable content will be hosted. Content that is not defined by the shell but in modules that are loaded dynamically (with MEF for example).

For example in the example below a shell is defined with two areas:

  • A list box, showing a single list of documents.
  • An ItemsControl (a TabControl) which can contain numerous items.
	<Grid>
		<Grid.ColumnDefinitions>
			<ColumnDefinition Width="Auto" />
			<ColumnDefinition Width="*" />
		</Grid.ColumnDefinitions>
		<ListBox
			Items="{Binding Documents}" />
		<TabControl Grid.Column="1"
			ItemsSource="{Binding Workspaces}" 
			ItemContainerStyle="{StaticResource ClosableTabItemStyle}"/>
	</Grid>
Remark Don’t worry too much about the shell when you create it; it is relatively easy to add or move areas later in the development, if need be. But it’s harder to remove an area in use!

Once a shell has been defined, an interface to it should be exposed via a common library. It could be an “IShellInterface” (see the IShellView in the StockTraderApp for example), or its view model (see MainViewModel in DemoApp for example), or a combination of the two!

As an example here is the IShellView interface from the StockTraderApp sample:

public enum ShellRegion
{
	// ...
}

public interface IShellView
{
	void ShowShell(); // this will show the shell window
	void Show(ShellRegion region, object data); // this will set the currently selected item
	ObservableCollection<object> MainItems { get; }
}
Once a shell has been defined the composite application can be written. Four short steps are involved:
  1. Define the DLLs that are to be dynamically loaded
  2. Create the shell (or skip and import it in 3.)
  3. Export the shell and import the modules
  4. Start the app / initialize the modules
For example, here is what the StockTraderApp’s simplified App code could look like:
public partial class App
{
	public App()
	{
		// 1. Opt-in for the DLLs of interest (for import-export resolution)
		Composition.Register(
			typeof(IShellView).Assembly
			, typeof(MarketModule).Assembly
			, typeof(PositionModule).Assembly
			, typeof(WatchModule).Assembly
			, typeof(NewsModule).Assembly
		);

		// 2. Create the shell
		Logger = new TraceLogger();
		Shell = new Shell();
	}

	[Export(typeof(IShellView))]
	public IShellView Shell { get; internal set; }

	[Export(typeof(ILoggerFacade))]
	public ILoggerFacade Logger { get; private set; }

	[ImportMany]
	public IShellModule[] Modules { get; internal set; }

	public void Run()
	{
		// 3. export the shell, import the modules
		Composition.Compose(this);

		// 4. Start the modules, they would import the shell
		// and use it to appear on the UI
		foreach (var m in Modules)
			m.Init(); // assuming IShellModule exposes an initialization method
	}
}

Main Features

The library grew quite a lot from its humble beginnings. It consists of two main parts: features which are critical to MVVM and composite development, and optional features which were useful additions.

The central class for most features of this library is the Composition class. It also contains two important properties, Catalog and Container, which are used by MEF to resolve imports and exports. You need to fill the Catalog at the start of the application with Composition.Register(), for example:

static App() // init catalog in App’s static constructor
{
	Composition.Register(
		typeof(TitleData).Assembly
		, typeof(SessionInfo).Assembly
	);
}
Later, service imports and exports can be resolved with MEF by calling Composition.Compose().

Composition GetView

When an MVVM development pattern is followed, one writes business models and / or view models, and views for these data models. Typically these views will only consist of “XAML code”, and their DataContext property will be the business model. Often MVVM helper libraries provide some way of finding and loading these views.

In this library the views need to be tagged with a DataViewAttribute which specifies which model type the view is for:

[DataView(typeof(WorkspaceViewModel))]
public partial class CustomerView : UserControl
{
	public CustomerView()
	{
		InitializeComponent();
	}
}
Then, from a data model, you can automatically load the appropriate view (and set its DataContext) with a call to Composition.GetView(), for example:
public void ShowPopup(object message, object title)
{
	var sDialog = new MsgBox();
	sDialog.Message = Composition.GetView(message);
	sDialog.Title = Composition.GetView(title);
}
Often models are not displayed as a result of some method call but simply because they are an item in an ItemsControl or the content of a ContentControl. In this case the DataControl control can be used in XAML to display the item by calling Composition.GetView().

Remark It also brings a DataTemplate like functionality to Silverlight.

Because we use a View-Model approach, the same data model can be shown in multiple places at the same time hence Composition.GetView(), DataViewAttribute and the DataControl have an optional location parameter.

In the example below, the same UserViewModel instance (subclass of WorkspaceViewModel) is used to display both the TabItem header and content using a different location parameter (note: location is not set, i.e. it is null, in the second template).

<Style x:Key="ClosableTabItemStyle" TargetType="TabItem" BasedOn="{StaticResource {x:Type TabItem}}">
	<Setter Property="HeaderTemplate">
		<Setter.Value>
			<DataTemplate>
				<g:DataControl Data="{Binding}" Location="header"/>
			</DataTemplate>
		</Setter.Value>
	</Setter>
	<Setter Property="ContentTemplate">
		<Setter.Value>
			<DataTemplate>
				<g:DataControl Data="{Binding}"/>
			</DataTemplate>
		</Setter.Value>
	</Setter>
</Style>
Both views were defined like this (note the first view is the default view, i.e. location is not set, it is null):
[DataView(typeof(WorkspaceViewModel))]
public partial class CustomerView : System.Windows.Controls.UserControl
{
	public CustomerView()
	{
		InitializeComponent();
	}
}

[DataView(typeof(WorkspaceViewModel), "header")]
public partial class CustomerHeaderView : UserControl
{
	public CustomerHeaderView()
	{
		InitializeComponent();
	}
}

Validation and ViewModelBase

Inspired by Rob Eisenberg’s talk, I created a ViewModelBase class which implements two important interfaces for WPF development: INotifyPropertyChanged and IDataErrorInfo.

The INotifyPropertyChanged implementation is strongly typed (refactor friendly):

public class Person : ViewModelBase
{
	public string Name
	{
		get { return mName; }
		set
		{
			if (value == mName)
				return;
			mName = value;
			OnPropertyChanged(() => Name); // See, no magic string!
		}
	}
	string mName;
}
The IDataErrorInfo interface allows the WPF bindings to validate the properties they are bound to (if NotifyOnValidationError=true). The implementation in ViewModelBase validates the properties using ValidationAttribute(s) on the properties themselves. For example:
public class Person : ViewModelBase
{
	public string Name { get; set; }
	public string LastName { get; set; }

	[OpenRangeValidation(0, null)]
	public int Age { get; set; }

	public string Initials { get; set; }
	public string InitialsError()
	{
		if (Initials == null || Initials.Length != 2)
			return "Initials is not a 2 letter string";
		return null;
	}
}
The example above also illustrates some of the new ValidationAttribute subclasses provided in this library, in the Galador.Applications.Validation namespace, such as OpenRangeValidation.

A control with an invalid binding will automatically be surrounded by a red border (the default style), but the error feedback can be customized as shown in the XAML fragment below, which displays the error message below the validated text:

<!-- FIRST NAME -->
<Label 
	Grid.Row="2" Grid.Column="0" 
	Content="First _name:" 
	Target="{Binding ElementName=firstNameTxt}" />
<TextBox x:Name="firstNameTxt"
	Grid.Row="2" Grid.Column="2" 
	Text="{Binding Path=Customer.FirstName, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged, BindingGroupName=CustomerGroup}" />
<!-- Display the error string to the user -->
<ContentPresenter 
	Grid.Row="3" Grid.Column="2"
	Content="{Binding ElementName=firstNameTxt, Path=(Validation.Errors).CurrentItem}" />

Disconnected Messaging

In a composite application there is a need for components to send messages to each other without knowing each other. The Notifications class and its static methods are here to solve this problem.

First a common message type should be defined in a common library. Then objects can:

  • Subscribe to messages for this type (with the static Subscribe() and Register() methods).
  • Publish messages (with Publish()).
  • Unsubscribe from messages if they are no longer interested in them (with Unsubscribe()).
Remark The subscription thread is an optional parameter that can be either the original thread, the UI thread or a background thread.

To illustrate these functionalities here is a snippet of code from the Notifications’ unit test class.

public void TestSubscribe()
{
	// subscribe to a message
	Notifications.Subscribe<NotificationsTests, MessageData>(null, StaticSubscribed, ThreadOption.PublisherThread);

	// publish a message
	Notifications.Publish(new MessageData { });

	// unsubscribe from the message
	Notifications.Unsubscribe<NotificationsTests, MessageData>(null, StaticSubscribed);
}

static void StaticSubscribed(NotificationsTests t, MessageData md)
{
	// message handling
}
Arguably the Notifications.Subscribe() syntax is a bit cumbersome. That’s why an object can also subscribe to multiple message types in one swoop by calling Notifications.Register(this), which will subscribe all its methods that take one argument and are tagged with NotificationHandlerAttribute, as in:
public void TestRegister()
{
	// register to multiple message types (1 shown below)
	Notifications.Register(this, ThreadOption.PublisherThread);

	// publish a message
	Notifications.Publish(new MessageData { Info = "h" });
}

[NotificationHandler]
public void Listen1(MessageData md)
{
	// message handling
}


To avoid the need for code in the UI, yet handle code-triggering controls such as a Button or a MenuItem, WPF (and Silverlight 4) came up with commands (ICommand to be exact). When a button is clicked the control’s action is triggered, and if it has a Command property it will call Command.Execute(parameter), where the parameter is the Control.CommandParameter property.

ViewModels need to expose Command properties whose Execute() methods call one of their methods. For this purpose there is the DelegateCommand.

A delegate command can be created by passing a method to execute and an optional method to check if the method can be executed (which will enable / disable the command source, i.e. the button). For example:

var p = new Person();
var save = new DelegateCommand<Person>(p, aP => { aP.Save(); }, aP => aP.CanSave);
Remark The command will automatically detect INotifyPropertyChanged properties and register to the PropertyChanged event to update its CanExecute() status.

Remark Sometimes you need commands such as “DoAll” (as in “CancelAll” or “BuyAll”), hence the ForeachCommand class, which is an ICommand itself that watches a list of ICommands and can be executed only when all of its commands can be executed.
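A hypothetical usage might look like the following sketch (the workspaces collection, the CancelCommand property and the constructor shape are made-up assumptions, not taken from the library’s actual API):

```csharp
// Hypothetical sketch: one "CancelAll" command built from many commands.
// ForeachCommand is assumed to take the inner commands to watch;
// it stays enabled only while every inner command can execute.
using System.Linq;

var cancelAll = new ForeachCommand(
    workspaces.Select(w => w.CancelCommand));

CancelAllButton.Command = cancelAll;
```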

Other Features

A few other non-essential features found their way into this library.

There is the Invoker, which assists in running code on the GUI thread. It can be used in both WPF and Silverlight.

public class Invoker
{
	public static void BeginInvoke(Delegate method, params object[] args);
	public static void DelayInvoke(TimeSpan delay, Delegate method, params object[] args);
}
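Usage could be as simple as the following sketch (UpdateStatus is a made-up method used for illustration):

```csharp
// Hypothetical sketch: marshal a call onto the GUI thread,
// then run another one a second later.
Invoker.BeginInvoke((Action<string>)UpdateStatus, "Loading done");
Invoker.DelayInvoke(TimeSpan.FromSeconds(1), (Action<string>)UpdateStatus, "Ready");
```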

There is design time support for the data views. Using the attached property Composition.DesignerDataContext on a data view sets its DataContext at design time:

<UserControl x:Class="MEFedMVVMDemo.Views.SelectedUser"
These view models (i.e. the DataView’s DataContext) can compose themselves (i.e. call Composition.Compose(this)) to import some other services.
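Such a self-composing view model could be sketched like this (SelectedUserViewModel is a made-up name; IUsersService is the service interface from the design-time example below):

```csharp
// Hypothetical sketch: a view model resolving its own imports.
public class SelectedUserViewModel
{
    [Import]
    public IUsersService Users { get; set; }

    public SelectedUserViewModel()
    {
        Composition.Compose(this); // resolves the [Import] properties above
    }
}
```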

Remark Having a design time DataContext makes the experience of writing a DataTemplate a whole lot better.

Remark These view models can be made aware that they are in design mode if they implement the IDesignAware interface.

Remark The services loaded by the models can be different at runtime and design time if they are exported with ExportService instead of Export, like so:

[ExportService(ServiceContext.DesignTime, typeof(IUsersService))]
public class DesignTimeUsersService : IUsersService
{ /* ... */ }

There are multiple variations of the Foreach classes which can be used to observe an IEnumerable or an ObservableCollection and take whatever action is appropriate when something in the collection changes.


Hopefully this article and the samples it contains have shown what a composite application architecture looks like and how this library makes it easy to solve the key problems most often met by a composite application:
  • Resolving service dependencies using MEF.
  • Finding the DataView for a DataModel with DataControl or Composition.GetView().
  • Implementing common MVVM patterns: ICommand (with DelegateCommand and ForeachCommand) and disconnected messaging (with Notifications).
  • Implementing data binding validation with ValidationAttribute(s) in a subclass of ViewModelBase.


This library will work with the Client Profile for .NET4 and Silverlight 4.

If it needs to be ported to .NET 3.5, there are two obstacles:

  • MEF, which is on CodePlex.
  • And the Validator class, used in ViewModelBase to validate the properties from the ValidationAttribute(s), i.e. to implement the IDataErrorInfo interface. Only two methods of the Validator need to be reimplemented.


MEF on CodePlex (it’s also part of .NET4 & Silverlight 4)

Prism, aka the composite application library

Josh Smith on MVVM

Rob Eisenberg on MVVM

The MEFedMVVM library


Categories: WPF

Love Visual Studio 2010, hate the new help?

Download the new H3 Viewer[^]
and it will bring back the desktop help system!
More importantly, it brings back the index and TOC!!!


About project management

I have been working as a programmer for 10 years. In those 10 years I have had a “proper” project manager only twice: when I started, and now. Unfortunately, when I started, some of the project managers were in a position of power (compared to my position) and behaved more as commanders than… “project managers”. Which rather put me off the role.

Now the project manager is quite good and I benefit from his many years of experience. And I’m observing intently. And I want to clarify what this project management thing is. Because, I suspect, it might be useful! smile_teeth

And maybe I’d like to be a project manager too one day, if only for financial reasons! smile_wink


As I see it now, a good project manager, in his role as project manager, has the following responsibilities: clarify the project’s goals, and schedule everyone’s work so that we all progress smoothly at a regular pace, satisfy customer expectations (for deadlines and features) AND satisfy developer expectations (for implementation strategies).

He doesn’t even need to be the architect. For example in our case, since we use Prism, WPF, MVVM and other technologies (which, I’m glad to say, I was quite instrumental in bringing in and implementing the core of), our project quite naturally divides itself into simple uncoupled parts (save for one service which we have identified as needing further simplification), and there isn’t much architecture, or need for it, happening now. We just expand on our current model.


Anyway, what is our good project manager doing (his name is Patrick Kealy, by the way)?
Well, mostly, he is writing down our many discussions in great detail and using Microsoft Project to schedule everyone’s tasks to the day (approximately). Which is harder than it seems… The end result, when he comes with our tasks for the next few days, is that it all seems simple and easy!

Better, if someone raises an issue (I’m good at raising issues, by the way! smile_tongue) we talk about it. If there are clear criteria for something, we act on them. If it would take too much time, we put it down on the schedule so that the deadlines are still met, but the developer knows that his concern will be addressed (after release v0.xxx). I think this is very important for a harmonious, hence productive, work environment.

In a few words, it’s not so much about deciding what to do, but more about making clear what we decided and proposing an efficient roadmap.


I think his work has provided some focus to Agdat (which was erring somewhat in random directions before he started). And now the application starts to look like something!

It’s late now, so I will just leave you with a yummy screenshot:


Categories: General