A blog which discusses various GPU applications including visualization, GPGPU and games.

Saturday, January 31, 2009

Direct2D and DirectWrite: An example

I heavily use Direct3D in my Windows graphics programs. So why would I want to use an API meant for 2D graphics? The answer is simple: text rendering.

Text rendering has always been a massive pain in 3D APIs, and rightfully so: why should a low-level GPU API care about text? One solution is to write one's own text rendering class. I would much rather use a standard library, though. That's where Direct2D and DirectWrite come into play.

As I mentioned in a previous post, Direct2D is actually independent of Direct3D. You can write an application that uses only Direct2D and never touches Direct3D in your code. (Direct2D, of course, uses Direct3D internally.) This may sound inflexible if you want to mix it with Direct3D, but the situation is quite the opposite: thanks to DXGI, it is possible to obtain the DXGI surface representation of a Direct3D texture and hand it off to Direct2D.
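
To make this concrete, here is a minimal sketch of the hand-off, assuming an existing Direct2D factory (m_spD2DFactory) and a Direct3D texture (m_spTexture) created with a Direct2D-compatible format such as DXGI_FORMAT_B8G8R8A8_UNORM; the names are mine and the error handling is abbreviated.

// Query the DXGI surface that backs the Direct3D texture.
CComPtr<IDXGISurface> spSurface;
if (FAILED(m_spTexture->QueryInterface(__uuidof(IDXGISurface), reinterpret_cast<void **>(&spSurface)))) exit(EXIT_FAILURE);

// Wrap the surface in a Direct2D render target so Direct2D can draw into it.
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED));

CComPtr<ID2D1RenderTarget> spD2DTarget;
if (FAILED(m_spD2DFactory->CreateDxgiSurfaceRenderTarget(spSurface, &props, &spD2DTarget))) exit(EXIT_FAILURE);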

Let me demonstrate with a simple example. In this program (a 3D vector field plotter, as a matter of fact), I am interested in displaying the time it takes to render a frame, along with a simple performance log graph. I was able to eliminate a good chunk of D3D code and replace it with a small section of D2D code.

[Screenshot: frame render time and performance-log graph drawn with Direct2D]

Big deal, right? Check this out.

[Screenshot: text rendered in the Gabriola font with DirectWrite]

Would you want to try rendering Gabriola by hand in a 3D graphics API? :)

How about a thicker line in the performance graph, and with a dashed stroke style?

[Screenshot: the performance graph redrawn with a thicker, dashed stroke]

I plan on posting more complete code snippets later, but for now I will get right down to the fundamental code.

if (FAILED(DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
    reinterpret_cast<IUnknown **>(&m_spDWriteFactory)))) exit(EXIT_FAILURE);

m_spDWriteFactory->CreateTextFormat(L"Gabriola", NULL, DWRITE_FONT_WEIGHT_NORMAL,
    DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL, 30, L"", &m_spTextFormat);

The first eye-pleasing call creates a DirectWrite factory object. We then use the factory to create a text format, which encapsulates basic properties such as the font family, weight, style, and size.

We then use Direct2D to draw the string.

wstring mystring = L"Hello, world!";
m_spBackBufferRT->DrawText(mystring.c_str(), static_cast<UINT32>(mystring.length()),
    m_spTextFormat, D2D1::RectF(0.0f, 0.0f, 150.0f, 50.0f), m_spTextBrush,
    D2D1_DRAW_TEXT_OPTIONS_NO_CLIP);

As can be seen above, one of the arguments to the DrawText function is the DirectWrite text format we created earlier. In a future post I will cover in greater detail how I obtained m_spBackBufferRT.
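
As for the thicker, dashed graph line shown earlier, that comes down to an ID2D1StrokeStyle plus a stroke width on the draw call. Here is a minimal sketch, reusing the brush from above and assuming a Direct2D factory named m_spD2DFactory; the coordinates and dash parameters are placeholders, not the exact values from my program.

CComPtr<ID2D1StrokeStyle> spDashedStyle;
m_spD2DFactory->CreateStrokeStyle(
    D2D1::StrokeStyleProperties(
        D2D1_CAP_STYLE_ROUND, D2D1_CAP_STYLE_ROUND, D2D1_CAP_STYLE_ROUND,
        D2D1_LINE_JOIN_ROUND, 10.0f,
        D2D1_DASH_STYLE_DASH, 0.0f),
    NULL, 0, &spDashedStyle);

// Draw one segment of the performance graph 3 DIPs thick with the dashed style.
m_spBackBufferRT->DrawLine(D2D1::Point2F(0.0f, 90.0f), D2D1::Point2F(30.0f, 70.0f),
    m_spTextBrush, 3.0f, spDashedStyle);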

I have not forgotten about OpenGL: in such situations I highly recommend the QuesoGLC text renderer. I expect to write up QuesoGLC examples as well in the future.

Monday, January 26, 2009

Direct2D and DirectWrite

I recently installed the Windows 7 SDK so I could experiment with Direct2D and DirectWrite. I am very pleased with the APIs; they definitely simplify the text handling in my Direct3D applications.

One of the great design points of Direct2D is that it can interoperate with Direct3D textures through DXGI.

Expect code snippets and screenshots soon!

Monday, January 5, 2009

D3D11 Types

Direct3D 11 introduces a number of new datatypes to HLSL.

New read-only types:
  • ByteAddressBuffer
  • StructuredBuffer
New read-write types:
  • RWByteAddressBuffer
  • RWStructuredBuffer
  • RWBuffer
  • RWTexture1D/RWTexture1DArray
  • RWTexture2D/RWTexture2DArray
  • RWTexture3D
New stream types:
  • AppendByteAddressBuffer/ConsumeByteAddressBuffer
  • AppendStructuredBuffer/ConsumeStructuredBuffer
The (RW)ByteAddressBuffer type is a raw buffer addressed by byte offset, with offsets aligned to DWORD (four-byte) boundaries. What this means is that I can pack an arbitrary mix of scalar and struct data into a single buffer and then pull it back out with a cast.

The (RW)StructuredBuffer type extends the Buffer type by allowing arbitrary structures to be stored as elements. For example, we might wish to store per-instance data in a structure for cleaner code:

struct Vert
{
    float3 color1, color2;
    float mixamount;
    float3 deform;
};

StructuredBuffer<Vert> data;

PS_INPUT VS(VS_INPUT input)
{
    Vert v = data[input.instanceid];
    // Use v to compute vertex properties
}
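
On the C++ side, a structured buffer is just a buffer created with a structure stride and viewed with DXGI_FORMAT_UNKNOWN. Here is a rough sketch of how the StructuredBuffer<Vert> above might be created and exposed to the shader, assuming a matching C++ Vert struct, a D3D11 device m_spDevice, and an array vertData of instanceCount Vert structs (all names of my choosing):

// Immutable buffer holding one Vert per instance.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth = sizeof(Vert) * instanceCount;
desc.Usage = D3D11_USAGE_IMMUTABLE;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(Vert);

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = vertData;

CComPtr<ID3D11Buffer> spBuffer;
if (FAILED(m_spDevice->CreateBuffer(&desc, &init, &spBuffer))) exit(EXIT_FAILURE);

// The shader resource view treats the buffer as an array of structures.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_UNKNOWN;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
srvDesc.Buffer.FirstElement = 0;
srvDesc.Buffer.NumElements = instanceCount;

CComPtr<ID3D11ShaderResourceView> spSRV;
if (FAILED(m_spDevice->CreateShaderResourceView(spBuffer, &srvDesc, &spSRV))) exit(EXIT_FAILURE);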

The RWBuffer type simply extends the Buffer type by allowing reads and writes in pixel and compute shaders.

Next, we have the read-write texture types. These open up exciting new possibilities and will eliminate the need to ping-pong between two textures in some cases. These types are addressed by pixel coordinate.

Finally, we have the stream data types. The stream types are meant for applications that deal with variable amounts of data that need not preserve ordering of records. For example, say we want to emit per-fragment data from the pixel shader, but not into a texture. We can define a structure that describes a fragment, and then we can emit the structures.

struct Fragment
{
    float3 color;
    float depth;
    uint2 location;
};

AppendStructuredBuffer<Fragment> data;

void PS(...)
{
    Fragment f;
    f.color = ...;
    f.depth = ...;
    f.location = ...;
    data.Append(f);
}

Now say we would like to process each fragment in a compute shader.

struct Fragment
{
    float3 color;
    float depth;
    uint2 location;
};

ConsumeStructuredBuffer<Fragment> data;
RWTexture2D<float4> frame;

void CS(...)
{
    Fragment f = data.Consume();
    // Compute result and write to texture
    frame[f.location] = ...;
}
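
For completeness, the append/consume buffer itself is an ordinary structured buffer; what makes it appendable is the D3D11_BUFFER_UAV_FLAG_APPEND flag on its unordered access view, and the hidden counter is managed through the initial-counts argument when the view is bound. A rough sketch, again with names of my own choosing (m_spDevice, m_spContext, maxFragments, and a matching C++ Fragment struct):

// Structured buffer that backs both the append and consume views.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth = sizeof(Fragment) * maxFragments;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(Fragment);

CComPtr<ID3D11Buffer> spBuffer;
if (FAILED(m_spDevice->CreateBuffer(&desc, NULL, &spBuffer))) exit(EXIT_FAILURE);

// The UAV carries the append flag; the same view serves Append and Consume in HLSL.
D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format = DXGI_FORMAT_UNKNOWN;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.NumElements = maxFragments;
uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;

CComPtr<ID3D11UnorderedAccessView> spUAV;
if (FAILED(m_spDevice->CreateUnorderedAccessView(spBuffer, &uavDesc, &spUAV))) exit(EXIT_FAILURE);

// Bind for the consume pass; -1 keeps the element count left behind by the append pass
// (binding with 0 instead would reset the hidden counter before a new append pass).
ID3D11UnorderedAccessView *pUAV = spUAV;
UINT keepCount = (UINT)-1;
m_spContext->CSSetUnorderedAccessViews(0, 1, &pUAV, &keepCount);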

Friday, January 2, 2009

2D Imposters

A 2D imposter is a simple stand-in for a geometric shape. Why would we care about these? Imagine rendering millions of circles. Not only would the vertex shader become a bottleneck, but the edge quality would be limited by multisampling. Instead, we can use imposters.

In this example, I am rendering each circle as a simple quadrilateral. The real magic happens in the pixel shader.

[Screenshot: circles rendered as quad imposters]

Let's take a look at the shader code.

cbuffer shapes
{
    float2 square[4] =
    {
        float2(-1.0f, -1.0f),
        float2(-1.0f, 1.0f),
        float2(1.0f, -1.0f),
        float2(1.0f, 1.0f)
    };
};

PS_INPUT VS(VS_INPUT input)
{
    PS_INPUT output;

    float3 vpos = float3(0.4f * square[input.vertid], 0.0f) + 2.6f * input.position;

    output.position = mul(float4(vpos, 1.0f), mul(World, Projection));
    output.color = input.color;
    output.qpos = 1.1f * square[input.vertid];

    return output;
}

float4 PS(PS_INPUT input) : SV_Target
{
    float dist = length(input.qpos);
    float fw = 0.8f * fwidth(dist);
    float circ = smoothstep(fw, -fw, dist - 1.0f);
    return float4(input.color, circ);
}

The utility of imposter shapes shows through when looking at the vertex shader. The vertex shader need only run 4 times per circle. As you can see, I am not only transforming the square's vertices, I am also passing the raw vertices over to the pixel shader. The pixel shader uses this to measure the distance between the center of the square and each fragment in order to determine which fragments should be visible and which should be blended away.
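
For reference, all of the circles go down in a single instanced draw call: four vertices per instance, one instance per circle. Something along these lines, assuming a Direct3D 10 style device; the variable names are placeholders.

// Four corner vertices per circle; per-instance data supplies the center position and color.
g_pDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
g_pDevice->DrawInstanced(4, circleCount, 0, 0);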

Why use imposters? Consider a visualization application that needs to render thousands of circles very quickly. This method renders high-quality circles efficiently. In a future post, I will show how it can be adapted to rendering spheres as quadrilaterals.
