Formeln

Saturday, March 30, 2013

Calculating Frames per Second

Motivation

One widely used metric for measuring the performance of a render engine is frames per second (fps). In this tutorial I will show you how to implement a class that performs this measurement.

FrameCounter Class

Source Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;

namespace Apparat
{
    public class FrameCounter
    {
        [DllImport("Kernel32.dll")]
        private static extern bool QueryPerformanceCounter(
            out long lpPerformanceCount);

        [DllImport("Kernel32.dll")]
        private static extern bool QueryPerformanceFrequency(
            out long lpFrequency);

        #region Singleton Pattern
        private static FrameCounter instance = null;
        public static FrameCounter Instance
        {
            get
            {
                if (instance == null)
                {
                    instance = new FrameCounter();
                }
                return instance;
            }
        }
        #endregion

        #region Constructor
        private FrameCounter()
        {
            msPerTick = (float)MillisecondsPerTick;
            now = Counter; // initialize, so the first delta in Count() is not huge
        }
        #endregion

        float msPerTick = 0.0f;

        long frequency;
        public long Frequency
        {
            get
            {
                QueryPerformanceFrequency(out frequency);
                return frequency;
            }
        }

        long counter;
        public long Counter
        {
            get
            {
                QueryPerformanceCounter(out counter);
                return counter;
            }
        }

        public double MillisecondsPerTick
        {
            get
            {
                return (1000L) / (double)Frequency;
            }
        }

        public delegate void FPSCalculatedHandler(string fps);
        public event FPSCalculatedHandler FPSCalculatedEvent;

        long now;
        long last;
        long dc;
        float dt;
        float elapsedMilliseconds = 0.0f;
        int numFrames = 0;
        float msToTrigger = 1000.0f;

        public float Count()
        {
            last = now;
            now = Counter;
            dc = now - last;
            numFrames++;

            dt = dc * msPerTick;

            elapsedMilliseconds += dt;

            if (elapsedMilliseconds > msToTrigger)
            {
                float seconds = elapsedMilliseconds / 1000.0f;
                float fps = numFrames / seconds;

                if (FPSCalculatedEvent != null)
                    FPSCalculatedEvent("fps: " + fps.ToString("0.00"));
               
                elapsedMilliseconds = 0.0f;
                numFrames = 0;
            }

            return dt;
        }
    }
}

QueryPerformanceFrequency and QueryPerformanceCounter

To measure the milliseconds elapsed during a render cycle, I use the two native methods QueryPerformanceFrequency and QueryPerformanceCounter. QueryPerformanceFrequency returns how many ticks the high-resolution performance counter makes per second. You have to determine this frequency only once, because the value does not change while the system is running. QueryPerformanceCounter returns the current tick count of this counter. To measure the number of ticks during a certain time span, you read the counter at the beginning and at the end of that span and calculate the difference between the two values.

Because you know how many ticks your system makes in a second, you can calculate the time span between the two measurements.
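As a side note: if you want to avoid P/Invoke, the managed System.Diagnostics.Stopwatch class wraps the same high-resolution counter. Its Frequency field and GetTimestamp method correspond to QueryPerformanceFrequency and QueryPerformanceCounter (when Stopwatch.IsHighResolution is true). A minimal sketch of the same measurement:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class TickDemo
{
    static void Main()
    {
        // Milliseconds per tick of the high-resolution counter.
        double msPerTick = 1000.0 / Stopwatch.Frequency;

        long start = Stopwatch.GetTimestamp(); // like QueryPerformanceCounter
        Thread.Sleep(50);                      // stand-in for work in a render cycle
        long end = Stopwatch.GetTimestamp();

        // Difference in ticks, converted to milliseconds.
        double elapsedMs = (end - start) * msPerTick;
        Console.WriteLine("elapsed: " + elapsedMs.ToString("0.0") + " ms");
    }
}
```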

Count Function 

Let's have a look at the Count function in detail:

long now;
long last;
long dc;
float dt;
float elapsedMilliseconds = 0.0f;
int numFrames = 0;
float msToTrigger = 1000.0f;

public float Count()
{
  last = now;
  now = Counter;
  dc = now - last;
  numFrames++;

  dt = dc * msPerTick;

  elapsedMilliseconds += dt;

  if (elapsedMilliseconds > msToTrigger)
  {
    float seconds = elapsedMilliseconds / 1000.0f;
    float fps = numFrames / seconds;

    if (FPSCalculatedEvent != null)
      FPSCalculatedEvent("fps: " + fps.ToString("0.00"));
               
    elapsedMilliseconds = 0.0f;
    numFrames = 0;
  }

  return dt;
}

Every time this function is called, I get the current value of the counter via the Counter property. The previous value of the counter is kept in the variable last, and I calculate the difference dc between the two counter values, which is the number of ticks performed between two calls to this function. Because I calculated how many milliseconds one tick takes (msPerTick), I can multiply dc by msPerTick to get the time span dt in milliseconds between two calls of this function.

The time span dt is added to the variable elapsedMilliseconds. Furthermore, I increment the variable numFrames with every call to the Count function. Once elapsedMilliseconds exceeds the predefined time span msToTrigger, I calculate the frames per second fps and fire the event FPSCalculatedEvent.
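To make the arithmetic in the trigger branch concrete, here is the calculation in isolation, with made-up numbers (120 frames counted while 2000 ms elapsed):

```csharp
using System;

class FpsMath
{
    static void Main()
    {
        int numFrames = 120;                 // frames counted since the last trigger
        float elapsedMilliseconds = 2000.0f; // accumulated dt values

        float seconds = elapsedMilliseconds / 1000.0f; // 2.0 s
        float fps = numFrames / seconds;               // 120 / 2.0 = 60

        Console.WriteLine("fps: " + fps);
    }
}
```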

I call the Count function in every render cycle in the RenderManager:

FrameCounter fc = FrameCounter.Instance;

public void renderScene()
{
  while (true)
  {
    fc.Count();
               
    DeviceManager dm = DeviceManager.Instance;
    dm.context.ClearRenderTargetView(dm.renderTarget, new Color4(0.75f, 0.75f, 0.75f));

    Scene.Instance.render();

    dm.swapChain.Present(syncInterval, PresentFlags.None);
  }
}

FPSCalculatedEvent

I defined a delegate and an event in the FrameCounter class:

public delegate void FPSCalculatedHandler(string fps);
public event FPSCalculatedHandler FPSCalculatedEvent;

The event is fired in the Count function once the frames per second have been calculated. I'll get back to this delegate and event when it comes to displaying the fps on the RenderControl.

SyncInterval

Let's take a look at the render loop again:

public void renderScene()
{
  while (true)
  {
    fc.Count();
               
    DeviceManager dm = DeviceManager.Instance;
    dm.context.ClearRenderTargetView(dm.renderTarget, new Color4(0.75f, 0.75f, 0.75f));

    Scene.Instance.render();

    dm.swapChain.Present(syncInterval, PresentFlags.None);
  }
}

I introduced the variable syncInterval when calling the Present method of the swap chain.
The value of syncInterval determines how presentation is synchronized with the vertical blank.
If syncInterval is 0, no synchronization takes place; if syncInterval is 1, 2, 3 or 4, presentation is synchronized after the nth vertical blank (see the MSDN documentation of Present).

Furthermore, I implemented a method in the RenderManager to toggle the syncInterval externally:


int syncInterval = 1;

public void SwitchSyncInterval()
{
  if (syncInterval == 0)
  {
    syncInterval = 1;
  }
  else if (syncInterval == 1)
  {
    syncInterval = 0;
  }
}

The SwitchSyncInterval method is called in the RenderControl, so you can toggle the syncInterval with the F2 key:

private void RenderControl_KeyUp(object sender, KeyEventArgs e)
{
  if (e.KeyCode == Keys.F1)
  {
    CameraManager.Instance.CycleCameras();
  }
  else if (e.KeyCode == Keys.F2)
  {
    RenderManager.Instance.SwitchSyncInterval();
  }

  CameraManager.Instance.currentCamera.KeyUp(sender, e);
}

Displaying Frames per Second

I added a label control called DebugTextLabel to the RenderControl in order to display a string on top of it. Rendering text seems to be a bit more complicated with DirectX 11 than it was with DirectX 9. (If you know a good reference for rendering text in DirectX 11, please leave a comment.) I will use this interim solution for displaying text until I have written a parser for TrueType fonts ;)

The delegate and event for publishing the calculated frames per second are defined in the FrameCounter class (see above), and the event is fired when the frames per second have been calculated.

The method Instance_FPSCalculatedEvent in the class RenderControl is a handler for the FPSCalculatedEvent and is registered in the constructor of the RenderControl:

public RenderControl()
{
  InitializeComponent();
  this.MouseWheel += new MouseEventHandler(RenderControl_MouseWheel);
  FrameCounter.Instance.FPSCalculatedEvent += new FrameCounter.FPSCalculatedHandler(Instance_FPSCalculatedEvent);
}

This is the code for the handler Instance_FPSCalculatedEvent in the RenderControl:


delegate void setFPS(string fps);
void Instance_FPSCalculatedEvent(string fps)
{
  if (this.InvokeRequired)
  {
    setFPS d = new setFPS(Instance_FPSCalculatedEvent);
    this.Invoke(d, new object[] { fps });
  }
  else
  {
    this.DebugTextLabel.Text = fps;
  }
}

The label is set to the string fps that comes as an argument of the event. Because the render loop runs on a different thread than the one the DebugTextLabel was created on, and we try to set this control from the render loop thread, we have to check the InvokeRequired property of the RenderControl and marshal the call to the UI thread if necessary.

Results

Now we can display the current frame rate of the render engine:

~60 Frames per Second with SyncInterval = 1

Several thousand Frames per Second with SyncInterval = 0

To play around a bit, insert a Thread.Sleep(ms) statement into the renderScene method of the RenderManager class and observe how the frame rate changes with different values of ms, depending on whether you use syncInterval = 1 or syncInterval = 0. Also try setting the syncInterval in the render loop to 2, 3 or 4 and observe the effect on the frames per second.

The source code to this tutorial is here.

Have fun!

Sunday, March 24, 2013

The Ego Camera

With an Ego Camera (a first-person camera) you use the mouse to control the pitch and yaw of the camera and the WSAD keys to move forward and backward and to strafe left and right. I constrain the pitch of the camera to +90 and -90 degrees.

Abstract Camera Class


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using SlimDX;

namespace Apparat
{
    public abstract class Camera
    {
        public Vector3 eye;
        public Vector3 target;
        public Vector3 up;

        public Matrix view = Matrix.Identity;
        public Matrix perspective = Matrix.Identity;
        public Matrix viewPerspective = Matrix.Identity;

        public Matrix View
        {
            get { return view; }
        }

        public void setPerspective(float fov, float aspect, float znear, float zfar)
        {
            perspective = Matrix.PerspectiveFovLH(fov, aspect, znear, zfar);
        }

        public void setView(Vector3 eye, Vector3 target, Vector3 up)
        {
            view = Matrix.LookAtLH(eye, target, up);
        }

        public Matrix Perspective
        {
            get { return perspective; }
        }

        public Matrix ViewPerspective
        {
            get { return view * perspective; }
        }

        public bool dragging = false;
        public int startX = 0;
        public int deltaX = 0;

        public int startY = 0;
        public int deltaY = 0;

        public abstract void MouseUp(object sender, MouseEventArgs e);
        public abstract void MouseDown(object sender, MouseEventArgs e);
        public abstract void MouseMove(object sender, MouseEventArgs e);
        public abstract void MouseWheel(object sender, MouseEventArgs e);

        public abstract void KeyPress(object sender, KeyPressEventArgs e);
        public abstract void KeyDown(object sender, KeyEventArgs e);
        public abstract void KeyUp(object sender, KeyEventArgs e);
    }
}

Because we need the WSAD keys for strafing, the abstract class needs the declarations of the handlers for keyboard input. These also have to be implemented in the OrbitCamera and in the OrbitPanCamera, but remain empty there.

Ego Camera Code


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.DXGI;

namespace Apparat
{
    public class EgoCamera : Camera
    {
        Vector3 look;

        public EgoCamera()
        {
            look = new Vector3(1, 0, 0);
            up = new Vector3(0, 1, 0);
            eye = new Vector3(0, 1, 0);
            target = eye + look;

            view = Matrix.LookAtLH(eye, target, up);
            perspective = Matrix.PerspectiveFovLH((float)Math.PI / 4, 1.3f, 0.0f, 1.0f);
        }

        public new Matrix ViewPerspective
        {
            get
            {
                if (strafingLeft)
                    strafe(1);

                if (strafingRight)
                    strafe(-1);

                if (movingForward)
                    move(1);

                if (movingBack)
                    move(-1);
                
                return view * perspective;
            }
        }

        public void yaw(int x)
        {
            Matrix rot = Matrix.RotationY(x / 100.0f);
            look = Vector3.TransformCoordinate(look, rot);

            target = eye + look;
            view = Matrix.LookAtLH(eye, target, up);
        }


        float pitchVal = 0.0f;
        public void pitch(int y)
        {
            Vector3 axis = Vector3.Cross(up, look);
            float rotation = y / 100.0f;
            pitchVal = pitchVal + rotation;

            float halfPi = (float)Math.PI / 2.0f;

            if (pitchVal < -halfPi)
            {
                pitchVal = -halfPi;
                rotation = 0;
            }
            if (pitchVal > halfPi)
            {
                pitchVal = halfPi;
                rotation = 0;
            }

            Matrix rot = Matrix.RotationAxis(axis, rotation);

            look = Vector3.TransformCoordinate(look, rot);
            
            look.Normalize();
            
            target = eye + look;
            view = Matrix.LookAtLH(eye, target, up);
        }

        public override void MouseUp(object sender, System.Windows.Forms.MouseEventArgs e)
        {
            dragging = false;
        }

        public override void MouseDown(object sender, System.Windows.Forms.MouseEventArgs e)
        {
            dragging = true;
            startX = e.X;
            startY = e.Y;
        }

        public override void MouseMove(object sender, System.Windows.Forms.MouseEventArgs e)
        {
            if (dragging)
            {
                int currentX = e.X;
                deltaX = startX - currentX;
                startX = currentX;

                int currentY = e.Y;
                deltaY = startY - currentY;
                startY = currentY;

                if (e.Button == System.Windows.Forms.MouseButtons.Left)
                {
                    pitch(deltaY);
                    yaw(-deltaX);
                }
            }
        }

        public void strafe(int val)
        {
            Vector3 axis = Vector3.Cross(look, up);
            Matrix scale = Matrix.Scaling(0.1f, 0.1f, 0.1f);
            axis = Vector3.TransformCoordinate(axis, scale);

            if (val > 0)
            {
                eye = eye + axis;
            }
            else
            {
                eye = eye - axis;
            }
            
            target = eye + look;
            view = Matrix.LookAtLH(eye, target, up);
        }

        public void move(int val)
        {
            Vector3 tempLook = look;
            Matrix scale = Matrix.Scaling(0.1f, 0.1f, 0.1f);
            tempLook = Vector3.TransformCoordinate(tempLook, scale);


            if (val > 0)
            {
                eye = eye + tempLook;
            }
            else
            {
                eye = eye - tempLook;
            }
            
            target = eye + look;
            view = Matrix.LookAtLH(eye, target, up);
        }

        // Nothing to do here
        public override void MouseWheel(object sender, System.Windows.Forms.MouseEventArgs e){}



        public override void KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e)
        {
        }

        bool strafingLeft = false;
        bool strafingRight = false;
        bool movingForward = false;
        bool movingBack = false;

        public override void KeyDown(object sender, System.Windows.Forms.KeyEventArgs e)
        {
            if (e.KeyCode == System.Windows.Forms.Keys.W)
            {
                movingForward = true;
            }
            else if (e.KeyCode == System.Windows.Forms.Keys.S)
            {
                movingBack = true;
            }
            else if (e.KeyCode == System.Windows.Forms.Keys.A)
            {
                strafingLeft = true;
            }
            else if (e.KeyCode == System.Windows.Forms.Keys.D)
            {
                strafingRight = true;
            }
        }

        public override void KeyUp(object sender, System.Windows.Forms.KeyEventArgs e)
        {
            if (e.KeyCode == System.Windows.Forms.Keys.W)
            {
                movingForward = false;
            }
            else if (e.KeyCode == System.Windows.Forms.Keys.S)
            {
                movingBack = false;
            }
            else if (e.KeyCode == System.Windows.Forms.Keys.A)
            {
                strafingLeft = false;
            }
            else if (e.KeyCode == System.Windows.Forms.Keys.D)
            {
                strafingRight = false;
            }
        }
    }
}

Key Handling

In the lowest section of the code I implemented four booleans to flag whether a key is being held down. As long as a key is pressed, these variables stay true. You may wonder why I don't use the KeyPress handler for this. As soon as a key is pressed, the KeyPress event is fired and the KeyPress handler is called. If the key remains pressed, the event is fired repeatedly and therefore the handler is called repeatedly. The problem is: the event is fired once when a key is pressed, followed by a pause, and then the event is fired at a low frequency of about 15 Hz (roughly estimated, I haven't found any reference).

This video illustrates the issue:

I opened Notepad and kept the 'a' key pressed. After a short pause, the event keeps being fired at a low frequency.

As the render loop runs at 60 Hz or more, using the KeyPress event to trigger the strafing methods would result in a stuttering motion of the camera, as the ViewPerspective matrix would only be updated roughly every fourth frame (again, roughly estimated).

The ViewPerspective Property

Also observe that I hid the ViewPerspective property of the abstract Camera class with a new implementation (the new keyword shadows the base property rather than overriding it). The objects in the scene read this property in every render cycle, so to make sure they get an updated ViewPerspective matrix, the update for the movement and strafing happens here.

Warning: this is not a good implementation and is only meant to prevent the camera from stuttering in this tutorial. The problem with the current approach is that the objects in the scene trigger a transformation by reading this property, so every access to this property results in a transformation of the camera. With many objects this would have noticeable effects on the displayed scene. In a later tutorial, the update of the ViewPerspective matrix will be moved to the beginning of the render loop and performed once, so that all objects in the scene see the same ViewPerspective matrix. This approach also has the advantage that expensive calculations are performed only once per render loop.

Strafing

Strafing is a translation along the camera's x-axis and z-axis. Up to now we just used the eye, target and up vectors for creating the view matrix of the camera. In order to implement strafing, I need two additional vectors: look and axis. look is the direction the camera is looking in, and axis is orthogonal to up and look.
Moving forward and backward then means adding a scaled look vector to the eye vector. Strafing left and right is accomplished by adding a scaled axis vector to the eye vector. To keep the creation of the view matrix consistent, the target vector has to be shifted by the same offset as the eye vector.
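The vector math behind strafing can be tried outside the engine. The sketch below uses System.Numerics.Vector3 in place of SlimDX's Vector3 (the Cross API is analogous), with illustrative values; the step size of 0.1 mirrors the scaling in the strafe and move methods:

```csharp
using System;
using System.Numerics;

class StrafeMath
{
    static void Main()
    {
        Vector3 look = new Vector3(1, 0, 0); // camera looks along +x
        Vector3 up   = new Vector3(0, 1, 0);
        Vector3 eye  = new Vector3(0, 1, 0);

        // The side axis is orthogonal to both look and up.
        Vector3 axis = Vector3.Cross(look, up); // (0, 0, 1) for these values

        float step = 0.1f;

        // Strafing: translate the eye along the scaled side axis...
        eye += axis * step;

        // ...and rebuild the target from the shifted eye, so the view
        // matrix constructed from eye/target/up stays consistent.
        Vector3 target = eye + look;

        Console.WriteLine("eye: " + eye + ", target: " + target);
    }
}
```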

Looking

To look around, I take the look vector and rotate it around the camera's y-axis for looking left and right. In order to look up and down, the look vector is rotated around the camera's current side axis. The y-axis is always the up vector, which isn't touched at all and remains (0, 1, 0) at all times. Because the camera rotates, this side axis has to be recomputed with every rotation around the y-axis; it is therefore computed (it is just called axis in the source code) as the cross product of the up vector and the look vector.
To constrain looking up and down, the pitch angle is limited to +PI/2 and -PI/2.
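The clamping logic on its own looks like this (the numbers are made up for illustration; PI/2 is about 1.5708):

```csharp
using System;

class PitchClamp
{
    static void Main()
    {
        float halfPi = (float)Math.PI / 2.0f;

        float pitchVal = 1.2f; // accumulated pitch so far
        float rotation = 0.5f; // rotation requested this frame

        // Same logic as in EgoCamera.pitch: accumulate first, then
        // clamp and cancel the rotation if the limit is exceeded.
        pitchVal += rotation;  // 1.7, beyond +PI/2
        if (pitchVal > halfPi)
        {
            pitchVal = halfPi;
            rotation = 0;      // the look vector is not rotated this frame
        }

        Console.WriteLine("pitch: " + pitchVal + ", rotation: " + rotation);
    }
}
```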

Camera Manager


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SlimDX;

namespace Apparat
{
    public class CameraManager
    {
        #region Singleton Pattern
        private static CameraManager instance = null;
        public static CameraManager Instance
        {
            get
            {
                if (instance == null)
                {
                    instance = new CameraManager();
                }
                return instance;
            }
        }
        #endregion

        #region Constructor
        private CameraManager() 
        {
            OrbitPanCamera ocp = new OrbitPanCamera();
            OrbitCamera oc = new OrbitCamera();
            EgoCamera ec = new EgoCamera();
            cameras.Add(ocp);
            cameras.Add(oc);
            cameras.Add(ec);

            currentIndex = 0;
            currentCamera = cameras[currentIndex];
        }
        #endregion

        List<Camera> cameras = new List<Camera>();

        public Camera currentCamera;
        int currentIndex;

        public Matrix ViewPerspective
        {
            get
            {
                if (currentCamera is EgoCamera)
                {
                    return ((EgoCamera)currentCamera).ViewPerspective;
                }
                else
                {
                    return currentCamera.ViewPerspective;
                }
            
            }
        }

        public string CycleCameras()
        {
            int numCameras = cameras.Count;
            currentIndex = currentIndex + 1;
            if (currentIndex == numCameras)
                currentIndex = 0;
            currentCamera = cameras[currentIndex];
            return currentCamera.ToString();
        }
    }
}

The EgoCamera is added to the CameraManager by creating an instance of it and adding it to the cameras list. I had to add the ViewPerspective property to be able to cast the current camera to EgoCamera if it is of this type. This is necessary to call the ViewPerspective property of the EgoCamera, because I shadowed the ViewPerspective property of the abstract Camera class in the EgoCamera, so accessing it through a Camera reference would call the base implementation.

Results

This video demonstrates the behaviour of the Ego Camera:



At this point I am using constants for the translations and rotations. In order to have defined velocities for these motions, we need to know how much time has passed. This will be addressed in the next tutorial.

You can download the source code of this tutorial here.

Saturday, March 23, 2013

The Camera Manager

In the last tutorial we saw that with a growing number of cameras we need some additional code to manage them. The reasons were: renderables need access to the current camera, the different cameras share a set of variables, and the handling of mouse and keyboard events differs for each camera.

The obvious way to address this problem is to provide either an interface or an abstract class. I chose an abstract class:

Abstract Class Camera


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using SlimDX;

namespace Apparat
{
    public abstract class Camera
    {
        public Vector3 eye;
        public Vector3 target;
        public Vector3 up;

        public Matrix view = Matrix.Identity;
        public Matrix perspective = Matrix.Identity;
        public Matrix viewPerspective = Matrix.Identity;

        public Matrix View
        {
            get { return view; }
        }

        public void setPerspective(float fov, float aspect, float znear, float zfar)
        {
            perspective = Matrix.PerspectiveFovLH(fov, aspect, znear, zfar);
        }

        public void setView(Vector3 eye, Vector3 target, Vector3 up)
        {
            view = Matrix.LookAtLH(eye, target, up);
        }

        public Matrix Perspective
        {
            get { return perspective; }
        }

        public Matrix ViewPerspective
        {
            get { return view * perspective; }
        }

        public bool dragging = false;
        public int startX = 0;
        public int deltaX = 0;

        public int startY = 0;
        public int deltaY = 0;

        public abstract void MouseUp(object sender, MouseEventArgs e);
        public abstract void MouseDown(object sender, MouseEventArgs e);
        public abstract void MouseMove(object sender, MouseEventArgs e);
        public abstract void MouseWheel(object sender, MouseEventArgs e);
    }
}

These are the variables and methods all cameras share. Cameras deriving from this abstract class have to implement the handlers for interacting with the control, like MouseUp. I deleted the corresponding variables and methods from the OrbitCamera and OrbitPanCamera classes and overrode the handlers there. For the sake of brevity I will not post the code here but refer to the source code at the bottom of this tutorial.

CameraManager Class


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Apparat
{
    public class CameraManager
    {
        #region Singleton Pattern
        private static CameraManager instance = null;
        public static CameraManager Instance
        {
            get
            {
                if (instance == null)
                {
                    instance = new CameraManager();
                }
                return instance;
            }
        }
        #endregion

        #region Constructor
        private CameraManager() 
        {
            OrbitPanCamera ocp = new OrbitPanCamera();
            OrbitCamera oc = new OrbitCamera();
            cameras.Add(ocp);
            cameras.Add(oc);

            currentIndex = 0;
            currentCamera = cameras[currentIndex];
        }
        #endregion

        List<Camera> cameras = new List<Camera>();

        public Camera currentCamera;
        int currentIndex;

        public string CycleCameras()
        {
            int numCameras = cameras.Count;
            currentIndex = currentIndex + 1;
            if (currentIndex == numCameras)
                currentIndex = 0;
            currentCamera = cameras[currentIndex];
            return currentCamera.ToString();
        }
    }
}

The CameraManager is now responsible for creating the cameras and uses the Singleton pattern, as it is the only object the rest of the engine talks to when cameras need to be accessed. Consequently, the OrbitCamera and the OrbitPanCamera are no longer Singletons.

The CameraManager holds a list of cameras, which I populate in its constructor. In order to change the camera, I added the method CycleCameras. The engine can gain access to the current camera via the currentCamera variable with CameraManager.Instance.currentCamera.
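As a side note, the wrap-around in CycleCameras can be written more compactly with the modulo operator. A standalone sketch, with strings standing in for the camera objects:

```csharp
using System;
using System.Collections.Generic;

class CycleDemo
{
    static void Main()
    {
        var cameras = new List<string> { "OrbitPanCamera", "OrbitCamera" };
        int currentIndex = 0;

        // (currentIndex + 1) % Count wraps back to 0 automatically,
        // replacing the explicit if-check in CycleCameras.
        currentIndex = (currentIndex + 1) % cameras.Count;
        Console.WriteLine(cameras[currentIndex]); // OrbitCamera

        currentIndex = (currentIndex + 1) % cameras.Count;
        Console.WriteLine(cameras[currentIndex]); // OrbitPanCamera
    }
}
```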

RenderControl


using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Apparat.Renderables;

namespace Apparat
{
    public partial class RenderControl : UserControl
    {
        public RenderControl()
        {
            InitializeComponent();
            this.MouseWheel += new MouseEventHandler(RenderControl_MouseWheel);
        }

        public void init()
        {
            DeviceManager.Instance.createDeviceAndSwapChain(this);
            RenderManager.Instance.init();

            Grid grid = new Grid(10, 1.0f);
            TriangleEF triangle = new TriangleEF();
            Scene.Instance.addRenderObject(triangle);
            Scene.Instance.addRenderObject(grid);
        }

        public void shutDown()
        {
            RenderManager.Instance.shutDown();
            DeviceManager.Instance.shutDown();
        }

        private void RenderControl_MouseUp(object sender, MouseEventArgs e)
        {
            CameraManager.Instance.currentCamera.MouseUp(sender, e);
        }

        private void RenderControl_MouseDown(object sender, MouseEventArgs e)
        {
            CameraManager.Instance.currentCamera.MouseDown(sender, e);
        }

        private void RenderControl_MouseMove(object sender, MouseEventArgs e)
        {
            CameraManager.Instance.currentCamera.MouseMove(sender, e);
        }

        void RenderControl_MouseWheel(object sender, MouseEventArgs e)
        {
            CameraManager.Instance.currentCamera.MouseWheel(sender, e);
        }

        private void RenderControl_KeyUp(object sender, KeyEventArgs e)
        {
            if (e.KeyCode == Keys.F1)
            {
                CameraManager.Instance.CycleCameras();
            }
        }
    }
}

In the RenderControl the mouse handlers refer to the current camera of the CameraManager and call the corresponding handler. Furthermore, I use the KeyUp handler and the F1 key to cycle through the cameras.

Renderables

Now the render methods of the Renderables have to be updated as in this example, where the ViewPerspective matrix is obtained via the CameraManager.

public override void render()
{
  Matrix ViewPerspective = CameraManager.Instance.currentCamera.ViewPerspective;
  tmat.SetMatrix(ViewPerspective);

  // configure the Input Assembler portion of the pipeline with the vertex data
  DeviceManager.Instance.context.InputAssembler.InputLayout = layout;
  DeviceManager.Instance.context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineList;
  DeviceManager.Instance.context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, 12, 0));

  technique = effect.GetTechniqueByName("Render");

  EffectTechniqueDescription techDesc;
  techDesc = technique.Description;

  for (int p = 0; p < techDesc.PassCount; ++p)
  {
    technique.GetPassByIndex(p).Apply(DeviceManager.Instance.context);
    DeviceManager.Instance.context.Draw(numVertices, 0);
  }
}

Conclusion

When dealing with several cameras, a CameraManager is needed to handle them in a flexible way. This CameraManager will be extended in future tutorials.

You can download the code for this tutorial here.

Friday, March 22, 2013

Orbit and Pan Camera

In the last tutorial I explained how to implement an Orbit Camera, with which you can circle around a given point. In this tutorial I will explain how to add the capability to pan. Panning means translating the camera parallel to the X-Z plane.
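Before looking at the class, the core idea of panning can be sketched with plain vector math. This sketch uses System.Numerics.Vector3 instead of SlimDX's type, and the step size is illustrative; the key point is that eye and target receive the same offset, so only the camera's position changes, not its orientation:

```csharp
using System;
using System.Numerics;

class PanMath
{
    static void Main()
    {
        Vector3 eye    = new Vector3(4, 2, 0);
        Vector3 target = new Vector3(0, 0, 0);
        Vector3 up     = new Vector3(0, 1, 0);

        // Project the view direction onto the X-Z plane, so that
        // panning never changes the camera's height.
        Vector3 forward = target - eye;
        forward.Y = 0;
        forward = Vector3.Normalize(forward);

        // Side axis for panning left and right.
        Vector3 side = Vector3.Normalize(Vector3.Cross(up, forward));

        // Pan one step to the side: eye AND target get the same offset.
        Vector3 offset = side * 0.1f;
        eye += offset;
        target += offset;

        Console.WriteLine("eye: " + eye + ", target: " + target);
    }
}
```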

Source Code of the OrbitPanCamera


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.DXGI;

namespace Apparat
{
    public class OrbitPanCamera
    {
        #region Singleton Pattern
        private static OrbitPanCamera instance = null;
        public static OrbitPanCamera Instance
        {
            get
            {
                if (instance == null)
                {
                    instance = new OrbitPanCamera();
                }
                return instance;
            }
        }
        #endregion

        #region Constructor
        private OrbitPanCamera()
        {
            eye = new Vector3(4, 2, 0);
            target = new Vector3(0, 0, 0);
            up = new Vector3(0, 1, 0);

            view = Matrix.LookAtLH(eye, target, up);
            perspective = Matrix.PerspectiveFovLH((float)Math.PI / 4, 1.3f, 0.1f, 100.0f); // near plane must be greater than zero
        }
        #endregion

        Vector3 eye;
        Vector3 target;
        Vector3 up;

        Matrix view = Matrix.Identity;
        Matrix perspective = Matrix.Identity;
        Matrix viewPerspective = Matrix.Identity;

        public Matrix View
        {
            get { return view; }
        }

        public void setPerspective(float fov, float aspect, float znear, float zfar)
        {
            perspective = Matrix.PerspectiveFovLH(fov, aspect, znear, zfar);
        }

        public void setView(Vector3 eye, Vector3 target, Vector3 up)
        {
            view = Matrix.LookAtLH(eye, target, up);
        }

        public Matrix Perspective
        {
            get { return perspective; }
        }

        public Matrix ViewPerspective
        {
            get { return view * perspective; }
        }

        float rotY = 0;

        public void rotateY(int value)
        {
            rotY = (value / 100.0f);
            Vector3 eyeLocal = eye - target;

            Matrix rotMat = Matrix.RotationY(rotY);
            eyeLocal = Vector3.TransformCoordinate(eyeLocal, rotMat);
            eye = eyeLocal + target;

            setView(eye, target, up);
        }
        float rotOrtho = 0;

        public void rotateOrtho(int value)
        {
            Vector3 viewDir = target - eye;
            Vector3 ortho = Vector3.Cross(viewDir, up);

            rotOrtho = (value / 100.0f);
            Matrix rotOrthoMat = Matrix.RotationAxis(ortho, rotOrtho);

            Vector3 eyeLocal = eye - target;
            eyeLocal = Vector3.TransformCoordinate(eyeLocal, rotOrthoMat);
            Vector3 newEye = eyeLocal + target;
            Vector3 newViewDir = target - newEye;
            float cosAngle = Vector3.Dot(newViewDir, up) / (newViewDir.Length() * up.Length());
            // only apply the rotation if the camera would not cross a pole
            if (cosAngle < 0.999f && cosAngle > -0.999f)
            {
                eye = eyeLocal + target;
                setView(eye, target, up);
            }
        }

        public void panX(int value)
        {
            float scaleFactor = 0.0f;
            if (value > 1)
            {
                scaleFactor = -0.05f;
            }
            else if (value < -1)
            {
                scaleFactor = 0.05f;
            }
            Vector3 viewDir = target - eye;
            Vector3 ortho = Vector3.Cross(viewDir, up);
            ortho.Normalize();
            scaleFactor = scaleFactor * (float)Math.Sqrt(viewDir.Length()) * 0.5f;
            Matrix scaling = Matrix.Scaling(scaleFactor, scaleFactor, scaleFactor);
            ortho = Vector3.TransformCoordinate(ortho, scaling);

            target = target + ortho;
            eye = eye + ortho;
            setView(eye, target, up);
        }

        public void panY(int value)
        {
            float scaleFactor = 0.00f;
            if (value > 1)
            {
                scaleFactor = -0.05f;
            }
            else if (value < -1 )
            {
                scaleFactor = 0.05f;
            }
            Vector3 viewDir = target - eye;
            scaleFactor = scaleFactor * (float)Math.Sqrt(viewDir.Length()) * 0.5f;
            viewDir.Y = 0.0f;
            viewDir.Normalize();
            Matrix scaling = Matrix.Scaling(scaleFactor, scaleFactor, scaleFactor);
            viewDir = Vector3.TransformCoordinate(viewDir, scaling);

            target = target + viewDir;
            eye = eye + viewDir;
            setView(eye, target, up);
        }


        float maxZoom = 3.0f;
        public void zoom(int value)
        {
            Vector3 viewDir = eye - target;

            float scaleFactor = 1.0f;
            if (value > 0)
            {
                scaleFactor = 1.1f;
            }
            else
            {
                if (viewDir.Length() > maxZoom)
                    scaleFactor = 0.9f;
            }

            Matrix scale = Matrix.Scaling(scaleFactor, scaleFactor, scaleFactor);
            viewDir.Normalize();
            viewDir = Vector3.TransformCoordinate(viewDir, scale);
            if (value > 0)
            {
                eye = eye + viewDir;
            }
            else
            {
                eye = eye - viewDir;
            }
            
            setView(eye, target, up);
        }
    }
}

The source code for the OrbitPanCamera is largely the same as for the OrbitCamera. New are the methods panX and panY, which translate the camera along the x- and y-direction of the screen, respectively.
Let's take a look at the panX method:

public void panX(int value)
{
  float scaleFactor = 0.0f;
  if (value > 1)
  {
    scaleFactor = -0.05f;
  }
  else if (value < -1)
  {
    scaleFactor = 0.05f;
  }
  Vector3 viewDir = target - eye;
  Vector3 ortho = Vector3.Cross(viewDir, up);
  ortho.Normalize();
  scaleFactor = scaleFactor * (float)Math.Sqrt(viewDir.Length()) * 0.5f;
  Matrix scaling = Matrix.Scaling(scaleFactor, scaleFactor, scaleFactor);
  ortho = Vector3.TransformCoordinate(ortho, scaling);

  target = target + ortho;
  eye = eye + ortho;
  setView(eye, target, up);
}
The pose (position and orientation) of the camera is determined by the three vectors eye, target and up. The eye vector holds the current position of the camera, the target vector is the point to look at, and the up vector defines the up direction. To pan sideways, the general idea is to translate the position of the camera and the target simultaneously. Therefore we compute the direction we are looking in (viewDir) and take its cross product with the up vector, which yields a vector orthogonal to the viewDir vector.
This orthogonal vector is added to the target and eye vectors, and a new view matrix is created by calling setView. I scale the orthogonal vector depending on the distance to the target, so that the translation is small when the camera is close to the target and larger when it is far away.
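The pan logic can be sketched independently of SlimDX with plain vector math. The following Python model mirrors panX above; the 0.05 step and the square-root distance scaling are taken from the C# code, while the function names are mine:

```python
import math

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def pan_x(eye, target, up, value):
    """Translate eye and target along the axis orthogonal to
    the view direction and the up vector, as in panX above."""
    step = -0.05 if value > 1 else (0.05 if value < -1 else 0.0)
    view_dir = tuple(t - e for t, e in zip(target, eye))
    dist = math.sqrt(sum(c * c for c in view_dir))
    ortho = cross(view_dir, up)
    length = math.sqrt(sum(c * c for c in ortho))
    ortho = tuple(c / length for c in ortho)
    # pan slower near the target, faster far away
    scale = step * math.sqrt(dist) * 0.5
    offset = tuple(c * scale for c in ortho)
    new_eye = tuple(e + o for e, o in zip(eye, offset))
    new_target = tuple(t + o for t, o in zip(target, offset))
    return new_eye, new_target
```

Because eye and target move by the same offset, the view direction, and hence the orientation of the camera, stays unchanged; only the position shifts.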

The implementation of the panY method is analogous:
public void panY(int value)
{
  float scaleFactor = 0.00f;
  if (value > 1)
  {
    scaleFactor = -0.05f;
  }
  else if (value < -1)
  {
    scaleFactor = 0.05f;
  }
  Vector3 viewDir = target - eye;
  scaleFactor = scaleFactor * (float)Math.Sqrt(viewDir.Length()) * 0.5f;
  viewDir.Y = 0.0f;
  viewDir.Normalize();
  Matrix scaling = Matrix.Scaling(scaleFactor, scaleFactor, scaleFactor);
  viewDir = Vector3.TransformCoordinate(viewDir, scaling);

  target = target + viewDir;
  eye = eye + viewDir;
  setView(eye, target, up);
}
This time we don't need the orthogonal vector, only the view direction of the camera. Again, this vector is scaled according to the distance to the target and added to the target and eye vectors of the camera. As above, a new view matrix is created from these new values.

Adapting the Mouse Event Handlers of the RenderControl

The only handlers we have to adapt are RenderControl_MouseMove and RenderControl_MouseWheel.
These are the handlers of the OrbitCamera:


private void RenderControl_MouseMove(object sender, MouseEventArgs e)
{
  if (dragging)
  {
    int currentX = e.X;
    deltaX = startX - currentX;
    startX = currentX;

    int currentY = e.Y;
    deltaY = startY - currentY;
    startY = currentY;

    if (e.Button == System.Windows.Forms.MouseButtons.Left)
    {
      OrbitCamera.Instance.rotateY(-deltaX);
      OrbitCamera.Instance.rotateOrtho(deltaY);
    }
  }
}

void RenderControl_MouseWheel(object sender, MouseEventArgs e)
{
  int delta = e.Delta;
  OrbitCamera.Instance.zoom(delta);
}
We have to update the references from OrbitCamera to OrbitPanCamera and call the methods for panning in the RenderControl_MouseMove handler. I will use the right mouse button for panning:

private void RenderControl_MouseMove(object sender, MouseEventArgs e)
{
  if (dragging)
  {
    int currentX = e.X;
    deltaX = startX - currentX;
    startX = currentX;

    int currentY = e.Y;
    deltaY = startY - currentY;
    startY = currentY;

    if (e.Button == System.Windows.Forms.MouseButtons.Left)
    {
      OrbitPanCamera.Instance.rotateY(-deltaX);
      OrbitPanCamera.Instance.rotateOrtho(deltaY);
    }
    else if (e.Button == System.Windows.Forms.MouseButtons.Right)
    {
      OrbitPanCamera.Instance.panX(deltaX);
      OrbitPanCamera.Instance.panY(deltaY);
    }
  }
}

void RenderControl_MouseWheel(object sender, MouseEventArgs e)
{
  int delta = e.Delta;
  OrbitPanCamera.Instance.zoom(delta);
}

Adapting the Renderables

We are not quite done yet, because we need the ViewPerspective matrix of our OrbitPanCamera to set the transformation in our Renderables. I will omit the code for this here, because it is just a matter of replacing the references to OrbitCamera with OrbitPanCamera in the Renderable classes.

In order to make the handling of cameras more flexible, I will introduce a CameraManager in the next tutorial, so we can have several cameras and do not need to hardcode the handling of mouse events and switching of cameras in the Renderables.

Result


You can download the source code here.

Thursday, March 21, 2013

Orbit Camera

Requirements

An Orbit Camera is a camera that orbits around a given point. We have to consider two angles: azimuth and pitch. Usually you rotate horizontally (azimuth) and vertically (pitch). The camera moves on the surface of a sphere, and it should not cross the poles of that sphere. Therefore we lock the camera at the poles and prohibit moving over them.

Crossing a pole would result in one of two unwanted behaviours, depending on the implementation: if the up vector of the camera flips its sign when crossing the pole, the scene is viewed upside down. If the up vector keeps its sign, the view instantaneously rotates by 180° around the vertical axis when passing over the pole, which is uncomfortable to watch.

Rotating around the vertical axis is accomplished by moving the mouse right and left. Moving the mouse up and down results in a motion of the camera around the horizontal axis. The mouse wheel is for zooming in and out.

Source Code for the Orbit Camera

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.DXGI;

namespace Apparat
{
    public class OrbitCamera
    {
        #region Singleton Pattern
        private static OrbitCamera instance = null;
        public static OrbitCamera Instance
        {
            get
            {
                if (instance == null)
                {
                    instance = new OrbitCamera();
                }
                return instance;
            }
        }
        #endregion

        #region Constructor
        private OrbitCamera()
        {
            eye = new Vector3(4, 2, 0);
            target = new Vector3(0, 0, 0);
            up = new Vector3(0, 1, 0);

            view = Matrix.LookAtLH(eye, target, up);
            perspective = Matrix.PerspectiveFovLH((float)Math.PI / 4, 1.3f, 0.1f, 100.0f); // near plane must be greater than zero
        }
        #endregion

        Vector3 eye;
        Vector3 target;
        Vector3 up;

        Matrix view = Matrix.Identity;
        Matrix perspective = Matrix.Identity;
        Matrix viewPerspective = Matrix.Identity;

        public Matrix View
        {
            get { return view; }
        }

        public void setPerspective(float fov, float aspect, float znear, float zfar)
        {
            perspective = Matrix.PerspectiveFovLH(fov, aspect, znear, zfar);
        }

        public void setView(Vector3 eye, Vector3 target, Vector3 up)
        {
            view = Matrix.LookAtLH(eye, target, up);
        }

        public Matrix Perspective
        {
            get { return perspective; }
        }

        public Matrix ViewPerspective
        {
            get { return view * perspective; }
        }

        float rotY = 0;

        public void rotateY(int value)
        {
            rotY = (value / 100.0f);
            Matrix rotMat = Matrix.RotationY(rotY);
            // rotating eye directly works here because the target is at the origin
            eye = Vector3.TransformCoordinate(eye, rotMat);
            setView(eye, target, up);
        }
        float rotOrtho = 0;

        public void rotateOrtho(int value)
        {
            Vector3 viewDir = target - eye;
            Vector3 ortho = Vector3.Cross(viewDir, up);

            rotOrtho = (value / 100.0f);
            Matrix rotOrthoMat = Matrix.RotationAxis(ortho, rotOrtho);

            Vector3 eyeLocal = eye - target;
            eyeLocal = Vector3.TransformCoordinate(eyeLocal, rotOrthoMat);
            Vector3 newEye = eyeLocal + target;
            Vector3 newViewDir = target - newEye;
            float cosAngle = Vector3.Dot(newViewDir, up) / (newViewDir.Length() * up.Length());
            // only apply the rotation if the camera would not cross a pole
            if (cosAngle < 0.9f && cosAngle > -0.9f)
            {
                eye = eyeLocal + target;
                setView(eye, target, up);
            }
        }


        float maxZoom = 3.0f;
        public void zoom(int value)
        {
            float scaleFactor = 1.0f;
            if (value > 0)
            {
                scaleFactor = 1.1f;
            }
            else
            {
                if ((eye - target).Length() > maxZoom)
                    scaleFactor = 0.9f;
            }

            // scaling eye directly assumes the target is at the origin
            Matrix scale = Matrix.Scaling(scaleFactor, scaleFactor, scaleFactor);
            eye = Vector3.TransformCoordinate(eye, scale);
            setView(eye, target, up);
        }
    }
}


The pose of the camera is defined by three vectors: up, eye and target. Up is a direction vector that defines the up direction. Eye is the position of the camera, and target is the position to look at. These vectors are set in the constructor of this class and are needed to create the look-at matrix, which we call view.

Here is the reference for the look-at matrix:
http://slimdx.org/docs/html/M_SlimDX_Matrix_LookAtLH.htm
Every time we change one of the three vectors up, eye or target, we call the setView method, which creates a new view matrix.
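For reference, a left-handed look-at matrix of the kind LookAtLH produces can be sketched in plain Python. This is a sketch using Direct3D's row-vector convention; the helper names are mine:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(v):
    length = math.sqrt(_dot(v, v))
    return tuple(c / length for c in v)

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at_lh(eye, target, up):
    """Build a left-handed view matrix from eye, target and up."""
    zaxis = _norm(tuple(t - e for t, e in zip(target, eye)))  # view direction
    xaxis = _norm(_cross(up, zaxis))                          # camera right
    yaxis = _cross(zaxis, xaxis)                              # camera up
    return [
        [xaxis[0], yaxis[0], zaxis[0], 0.0],
        [xaxis[1], yaxis[1], zaxis[1], 0.0],
        [xaxis[2], yaxis[2], zaxis[2], 0.0],
        [-_dot(xaxis, eye), -_dot(yaxis, eye), -_dot(zaxis, eye), 1.0],
    ]
```

The last row translates the world so the eye ends up at the origin; the upper three rows rotate the world into the camera's axes.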

Next we create the perspective matrix.
Reference: http://slimdx.org/docs/html/M_SlimDX_Matrix_PerspectiveFovLH.htm

The method rotateY is called when moving the mouse left or right and performs a rotation around the y-axis.

The method rotateOrtho is called when moving the mouse up or down. It is named rotateOrtho because the axis of rotation is orthogonal to both the up vector and the direction vector from the eye to the target. Here we also prevent the camera from crossing the poles.
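The pole lock boils down to a single test: compute the cosine of the angle between the prospective view direction and the up vector, and reject the rotation when the camera would come too close to a pole. A minimal Python sketch of that test (the 0.9 threshold matches the C# code above; the function name is mine):

```python
import math

def allows_rotation(new_eye, target, up, limit=0.9):
    """Return True if the camera at new_eye is not too close to
    a pole, using the same cosine test as rotateOrtho."""
    view_dir = tuple(t - e for t, e in zip(target, new_eye))
    dot = sum(v * u for v, u in zip(view_dir, up))
    len_v = math.sqrt(sum(c * c for c in view_dir))
    len_u = math.sqrt(sum(c * c for c in up))
    cos_angle = dot / (len_v * len_u)
    return -limit < cos_angle < limit
```

If the test fails, the rotation is simply discarded and the camera stays where it is, which is exactly what the if statement in rotateOrtho does.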

The zoom method is called when using the mouse wheel.
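Conceptually, zooming moves the eye along the line to the target while refusing to get closer than a minimum distance. The following is a simplified multiplicative sketch of that idea in Python; the 1.1/0.9 factors and the minimum distance of 3.0 follow the C# code, but the exact step size of the original differs slightly:

```python
import math

def zoom(eye, target, wheel, min_dist=3.0):
    """Move eye along the view direction; scale the distance by
    1.1 when zooming out, 0.9 when zooming in, but never go
    below min_dist."""
    view = tuple(e - t for e, t in zip(eye, target))
    dist = math.sqrt(sum(c * c for c in view))
    if wheel > 0:
        factor = 1.1          # zoom out
    elif dist > min_dist:
        factor = 0.9          # zoom in, still far enough away
    else:
        factor = 1.0          # locked at the minimum distance
    return tuple(t + v * factor for t, v in zip(target, view))
```

The minimum-distance check prevents the eye from collapsing onto the target, where the view direction would become undefined.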

Adapt the RenderControl

In order to control the camera with the mouse, we need to interact with the RenderControl. To do so, I handle four events:
  • MouseUp
  • MouseDown
  • MouseMove
  • MouseWheel

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Apparat.Renderables;

namespace Apparat
{
    public partial class RenderControl : UserControl
    {
        public RenderControl()
        {
            InitializeComponent();
            this.MouseWheel += new MouseEventHandler(RenderControl_MouseWheel);
        }

        public void init()
        {
            DeviceManager.Instance.createDeviceAndSwapChain(this);
            RenderManager.Instance.init();

            Grid grid = new Grid(10, 1.0f);
            TriangleEF triangle = new TriangleEF();
            Scene.Instance.addRenderObject(triangle);
            Scene.Instance.addRenderObject(grid);
        }

        public void shutDown()
        {
            RenderManager.Instance.shutDown();
            DeviceManager.Instance.shutDown();
        }

        public bool dragging = false;
        int startX = 0;
        int deltaX = 0;

        int startY = 0;
        int deltaY = 0;

        private void RenderControl_MouseUp(object sender, MouseEventArgs e)
        {
            dragging = false;
        }

        private void RenderControl_MouseDown(object sender, MouseEventArgs e)
        {
            dragging = true;
            startX = e.X;
            startY = e.Y;
        }

        private void RenderControl_MouseMove(object sender, MouseEventArgs e)
        {
            if (dragging)
            {
                int currentX = e.X;
                deltaX = startX - currentX;
                startX = currentX;

                int currentY = e.Y;
                deltaY = startY - currentY;
                startY = currentY;

                if (e.Button == System.Windows.Forms.MouseButtons.Left)
                {
                    OrbitCamera.Instance.rotateY(-deltaX);
                    OrbitCamera.Instance.rotateOrtho(deltaY);
                }
            }
        }

        void RenderControl_MouseWheel(object sender, MouseEventArgs e)
        {
            int delta = e.Delta;
            OrbitCamera.Instance.zoom(delta);
        }
    }
}

Adapt Renderables

The camera has a property called ViewPerspective, which returns the product of the camera's view matrix and its perspective matrix. To set the corresponding transformation in the renderables, this property has to be read in the render method of the renderables, e.g. the grid renderable:

public override void render()
{
  Matrix ViewPerspective = OrbitCamera.Instance.ViewPerspective;
  tmat.SetMatrix(ViewPerspective);

  // configure the Input Assembler portion of the pipeline with the vertex data
  DeviceManager.Instance.context.InputAssembler.InputLayout = layout;
  DeviceManager.Instance.context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineList;
  DeviceManager.Instance.context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, 12, 0));

  technique = effect.GetTechniqueByName("Render");

  EffectTechniqueDescription techDesc;
  techDesc = technique.Description;

  for (int p = 0; p < techDesc.PassCount; ++p)
  {
    technique.GetPassByIndex(p).Apply(DeviceManager.Instance.context);
    DeviceManager.Instance.context.Draw(numVertices, 0);
  }
}

Here we get the ViewPerspective matrix from the camera and pass it to the Effect variable. The results can be seen in the videos below.
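Computing view * perspective once and reusing the product is valid because matrix multiplication is associative: transforming a vertex by the view matrix and then by the perspective matrix gives the same result as transforming it by the combined matrix. A quick numeric check with 4x4 row-vector matrices (plain Python; the two matrices are hypothetical stand-ins, not the actual camera matrices):

```python
def matmul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(v, m):
    """Row vector times 4x4 matrix."""
    return tuple(sum(v[k] * m[k][j] for k in range(4)) for j in range(4))

# stand-in "view": a translation by (2, 3, 4)
view = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [2, 3, 4, 1]]
# stand-in "perspective": a toy projection where w takes the z value
persp = [[2, 0, 0, 0],
         [0, 2, 0, 0],
         [0, 0, 1, 1],
         [0, 0, 0, 0]]

v = (1.0, 1.0, 1.0, 1.0)
a = transform(transform(v, view), persp)   # two separate transforms
b = transform(v, matmul(view, persp))      # one combined transform
```

Both paths yield the same result, which is why the ViewPerspective getter can safely hand out the precombined matrix.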

Results

This video shows how the camera rotates around the center of the global coordinate system. Because the transformation of the triangle was not yet adjusted to the camera's transformation, the triangle still rotates in the middle of the window.

This video was made after the transformation of the triangle was adapted. Now the triangle is stationary. Because the triangle is culled on one side, it is invisible when the camera looks at it from the other side.

You can download the source code to this tutorial here.

Tuesday, March 19, 2013

Rendering a Grid with the LineList Primitive

In the next tutorials I am going to integrate a camera class. To keep your orientation while moving the camera around, it is common to render a grid as a reference.

Source Code of the Grid Renderable


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SlimDX.D3DCompiler;
using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.DXGI;

namespace Apparat.Renderables
{
    public class Grid : Renderable
    {
        SlimDX.Direct3D11.Buffer vertexBuffer;
        DataStream vertices;
        
        InputLayout layout;

        int numVertices = 0;

        ShaderSignature inputSignature;
        EffectTechnique technique;
        EffectPass pass;

        Effect effect;
        EffectMatrixVariable tmat;


        public Grid(int cellsPerSide, float cellSize)
        {
            try
            {
                using (ShaderBytecode effectByteCode = ShaderBytecode.CompileFromFile(
                    "transformEffectRasterizer.fx",
                    "Render",
                    "fx_5_0",
                    ShaderFlags.EnableStrictness,
                    EffectFlags.None))
                {
                    effect = new Effect(DeviceManager.Instance.device, effectByteCode);
                    technique = effect.GetTechniqueByIndex(0);
                    pass = technique.GetPassByIndex(0);
                    inputSignature = pass.Description.Signature;
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }

            tmat = effect.GetVariableByName("gWVP").AsMatrix();
          

            int numLines = cellsPerSide+1;
            float lineLength = cellsPerSide * cellSize;

            float xStart = -lineLength / 2.0f;
            float yStart = -lineLength / 2.0f;

            float xCurrent = xStart;
            float yCurrent = yStart;

            numVertices = 2 * 2 * numLines;
            int SizeInBytes = 12 * numVertices;

            vertices = new DataStream(SizeInBytes, true, true);

            for (int y = 0; y < numLines; y++)
            {
                vertices.Write(new Vector3(xCurrent, 0, yStart));
                vertices.Write(new Vector3(xCurrent, 0, yStart + lineLength));
                xCurrent += cellSize;
            }

            for (int x = 0; x < numLines; x++)
            {
                vertices.Write(new Vector3(xStart, 0, yCurrent));
                vertices.Write(new Vector3(xStart + lineLength, 0, yCurrent));
                yCurrent += cellSize;
            }

            vertices.Position = 0;

            // create the vertex layout and buffer
            var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0) };
            layout = new InputLayout(DeviceManager.Instance.device, inputSignature, elements);
            vertexBuffer = new SlimDX.Direct3D11.Buffer(DeviceManager.Instance.device, vertices, SizeInBytes, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
            

        }

        public override void render()
        {
            Matrix ViewPerspective = Matrix.Identity;

            tmat.SetMatrix(ViewPerspective);

            // configure the Input Assembler portion of the pipeline with the vertex data
            DeviceManager.Instance.context.InputAssembler.InputLayout = layout;
            DeviceManager.Instance.context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineList;
            DeviceManager.Instance.context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, 12, 0));

            technique = effect.GetTechniqueByName("Render");

            EffectTechniqueDescription techDesc;
            techDesc = technique.Description;

            for (int p = 0; p < techDesc.PassCount; ++p)
            {
                technique.GetPassByIndex(p).Apply(DeviceManager.Instance.context);
                DeviceManager.Instance.context.Draw(numVertices, 0);
            }
            
        }

        public override void dispose()
        {
            inputSignature.Dispose();
        }
    }
}

The constructor of the grid takes the number of cells per side and the cell size as arguments. In the constructor the vertices of the grid are created. I have arranged the grid so that its center corresponds with the origin of the grid's local coordinate system. If you compare this code to the code for the triangle used in the previous tutorial, only the creation of the vertices differs.
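The vertex layout can be checked with a small Python sketch that mirrors the two loops of the constructor (the function name is mine):

```python
def grid_vertices(cells_per_side, cell_size):
    """Generate line-list vertices for a centered grid,
    mirroring the two loops in the Grid constructor."""
    num_lines = cells_per_side + 1
    line_length = cells_per_side * cell_size
    x_start = y_start = -line_length / 2.0
    verts = []
    # lines running parallel to the z-axis
    x = x_start
    for _ in range(num_lines):
        verts.append((x, 0.0, y_start))
        verts.append((x, 0.0, y_start + line_length))
        x += cell_size
    # lines running parallel to the x-axis
    y = y_start
    for _ in range(num_lines):
        verts.append((x_start, 0.0, y))
        verts.append((x_start + line_length, 0.0, y))
        y += cell_size
    return verts
```

For a grid with 10 cells per side and cell size 1.0 this yields 2 * 2 * 11 = 44 vertices, matching numVertices in the constructor, and the grid spans from -5 to +5 on both axes, so its center sits at the origin.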

The Render Method

This is the render method of the triangle of the last tutorial. The transformation matrix of the triangle is set via the Effect Framework.

public override void render()
{
  rot += 0.01f;
  rotMat = Matrix.RotationY(rot);
  tmat.SetMatrix(Matrix.Transpose(rotMat));

  // configure the Input Assembler portion of the pipeline with the vertex data
  DeviceManager.Instance.context.InputAssembler.InputLayout = layout;
  DeviceManager.Instance.context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
  DeviceManager.Instance.context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, 12, 0));

  technique = effect.GetTechniqueByName("Render");

  EffectTechniqueDescription techDesc;
  techDesc = technique.Description;

  for (int p = 0; p < techDesc.PassCount; ++p)
  {
    technique.GetPassByIndex(p).Apply(DeviceManager.Instance.context);
    DeviceManager.Instance.context.Draw(3, 0);
  }
}

Compare this to the render method of the grid:

public override void render()
{
  Matrix ViewPerspective = Matrix.Identity;

  tmat.SetMatrix(ViewPerspective);

  // configure the Input Assembler portion of the pipeline with the vertex data
  DeviceManager.Instance.context.InputAssembler.InputLayout = layout;
  DeviceManager.Instance.context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineList;
  DeviceManager.Instance.context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, 12, 0));

  technique = effect.GetTechniqueByName("Render");

  EffectTechniqueDescription techDesc;
  techDesc = technique.Description;

  for (int p = 0; p < techDesc.PassCount; ++p)
  {
    technique.GetPassByIndex(p).Apply(DeviceManager.Instance.context);
    DeviceManager.Instance.context.Draw(numVertices, 0);
  }
}
Here the transformation matrix, called ViewPerspective, is set to the identity matrix, resulting in no transformation. This is the point where the transformation from the camera will come into play in the next tutorial.

While in the triangle class we could call the Draw method with a fixed count of 3 vertices, in the grid class we have to use the variable numVertices, as the vertices of the lines are created in the constructor and their number depends on the number of cells of our grid.

The next thing to note is that the primitive topology for the triangle was PrimitiveTopology.TriangleList, while the primitive topology for the grid is PrimitiveTopology.LineList.

The reference to the SlimDX Primitive Topology Enumeration is here:
http://slimdx.org/docs/html/T_SlimDX_Direct3D11_PrimitiveTopology.htm

The most common primitives are:

  • PointList
  • LineList
  • LineStrip
  • TriangleList
  • TriangleStrip
We have used the LineList and the TriangleList so far.

Result

So far, we get the following picture, when compiling and executing the code:


The result is quite sobering, as we see just one additional line in the center of the window. This is because the view is aligned with the horizontal plane, so we see the grid from the side.
You can try to rotate the grid programmatically like in the triangle class. Hint: Matrix.RotationX(float angle) is your friend.
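If you want to try the hint, a rotation about the x-axis maps a point (x, y, z) to (x, y·cos a − z·sin a, y·sin a + z·cos a). A quick sketch of that mapping in Python (standard math convention; Matrix.RotationX encodes the same rotation as a 4x4 matrix):

```python
import math

def rotate_x(point, angle):
    """Rotate a 3D point around the x-axis by angle (radians)."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (x, y * c - z * s, y * s + z * c)
```

Rotating the grid by 90 degrees this way stands it upright, so it becomes fully visible even with the default side-on view.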
In the next tutorial I will introduce an Orbit Camera, that allows you to zoom and rotate around the origin of the global coordinate system.

You can download the source code to this tutorial here.

Setting Transformations in a Shader with the Effect Framework

In the previous tutorial I showed how to set a transformation in a shader via a Constant Buffer, resulting in a rotating triangle. In this tutorial I will implement the same functionality with the Effect Framework.

Triangle Source Code


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SlimDX.D3DCompiler;
using SlimDX;
using SlimDX.Direct3D11;
using SlimDX.DXGI;

namespace Apparat.Renderables
{
    public class TriangleEF : Renderable
    {
        ShaderSignature inputSignature;
        EffectTechnique technique;
        EffectPass pass;

        Effect effect;

        InputLayout layout;
        SlimDX.Direct3D11.Buffer vertexBuffer;

        EffectMatrixVariable tmat;

        public TriangleEF()
        {
            try
            {
                using (ShaderBytecode effectByteCode = ShaderBytecode.CompileFromFile(
                    "transformEffect.fx",
                    "Render",
                    "fx_5_0",
                    ShaderFlags.EnableStrictness,
                    EffectFlags.None))
                {
                    effect = new Effect(DeviceManager.Instance.device, effectByteCode);
                    technique = effect.GetTechniqueByIndex(0);
                    pass = technique.GetPassByIndex(0);
                    inputSignature = pass.Description.Signature;
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }

            tmat = effect.GetVariableByName("gWVP").AsMatrix();

            // create test vertex data, making sure to rewind the stream afterward
            var vertices = new DataStream(12 * 3, true, true);
            vertices.Write(new Vector3(0.0f, 0.5f, 0.5f));
            vertices.Write(new Vector3(0.5f, -0.5f, 0.5f));
            vertices.Write(new Vector3(-0.5f, -0.5f, 0.5f));
            vertices.Position = 0;

            // create the vertex layout and buffer
            var elements = new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0) };
            layout = new InputLayout(DeviceManager.Instance.device, inputSignature, elements);
            vertexBuffer = new SlimDX.Direct3D11.Buffer(
                DeviceManager.Instance.device,
                vertices,
                12 * 3,
                ResourceUsage.Default,
                BindFlags.VertexBuffer,
                CpuAccessFlags.None,
                ResourceOptionFlags.None,
                0);
        }

        public override void dispose()
        {
            effect.Dispose();
            inputSignature.Dispose();
            vertexBuffer.Dispose();
            layout.Dispose();
        }

        float rot = 0.0f;
        Matrix rotMat;

        public override void render()
        {
            rot += 0.01f;
            rotMat = Matrix.RotationY(rot);
            tmat.SetMatrix(rotMat);
           
            // configure the Input Assembler portion of the pipeline with the vertex data
            DeviceManager.Instance.context.InputAssembler.InputLayout = layout;
            DeviceManager.Instance.context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
            DeviceManager.Instance.context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, 12, 0));
            
            technique = effect.GetTechniqueByName("Render");

            EffectTechniqueDescription techDesc;
            techDesc = technique.Description;

            for (int p = 0; p < techDesc.PassCount; ++p)
            {
                technique.GetPassByIndex(p).Apply(DeviceManager.Instance.context);
                DeviceManager.Instance.context.Draw(3, 0);
            }
        }
    }
}

Shader Source Code


matrix gWVP;

float4 VShader(float4 position : POSITION) : SV_POSITION
{
 return mul( position, gWVP);
}

float4 PShader(float4 position : SV_POSITION) : SV_Target
{
 return float4(0.0f, 0.0f, 1.0f, 1.0f);
}

technique10 Render
{
 pass P0
 {
  SetVertexShader( CompileShader( vs_4_0, VShader() ));
  SetGeometryShader( NULL );
  SetPixelShader( CompileShader( ps_4_0, PShader() ));
 }
}

Explaining the Shader Source Code

In contrast to the shader in the previous tutorial, I do not declare the matrix variable gWVP in a Constant Buffer, but directly as a matrix.

Also, when using the Effect Framework you have to define a Technique with at least one Pass:


technique10 Render
{
 pass P0
 {
  SetVertexShader( CompileShader( vs_4_0, VShader() ));
  SetGeometryShader( NULL );
  SetPixelShader( CompileShader( ps_4_0, PShader() ));
 }
}

The Technique is your interface to the shader from your code, and in the Pass the shaders are set. I will explain in the next section how to interface with your shader.

Explaining the Triangle Source Code

In the last tutorial you had to load the VertexShader and the PixelShader separately. With the Effect Framework you just have to load the ShaderBytecode for the whole effect:

try
{
  using (ShaderBytecode effectByteCode = ShaderBytecode.CompileFromFile(
    "transformEffect.fx",
    "Render",
    "fx_5_0",
    ShaderFlags.EnableStrictness,
    EffectFlags.None))
  {
    effect = new Effect(DeviceManager.Instance.device, effectByteCode);
    technique = effect.GetTechniqueByIndex(0);
    pass = technique.GetPassByIndex(0);
    inputSignature = pass.Description.Signature;
  }
}
catch (Exception ex)
{
  Console.WriteLine(ex.ToString());
}

tmat = effect.GetVariableByName("gWVP").AsMatrix();

When compiling the ShaderBytecode you have to pass the name of the Technique as a parameter, in this case "Render". You create the effect by calling the constructor of the Effect class with the device and the ShaderBytecode as parameters. You also access the technique and the pass to get the InputSignature of the shader.

In order to access the matrix variable in your shader, you have to use the GetVariableByName function of the effect. The matrix from the effect file is assigned to a variable called tmat, which is of the type EffectMatrixVariable. We will use the variable tmat again in the render function of the triangle to set the transformation of the triangle:

public override void render()
{
  rot += 0.01f;
  rotMat = Matrix.RotationY(rot);
  tmat.SetMatrix(rotMat);
           
  // configure the Input Assembler portion of the pipeline with the vertex data
  DeviceManager.Instance.context.InputAssembler.InputLayout = layout;
  DeviceManager.Instance.context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
  DeviceManager.Instance.context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, 12, 0));
            
  technique = effect.GetTechniqueByName("Render");

  EffectTechniqueDescription techDesc;
  techDesc = technique.Description;

  for (int p = 0; p < techDesc.PassCount; ++p)
  {
    technique.GetPassByIndex(p).Apply(DeviceManager.Instance.context);
    DeviceManager.Instance.context.Draw(3, 0);
  }
}

The statement tmat.SetMatrix(rotMat) sets the variable for the transformation matrix in the effect.

Now we get a rotating triangle again:


Observe that the triangle is rotating counter-clockwise, while the triangle in the previous tutorial was rotating clockwise. As far as I know, DirectX uses a left-handed coordinate system by default, so the triangle should rotate clockwise as the rotation angle around the y-axis grows. This can be resolved by transposing the matrix before passing it to the Effect Framework: tmat.SetMatrix(Matrix.Transpose(rotMat)). With this change the triangle rotates clockwise again.
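Why does transposing flip the direction? For a pure rotation matrix the transpose is also the inverse, i.e. a rotation by the negative angle, so passing the transposed matrix rotates the triangle the other way around. Here is a small sketch of that identity, in Python purely to illustrate the math (it does not use the SlimDX API; the matrix layout mirrors Matrix.RotationY):

```python
import math

def rotation_y(angle):
    # Row-major rotation matrix around the y-axis,
    # laid out like D3DX/SlimDX Matrix.RotationY (3x3 part).
    c, s = math.cos(angle), math.sin(angle)
    return [[c,   0.0, -s],
            [0.0, 1.0, 0.0],
            [s,   0.0,  c]]

def transpose(m):
    return [list(row) for row in zip(*m)]

a = 0.01  # same increment as rot in the render function
rt = transpose(rotation_y(a))   # what Matrix.Transpose(rotMat) produces
rn = rotation_y(-a)             # rotation in the opposite direction

# The transpose equals the rotation by the negative angle.
assert all(abs(rt[i][j] - rn[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

So Matrix.Transpose(rotMat) is equivalent to Matrix.RotationY(-rot), which is exactly the reversal of direction we observe.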

You can download the source code here.