Monday, 14 July 2014

The Importance of Nothing

Just a quick tip for this article.

I have a lot of layers in my Unity Mecanim controller which I turn on and off as needed by setting the layer weight to one, usually fading it back to zero when it is no longer needed.  The layers are typically for upper-body-only movements such as reloading or throwing.
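A minimal sketch of that fading approach, assuming the upper-body layer is at index 1; the class name, field names and fade speed are illustrative, not the actual Diabolical code:

```csharp
using UnityEngine;

/// Fades an upper-body animation layer's weight in or out.
public class UpperBodyLayerFade : MonoBehaviour
{
    public int layerIndex = 1;      // assumed index of the upper-body layer
    public float fadeSpeed = 4f;    // weight change per second
    public bool layerActive;        // true fades the weight to one, false back to zero

    private Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Move the current layer weight towards the target each frame.
        float target = layerActive ? 1f : 0f;
        float weight = animator.GetLayerWeight(layerIndex);
        weight = Mathf.MoveTowards(weight, target, fadeSpeed * Time.deltaTime);
        animator.SetLayerWeight(layerIndex, weight);
    }
}
```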

I had struggled with unexpected jerky starts when the layer was enabled.

I eventually realised that each layer has a mandatory default state.  On the layers where it was useful to transition from Any State, the controller immediately starts that default state, which causes the jump, and then begins the transition from that incorrect state to the desired one.



Armed with that information, I simply added a default state with no animation attached.  It does nothing and has no transitions, but that fixed the problem.

Wednesday, 25 June 2014

Mecanim Animations

I have been working on moving about the game and getting the animations in place for the actions I will need.

One advantage of Unity is that there are lots of pre-produced kits to use.  Some are free, but most cost a small amount.  While working on the in-game movement I have tried lots of the free packages, worked through most of the tutorials and purchased a number of animations at modest prices.

I have found the Unity Mecanim system to be very powerful.  It has some great advantages, such as being able to retarget animations from one model to another.  Like any complex system, though, getting the best out of Mecanim is time consuming.

None of the assets I tried or purchased had a control system or set of animations that completely suited how I wanted Diabolical to behave.  I have therefore created the input controller and the Mecanim controller almost from scratch.

I created a test environment for myself using bits and pieces from the assets I purchased and some of the tutorials.  This made it easy to test changes.

Some of the best animations I purchased were designed for root motion, as are most of the latest Unity tech demos.  Root motion is where the animation itself moves the character within the world.  This leads to a much better looking result.  The trouble I found was that I could not get it to feel like a first person game.  It felt like watching a third person.  For example, instead of the character turning a corner as quickly as the player input demanded, the animation produced a nice-looking smooth curve that did not quite match what the player controls were doing.

Diabolical is an over-the-shoulder game, so although the view is third person the controls should feel like a first person game.


If I had more time or lots of people to work on it, I am sure root motion could be used to produce a good-feeling, good-looking first person solution.  Unfortunately I have neither the time nor the people; it's just me.  I have opted for the more traditional route: the input directly moves the player entity, and the animations do a best approximation of the movement to get there.

You end up with feet sliding and perhaps the odd lag in the animation, but to me the movement feels better from a player's perspective.
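The traditional route above can be sketched roughly as follows.  This is a simplified illustration, not my actual controller: the `"Speed"` animator parameter and the field names are assumptions for the example, and it presumes a CharacterController and an Animator on the same object.

```csharp
using UnityEngine;

/// Input moves the player entity directly; the animator is fed
/// the resulting speed so the animation approximates the movement.
[RequireComponent(typeof(CharacterController))]
public class DirectMovement : MonoBehaviour
{
    public float moveSpeed = 4f;    // illustrative movement speed

    private CharacterController controller;
    private Animator animator;

    void Awake()
    {
        controller = GetComponent<CharacterController>();
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // The input drives the entity directly, not root motion.
        Vector3 input = new Vector3(Input.GetAxis("Horizontal"),
                                    0f,
                                    Input.GetAxis("Vertical"));
        Vector3 move = transform.TransformDirection(input) * moveSpeed;
        controller.SimpleMove(move);

        // The animation follows as a best approximation, hence
        // the occasional foot sliding.
        animator.SetFloat("Speed", move.magnitude);
    }
}
```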

It's taken a few weeks and I've had a few frustrating days in that time but I am now happy with the result.


I have more and better animations than I had in the XNA version.  In Unity I already have rifle and pistol aiming up and down, walking, running, jumping, shooting and reloading.  I just need to add throwing grenades and I am done with this bit.

Monday, 12 May 2014

Blender to Mixamo to Blender to Unity

As with all 3D modelling, one of the biggest problems is transferring models between applications while still retaining all the details.

The following is what works for me to get a mesh modelled in Blender 2.70a into Mixamo to create the rig, then back into Blender and on into Unity 4.3, still in a state to animate.

You may not need to get your model back into Blender, but I need to add some simple animations not yet available from Mixamo.

Blender to Mixamo

Export from Blender using the FBX exporter that ships with Blender 2.70a.



You only need the mesh but the settings matter little.  Mixamo appears to sort out most things.  I used the defaults of -Z Forward and +Y Up.  I do change the Path Mode to 'Strip Path' so the FBX expects the texture file in the same folder as the FBX file.

Compress the FBX file and the texture file into a zip file and upload just that zip file to Mixamo's rig upload.


It took several attempts with different file types before I worked out the combination above.  Even the zip file was a problem.  For some reason a zip file created with 7-Zip did not work, but one created with WinRAR did.

Once in Mixamo the Auto-Rigging worked very well.  Mixamo does a much better job of weight painting than I can manage.

The next problem was exporting in a format that I could read back into Blender.

Mixamo To Blender

I found that the standard Collada download worked with Blender 2.70a, rather than the version intended for older versions of Blender.



I added the T-pose animation and downloaded it as a zipped Collada file.

I then imported that into Blender using the Collada (Default) importer shipped with version 2.70a of Blender.



I never understand why most importers change the scale to 0.01.

I simply change the scale on all three axes back to 1.0 and the model and the armature are as I expect.


I downloaded a version with an Idle animation and that worked as well.

Blender to Unity

Blender to Unity poses few problems.  Export to FBX and drag into Unity.



Again I use the standard settings in Blender; I only include the Armature and Mesh, and change the Path Mode to 'Strip Path'.

In Unity, drag the FBX file exported from Blender into a folder in the Project view.


The model imports face down, but the armature and animation will orient the model the correct way up without you having to rotate the imported mesh.



Like the other importers, Unity imports at a scale of 0.01, so it needs to be changed back to a scale of 1.0.



If necessary, drag the texture in, create a material using it, then drag the material on to the mesh renderer for the model.


Prepare to Animate

To use any of the animations it will be necessary to configure the Avatar Rig in Unity.



Select the model and press the Rig button.

Change the animation type to Humanoid and press Apply.



The chances are it will work it all out itself, but if necessary you can press the Configure button.



At this point you should be able to use Mecanim animations on the imported model.  Job done.

Friday, 9 May 2014

Head Texture UV Unwrap

I've been working on the controller and animations and this has led me back to character creation.

I have tried creating my own animations and although acceptable they are not as good as those I can purchase for a modest fee for use with the Unity3D Mecanim system.

In the process of trying out the purchased animations and a few free bundles with my own models, I found some shortcomings in my skinning.  To fix that I tried Mixamo.  It did a fantastic job of adding a rig and skinning my character.

The test animations also showed where my mesh needed improvement.  Before creating a final skinned mesh using Mixamo I decided I should fix the mesh, and that led me eventually to recreating the UV map for the character's head.

Texturing a Head

I have never textured a human head before, so to remind myself of the technique I used I decided to put it down in this blog.

There are loads of tutorials out there for UV mapping and texturing but the two I found most useful are:
For UV unwrapping: http://bgdm.katorlegaz.com/lscm_tute/lscm_tute.htm
For creating the texture for the head: http://www.3dm3.com/tutorials/maya/texturing/
This one also has a full body tutorial and was also helpful: http://cgi.tutsplus.com/articles/game-character-creation-series-kila-chapter-3-uv-mapping--cg-26754

I am not going to go into detail because those tutorials do that.  I am just going to mention the bits I wanted answers to.  The only thing to note about the tutorials above is that they refer to an older version of Blender.  For me, the standard Unwrap method created the head UV in one go without any adjustment needed.




Where to create seams?
I decided:
- From just above the hairline back over the head to the back of the neck.
- From under the chin down the centre of the neck at the front
- Across the mouth
- The eyes should be separate and therefore already a hollow cutout

I tried a horizontal cut on the forehead but I found this made it difficult to add hair without a visible line.




Do I separate the ears?
I had seen some examples with the ears being removed and dealt with separately.  I tried that and found the joint was visible.  For me it works best if I keep the ears attached.

Can I use a downloaded head texture created for another model?
No.  I tried adjusting the UV to fit a texture and it did not work.  It was stretched and all wrong in places.  What worked for me was to create the texture to fit the UV map as generated by Blender.  The generated UV has done the maths to minimise stretching.

Basic Technique

Start by exporting the UV map from Blender.  You won't get very far unless you know what you are trying to line the texture up with.  The exported UV image gives you a semi-transparent image with the UV lines on it to use as a layer in your photo editor.



Creating the finished texture is also all about layers.

I started with a background of skin colour.  This was created using a photo of a face.  Using the clone tool in Gimp I extended the flesh in all directions until I had a rectangle of flesh.  The important bit is that it retained tonal variation, so it still looked like skin.

The result was still very rough, with duplicated wrinkles and blemishes.  I used various blurs and finished with a giant smudge brush at about 60% opacity to end up with an even result.


That skin is the base layer over which everything else is added.

The next layer I created was for the eyes and nose, taken from a different photo of a face.  I feathered the edge using a large eraser with a faded edge.  It was deliberately a rough freehand cutout so the blend was more natural.  I kept the eyebrows as well.

I repeatedly scaled the layer until the eyes lined up with the UV map.  I then scaled vertically to get the nose the right length.

I used small clone and smudge brushes with blurred edges to make the area round the eyes redder and remove any overlap of the eye in the photo with the UV map.  I then filled in the eyes with the same colour and smoothed the result using the smudge brush.

I smudged out the nostrils from the texture because my model has geometry to form those and it looks odd if the texture does not align with the geometry of the nostrils.

I added a layer for the mouth and adjusted the scale to fit the UV map.

The same with the layer for the ears.



For all the added layers above the skin I set the opacity to 50% so they all blended nicely.



At this point I decided to try it on the model and I am pleased with the results.



I added blemishes and the hair the same way using additional layers.



In my opinion hair looks best as a separate mesh made of strips so the hair in this texture is just a placeholder.

Now that is done I can add the rig and skinning information using Mixamo.


Sunday, 27 April 2014

Avoid Photon OnDestroy Warnings

I like my code to compile and run without errors.  The Photon Unity Networking (PUN) code did not.

When exiting scenes I would get the following errors for each instantiated object in the level:

OnDestroy for PhotonView View (0)1001 on CharacterPlaceholder(Clone)  but GO is still in instantiatedObjects. instantiationId: 1001. Use PhotonNetwork.Destroy().

Failed to 'network-remove' GameObject because it's null.


I searched the Internet and the Exit Games forums but found little that actually solved the problem, though I found plenty of other people reporting the same issue.



I tried reading the PUN documentation, but that was unhelpful, as were the samples I tried, so I experimented.



I now have a workaround which avoids the warnings and errors.



/// <summary>
/// Tidies up before exiting the scene.
/// Attempts to destroy network objects correctly and
/// pauses the message queue while the scene changes.
/// </summary>
private static void TidyUpBeforeExit()
{
    // Find every PhotonView game object in the scene.
    PhotonView[] views = GameObject.FindObjectsOfType<PhotonView>();

    // Whichever client exits first cannot destroy the
    // objects owned by another client.
    foreach (PhotonView view in views)
    {
        GameObject obj = view.gameObject;

        if (view.isMine)
        {
            // Clients can destroy their own instantiated objects.
            PhotonNetwork.Destroy(obj);
        }
        else
        {
            // Clients cannot destroy objects instantiated by
            // other clients.  Removing the instantiated object
            // from the list before the PhotonView is destroyed
            // avoids the error.
            if (PhotonNetwork.networkingPeer.instantiatedObjects
                    .ContainsKey(view.instantiationId))
            {
                PhotonNetwork.networkingPeer.instantiatedObjects
                    .Remove(view.instantiationId);
            }
        }
    }

    // By pausing and restarting the messages manually I can
    // start any client or server in any order and all spawn
    // messages are retained.
    PhotonNetwork.SetSendingEnabled(0, false);
    PhotonNetwork.isMessageQueueRunning = false;
}



I simply call that method before any scene change.
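For example, a hypothetical scene-change wrapper might look like this.  The method and scene names are illustrative, and `Application.LoadLevel` is the Unity 4.x scene-loading call:

```csharp
/// A hypothetical wrapper around a scene change; the
/// method name and scene name are illustrative only.
private static void ChangeScene(string sceneName)
{
    // Destroy owned network objects and pause the
    // message queue before the scene unloads.
    TidyUpBeforeExit();
    Application.LoadLevel(sceneName);
}
```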

Thursday, 24 April 2014

Manual vs Automation

I spent several hours trying to use what I expected to be better methods to control spawning characters when the level changes in a network game.  I'm using the Photon Unity Networking (PUN) methods.

According to the following forum post the automated methods should handle pausing messages when the scene changes.
http://forum.exitgames.com/viewtopic.php?f=17&t=2575

PhotonNetwork.automaticallySyncScene = true
PhotonNetwork.LoadLevel(...)


As far as I can tell they work, but the trouble with automation is that it is only useful if the person designing it was thinking the same way as you.  In this case, Exit Games were not.


I have a scene selection popup and a lobby scene before the level loads.


The server and the client can sit at that lobby and join the game when they are ready.

By manually controlling the network message flow I can allow the server and the client to join in any order.   If I use the automated methods the server must be in the game scene before any of the clients!

PhotonNetwork.SetSendingEnabled(0, false);
PhotonNetwork.isMessageQueueRunning = false;

Obviously I prefer my method.  I simply disable the messages before loading a level and start them again only once the scene has started, just before the spawn message is sent.
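On the new scene's side, the manual flow can be sketched like this; the spawn call is a hypothetical placeholder for however the game creates the local player:

```csharp
// Runs when the new scene has started.  Restart the
// message flow, then send the spawn message.  A sketch
// of the manual approach; SpawnLocalPlayer is a
// hypothetical placeholder method.
void Start()
{
    PhotonNetwork.isMessageQueueRunning = true;
    PhotonNetwork.SetSendingEnabled(0, true);
    SpawnLocalPlayer();
}
```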

The only reason I tried to change to the automated methods was to get rid of some annoying PhotonNetwork warning and error log messages whenever I exit a scene.

OnDestroy for PhotonView View (0)1001 on CharacterPlaceholder(Clone)  but GO is still in instantiatedObjects. instantiationId: 1001. Use PhotonNetwork.Destroy().

Failed to 'network-remove' GameObject because it's null.


The automated methods did not fix those either!

Back to finding a fix for the errors!

Friday, 18 April 2014

Unity Gizmos

I've been using Unity 3D for months now and I somehow missed Gizmos as a concept.  I had seen the camera and speaker icons in the editor view, but I think I had just assumed they were built-in symbols for built-in objects and had a blind spot towards them.

I now know better.  You can easily create your own icons which can be included in the scene.  I started using them when I wanted to make spawn points visible in the scene.



It is very easy to attach your icon to any object in a scene and that icon displays over the object.




That works well if you want to mark otherwise invisible points in your scene.  An example of this might be waypoints for AI controlled characters.

I prefer to create the Gizmos in code.  This adds them to the Gizmos list in the game view.



Creating the Icon

This can be done in any image editor and just needs whatever shape you want with a transparent background.

The image can be any size if you intend to use the default automatic scaling method.  For good quality I use images between 512 and 1024 pixels high or wide.  If you don't use the automatic scaling, an image of those sizes appears quite small in the 3D scene.

I used Inkscape to create the following image.



The important bit to know before you can use the image in your code is that it must be in the special Gizmos folder in Unity.

There can only be one Gizmos folder and it must be in the root of the Assets folder.



If you try to put your icon anywhere else, it will not work as a Gizmo in code.

Using the Icon for Your Object

If you can write any Unity code at all, this is as simple as it gets.  Just add the following to the code for your object, or create a small script containing it.



/// http://docs.unity3d.com/Documentation/ScriptReference/Gizmos.html
void OnDrawGizmos()
{
    // The icon image must be in the special folder called
    // 'Gizmos' in the root of Assets.  There can only be
    // one Gizmos folder.
    Gizmos.DrawIcon(transform.position, "PersonIcon.png", true);
}



That's it, you now have a Gizmo shown wherever you position that object.