[Internship] Developing a video game for Kinect with Unity3D

This work was carried out during an internship with the Gaming department of the company Peaksource. Our project is a game that combines advergaming and motion gaming.

Having set the goal of developing a game that is both intuitive and fun, we ended up adopting the concept of the game show “Hole in the Wall” (known as “Human Tetris” in Japan).

The goal is to pass through crazy shapes cut into a wall that moves towards the player.

As soon as the player steps in front of the Kinect sensor, he starts controlling the character in the game. Since this is a contest game, a scoring system is needed to rank the players.

Getting Started

The first step of our work was to prepare models of the scene and integrate them into the game engine.

1. Preparing models

1.1. Modeling

The main scene of the game is composed of several 3D models. For the surrounding environment, we used freely licensed models from specialized sites such as TF3DM.com and archive3d.com: a pool, water slides, chairs, umbrellas…

The rest of the models were created either directly in Unity or with Inkscape and Blender (for the cut-out walls).

We started by drawing the wall’s cut-out shape in 2D in Inkscape.

Then we vectorized the image (Path > Trace Bitmap) and exported it as SVG (Scalable Vector Graphics), an XML-based data format designed to describe sets of vector graphics.

We then imported the SVG file into Blender, rotated the object along the y axis, and changed the Extrude value to 0.002 under the Object Data menu. This gave us a 3D object shaped like a carved wall, but still without a texture.

1.2. UV Mapping

UV mapping rests on a simple idea: unfold the 3D model flat so that its 2D texture can be worked on more easily. Fortunately, Blender facilitates this process by projecting the faces of our object onto a flat surface; the result is the set of UVs. We then exported these UVs as a PNG image to serve as a texture in Unity.

The last step is to save the 3D object file (.blend) in Blender.

1.3. Integration in Unity

We imported all the 3D objects into the asset manager, created instances in the scene, and changed their properties in order to give the game’s environment a unique, personalized look.

2. Implementing the gameplay

Implementing and debugging scripts was our main work. After adding the graphics and models to the game, we coded their behavior according to our gameplay specifications in C# scripts.

Given the large number of tasks involved, the following description does not detail every aspect, only the most pertinent ones.

2.1. Rigging

Rigging is a process in 3D computer graphics that equips an object with an internal skeleton whose movement deforms the surrounding mesh. We used the Biped Body asset, which rigs the character based on the names of the objects that make up the character.

2.2. Integrating Kinect SDK

To integrate the Kinect development API into Unity, we used the Zigfu framework, which is based on the OpenNI/NITE libraries, and modified its source code to optimize body tracking and meet our gameplay needs. The ZigSkeleton script does the mapping between the skeleton tracked by the Kinect and the skeleton of the in-game character.
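The core of that mapping can be sketched as follows. This is a simplified, illustrative version of the idea, not the actual Zigfu API: the class name, fields, and the `UpdateJoint` method are hypothetical, and only the Unity calls (`Transform.rotation`, `Quaternion`) are real.

```csharp
using UnityEngine;

// Illustrative sketch of skeleton mapping: each tracked Kinect joint
// drives the rotation of the corresponding bone on the rigged character.
public class SkeletonMapper : MonoBehaviour
{
    // Character bones, assigned in the Inspector (names are ours).
    public Transform torso;
    public Transform leftElbow;
    public Transform rightElbow;

    // Hypothetical callback: called each frame with the orientation
    // that the sensor reports for a given joint.
    public void UpdateJoint(Transform bone, Quaternion sensorRotation)
    {
        if (bone != null)
        {
            // Apply the sensor orientation directly to the bone;
            // the real ZigSkeleton also handles offsets and smoothing.
            bone.rotation = sensorRotation;
        }
    }
}
```

In practice the mapping also has to compensate for the difference between the sensor’s coordinate frame and the character’s bind pose, which is part of what we adjusted in the Zigfu source.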

2.3. Collision detection

We need to accurately detect collisions between the character and the cut-out walls in order to compute the score.

Collision detection can be done with the methods OnTriggerEnter() and OnCollisionEnter(). These methods are called when an object with a Rigidbody component (used for physical behavior) has its collider run into another collider. Thus, every object (walls / character) has its own collision script, a collider, and a rigidbody.
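A minimal sketch of such a collision script is shown below. The Unity callbacks and components are real, but the "Player" tag and the commented-out scoring hook are assumptions, not our exact implementation:

```csharp
using UnityEngine;

// Minimal sketch of a wall's collision script. Assumes the wall's
// collider is marked "Is Trigger" and the character is tagged "Player"
// (both are our conventions here, not fixed by Unity).
public class WallCollision : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        // Called by Unity when another collider enters this trigger
        // volume; at least one of the two objects needs a Rigidbody.
        if (other.CompareTag("Player"))
        {
            Debug.Log("Player touched the wall");
            // e.g. notify a score manager here (hypothetical hook):
            // GameManager.Instance.AddPenalty();
        }
    }
}
```

OnTriggerEnter() fires without producing a physical response, which suits a scoring check; OnCollisionEnter() is the equivalent for non-trigger colliders that should physically collide.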

3. Rendering path and performance

For each camera, Unity offers three rendering paths: vertex-lit, forward, and deferred shading. Vertex-lit is the most limited: it computes lighting per vertex and supports no real-time shadows. Forward rendering allows dynamic shadows from a directional light source. Deferred shading allows dynamic shadows from many different light sources. Details are explained in the Unity documentation. To improve performance on older machines while keeping good visuals, we decided to use forward rendering in all scenes.
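The rendering path can be set per camera in the Inspector, or from a script. A small sketch of the scripted version (the component name is ours; `Camera.renderingPath` and `RenderingPath.Forward` are the actual Unity API):

```csharp
using UnityEngine;

// Forces the forward rendering path on the camera this script
// is attached to; equivalent to the per-camera Inspector setting.
[RequireComponent(typeof(Camera))]
public class ForceForwardRendering : MonoBehaviour
{
    void Awake()
    {
        GetComponent<Camera>().renderingPath = RenderingPath.Forward;
    }
}
```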

Conclusion

We explored many facets of the Unity game engine and made use of the different possibilities it offers to developers.

It was a rewarding and exciting experience. We are now perfectly comfortable with Unity and the development of dedicated scripts.

In terms of possible improvements, we plan to add more features such as a multiplayer mode (two or three players at the same time), improve the scoring system, and polish the game’s visuals with a view to marketing it.
