Using real JavaScript with Unity

Update notes

Last update and review: 2023-01-22.

  • The demo project has been updated and tested with Unity 2021.3.16f1 LTS
  • The demo project of this tutorial is available here.
  • The tutorial code has been updated with the necessary fixes

What

This tutorial shows how to use the Jint engine to write cross-platform games in real JavaScript (ES6) with Unity. It should not be confused with the UnityScript language, which is a .NET language with a JS-like syntax and has little in common with JavaScript.

Why

If you make relatively complicated games, like RPGs, you probably need a good scripting language to handle a complex game story, NPCs and object interactions, cutscenes, events, etc. While C# is good for engine logic, it's not designed for scripting: it simply has too much verbosity and boilerplate, and too little flexibility for the creative part of the game. You will probably also want a scripting language that is easily understandable by game scripters and modders, who are not necessarily programmers.

Many big projects choose Lua for this purpose. Lua is a great dynamic language with a lot of similarities to JavaScript, and you can also make Lua work with Unity. However, here I want to show how to use JavaScript, because it gives the following advantages:

  • It has a familiar C-like syntax, as opposed to the somewhat unusual syntax of Lua.
  • It has a huge developer community, and the npm registry with tons of open source packages, including game development ones, like dialogs, quests, pathfinding, and anything else.
  • It has a lot of well-established development tools, including IDEs, unit test libraries, linters, etc.

If you decide to use JavaScript in Unity, among many other things, you will be able to:

  • Write your game logic in a multi-paradigm, dynamically typed language with strong meta-programming concepts, where you can both create a beautiful architecture and unleash your creativity when coding the game world, without losing focus on technical stuff.
  • Keep your game script logic abstracted from the lower-level engine logic, which also lets you write automated tests for your story, dialogs, and interactions without even running the Unity engine.
  • Easily expose your game logic to the community, so fans can create mods and addons.
  • Make your game portable to engines other than Unity, if needed.
  • Access the npm registry with thousands of free JavaScript libraries and tools.

If you are a professional JavaScript developer, or if you just love JavaScript but want to make a Unity game, then this tutorial can be especially useful for you.

This tutorial will also be useful for non-Unity developers who just want to set up Webpack/Babel with Jint. In this case, jump directly here.

This tutorial will cover

1) Basic setup and usage

2) NPM project and ES6 setup

3) Some useful operations with Unity and JS

4) Build and automated tests

Prerequisites

  • You have some experience with JavaScript, C#, and Unity
  • You have some experience with command line tools and npm

This tutorial will use very simple MonoBehaviour code as examples, as its goal is to show how to use JavaScript in Unity, not how to create an engine architecture or a game.

Let's do it

To run a JavaScript engine from .NET we will use the Jint library. Jint is a JavaScript interpreter for .NET which provides full ECMA 5.1 compliance and can run on any .NET platform.

Creating a project and setting up Jint

1) Create a project in Unity

2) Get the Jint.dll from the NuGet package. For this, do the following:

(Note: This tutorial uses the latest stable version of Jint at the moment: 2.x. If you have performance issues in your game, you can also try the Jint 3.x prerelease version, which is reported to be faster. For this you will need to download both the Jint and Esprima dlls. See the comments for more details.)
  • Download the jint.2.11.58.nupkg package from NuGet
  • Rename jint.2.11.58.nupkg to jint.2.11.58.zip and unpack it
  • Take the Jint.dll from the lib/netstandard2.0 folder of the package

3) Make sure your project uses .NET Standard 2.0 in Edit -> Project Settings -> Player. This is recommended, as it is smaller and gives compatibility with all the platforms Unity supports.

pic1

NOTE: if you would rather use the .NET 4.x setting, then take the Jint.dll from the corresponding folder of the package from the previous step.

4) Create a folder Plugins in your Assets and drag Jint.dll there.

pic2

5) Let's create a C# MonoBehaviour called JavascriptRunner.cs and call some JavaScript from it:

using UnityEngine;
using Jint;
using System;

public class JavascriptRunner : MonoBehaviour
{
    private Engine engine;

    // Start is called before the first frame update
    void Start()
    {
      engine = new Engine();
      engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));

      engine.Execute(@"
        var myVariable = 108;
        log('Hello from Javascript! myVariable = '+myVariable);
      ");
    }
}

Here we create a new Engine object from Jint and run a very simple Hello World in JS. Now attach the JavascriptRunner to the MainCamera on the scene and press Play.

You will see the following output in the console:

pic3

Note how we make JavaScript call the Unity Debug.Log by proxying the call to log function in JavaScript:

engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));

This is a direct function binding, where you can call any C# function from JavaScript. There are also other ways to call C# code; let's see them.

Calling Unity C# code from JavaScript

There are several ways to bind C# objects to JavaScript. As shown above, we can easily bind C# functions to JS. For non-void functions that need to return a value, you can use a Func delegate. Change the code as follows and press Play:

void Start()
{
  engine = new Engine();
  engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));
  engine.SetValue("myFunc", 
    new Func<int, string>(number => "C# can see that you passed: "+number));

  engine.Execute(@"
    var responseFromCsharp = myFunc(108);
    log('Response from C#: '+responseFromCsharp);        
  ");
}

Now you can see on the Console:

Response from C#: C# can see that you passed: 108

We created a function that JavaScript can call to get a value from your C# API.

But Jint would not be so powerful if it didn't let you proxy a whole class from C# to JavaScript. That's very handy when you need to give the JS engine access to part of your API. Let's do it. Modify the code as follows and run it:

using UnityEngine;
using Jint;
using System;
using Jint.Runtime.Interop;

public class JavascriptRunner : MonoBehaviour
{
    private Engine engine;

    private class GameApi {
      public void ApiMethod1() {
        Debug.Log("Called api method 1");
      }

      public int ApiMethod2() {
        Debug.Log("Called api method 2");
        return 2;
      }
    }

    // Start is called before the first frame update
    void Start()
    {
      engine = new Engine();
      engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));

      engine.SetValue("GameApi", TypeReference.CreateTypeReference(engine, typeof(GameApi)));

      engine.Execute(@"
        var gameApi = new GameApi();
        gameApi.ApiMethod1();
        var result = gameApi.ApiMethod2();
        log(result);
      ");
    }
}

Notice that we added a GameApi class and proxied it to JavaScript. You can proxy any C# class this way, or even enums, which is very handy:

engine.SetValue("GameApi", TypeReference.CreateTypeReference(engine, typeof(GameApi)));
engine.SetValue("MyEnum", TypeReference.CreateTypeReference(engine, typeof(MyEnum)));

To use it in JavaScript, we instantiate it using the new operator:

var gameApi = new GameApi();
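
As a quick illustration of proxying an enum, here is a sketch with a hypothetical ItemType enum (not part of this tutorial's project); depending on Jint's string conversion, the log may show the member name or its numeric value:

// Hypothetical enum, declared inside JavascriptRunner for illustration.
private enum ItemType { Sword, Shield, Potion }

// In Start(), next to the other bindings:
engine.SetValue("ItemType", TypeReference.CreateTypeReference(engine, typeof(ItemType)));
engine.Execute(@"
  log('Picked item: ' + ItemType.Shield);
");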

Other than that, we can also proxy an existing instance of a C# class to exchange data between Unity and the JavaScript engine. Let's say we have a WorldModel object that holds some data, and we want to proxy it to JavaScript:

using UnityEngine;
using Jint;
using System;

public class JavascriptRunner : MonoBehaviour
{
    private Engine engine;

    private class WorldModel {
      public string PlayerName { get; set; } = "Alice";
      public int NumberOfDonuts { get; set; } = 2;

      public void Msg() {
        Debug.Log("This is a function");
      }
    }

    // Start is called before the first frame update
    void Start()
    {
      engine = new Engine();
      engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));

      var world = new WorldModel();
      engine.SetValue("world", world);
      Debug.Log($"{world.PlayerName} has {world.NumberOfDonuts} donuts");

      engine.Execute(@"
        log('Javascript can see that '+world.PlayerName+' has '+world.NumberOfDonuts+' donuts');
        world.Msg();
        world.NumberOfDonuts += 3;
      ");

      Debug.Log($"{world.PlayerName} has now {world.NumberOfDonuts} donuts. Thanks, JavaScript, for giving us some");
    }
}

Press Play and watch the fun in the Console. Here we have proxied an existing object to JavaScript. You can see that we can both read and write to the C# object from the JS side. This way you can easily expose shared data to your JS engine.

There are also several other ways of exposing C# code to JavaScript. You can even expose the whole CLR with all namespaces, though it's not recommended: you would rather expose only the API that your scripter or modder is supposed to call. If you need to know more about interoperability, read the Jint manual.
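
For completeness, here is a sketch of the full-CLR option mentioned above, using Jint's AllowClr setting (shown only for illustration; prefer a narrow, explicit API in a real game):

// Enable CLR access when creating the engine (illustrative sketch):
engine = new Engine(cfg => cfg.AllowClr());
engine.Execute(@"
  // With CLR access enabled, .NET namespaces are reachable from JS:
  var sb = new System.Text.StringBuilder();
  sb.Append('Hello from the CLR');
  log(sb.ToString());
");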

Loading the scripts from files

Of course, we will keep our JavaScript code in files, not hardcoded in C# like in the examples above. Let's do this, so we can later set up the whole JavaScript project for our game.

In your Unity project, create a folder named, for example, Game on the same level where Assets is. This will be the folder for our JavaScript project. It's good not to create this folder inside Assets, so Unity doesn't try to import the JavaScript files and create .meta files for them.

pic4

Let's then create a file named index.js and put it into this folder. This will be our main file, from which the game scripts start. You can of course name this file whatever you want, but I will use index.js in this tutorial. Let's put some code there.

function hello() {
  return "Hello from JS file!"
}
log(hello());

Let's modify JavascriptRunner.cs to load the code from the file. Then press Play to see how it works.

using UnityEngine;
using Jint;
using System;
using System.IO;

public class JavascriptRunner : MonoBehaviour
{
    private Engine engine;

    // Start is called before the first frame update
    void Start()
    {
      engine = new Engine();
      engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));
      engine.Execute(File.ReadAllText("Game/index.js"));
    }
}

As you can see, it's quite simple: we use the same engine.Execute method, but pass it the text loaded from the file.

Here it's important to understand that the SetValue and Execute calls we perform all add objects to the same JavaScript scope. It means that any code in index.js will have access to log or any other object we inject. A script in index.js will also have access to the results of any previous Execute command. For example:

engine.Execute(@"var myVar = 1");
engine.Execute(File.ReadAllText("Game/index.js"));

The code in index.js will be able to see the myVar variable. This is one of the simple ways to split your code into modules that can see each other, or to implement a sort of require function that dynamically loads another file into the scope. In the next parts of the tutorial I will show how we can use Webpack and standard import statements instead.
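
For example, a minimal host-side require could look like this (a sketch; the require name and the Game folder convention are choices of this tutorial, not something Jint provides):

// Needs System.IO. Executes another file in the same shared scope.
engine.SetValue("require", new Action<string>(fileName =>
    engine.Execute(File.ReadAllText(Path.Combine("Game", fileName)))));

After this, any script can call require('otherFile.js') to pull another file into the scope.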

You can also easily call the hello function in JavaScript and get its result in C# like this:

engine.Execute(File.ReadAllText("Game/index.js"));
engine.Execute("hello()");
var functionResult = engine.GetCompletionValue().AsString();
Debug.Log("C# got function result from Javascript: "+functionResult);

If you now press Play, then you will see the following on console:

C# got function result from Javascript: Hello from JS file!

Handle exceptions

Let's add code to handle exceptions that happen in JavaScript and show some useful info.

Modify your JavascriptRunner.cs like this.

using UnityEngine;
using Jint;
using System;
using System.IO;
using Jint.Runtime;
using System.Linq;

public class JavascriptRunner : MonoBehaviour
{
    private Engine engine;

    // Start is called before the first frame update
    void Start()
    {
      engine = new Engine();
      engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));
      Execute("Game/index.js");
    }

    private void Execute(string fileName) {
        var body = "";
        try {
          body = File.ReadAllText(fileName);
          engine.Execute(body);
        }
        catch(JavaScriptException ex) {
          var location = engine.GetLastSyntaxNode().Location.Start;
          var error = $"Jint runtime error {ex.Error} {fileName} (Line {location.Line}, Column {location.Column})\n{PrintBody(body)}";
          UnityEngine.Debug.LogError(error); 
        }
        catch (Exception ex) {
          throw new ApplicationException($"Error: {ex.Message} in {fileName}\n{PrintBody(body)}");
        }
    }

    private static string PrintBody(string body)
    {
      if (string.IsNullOrEmpty(body)) return "";
      string[] lines = body.Split(new[] { "\r\n", "\r", "\n" }, StringSplitOptions.None);
      return string.Join("\n", Enumerable.Range(0, lines.Length).Select(i => $"{i+1:D3} {lines[i]}"));
    }
}

We added a private Execute function that executes a script and handles JavaScriptException for runtime errors, and the general Exception for parsing and IO errors. It prints the line and column information, as well as the code body with line numbers. Try it by adding some wrong code or an unknown variable to index.js and see how it works:

function hello() {
  return "Hello from JS file! "+someVar;
}
log(hello());

On console you will see:

Jint runtime error ReferenceError: someVar is not defined Game/index.js (Line 3, Column 32)
001 
002 function hello() {
003   return "Hello from JS file! "+someVar;
004 }
005 
006 log(hello());
UnityEngine.Debug:LogError(Object)
JavascriptRunner:Execute(String) (at Assets/JavascriptRunner.cs:29)
JavascriptRunner:Start() (at Assets/JavascriptRunner.cs:17)

Now you have a fully working JavaScript ES5 project with Unity. In the next chapters we will look at more advanced topics: how to use JavaScript ES6 with Jint, how to set up npm, unit tests, etc.

Setting up Webpack and Babel to enable ES6 and npm packages support

In this part of the tutorial we will add a basic npm project structure (similar to React or Vue.js). Here we will do 3 things:

  • enable npm package support, so you can use any npm package in your code
  • add Babel, so you can use all the advantages of ES6 JavaScript, which will be converted to ES5, the version fully supported by Jint at the moment
  • add Webpack to support modules and pack your code into a single bundle file

For this part of the tutorial you will need a command line. I will show examples with the Terminal on Mac, but on Windows you can use WSL or PowerShell. You will also need to install Node.js and npm before you start. If you need help with that, see here.

In the command line, cd to the Game folder of your project, where index.js is located: pic5

Now run

npm init -y

pic6

This will initialize an empty npm project in your folder by creating a package.json file. Open this file in a text editor, delete its contents, and put the following into it:

{
  "name": "my-cool-game",
  "version": "0.0.1",
  "author": "",
  "scripts": {
    "build": "webpack --mode production"
  }
}

We can specify the name of the project, version, author, and other fields here. The important part, though, is the scripts section, where we have our Webpack build target that will pack our ES6 code and convert it to ES5. Also note that the project name must be dash-separated.

To make it work, we need to install Webpack and Babel. Run the following 2 commands from the Game folder:

npm i webpack webpack-cli --save-dev
npm i @babel/core babel-loader @babel/preset-env --save-dev

After the installation has finished, your package.json will look like this:

{
  "name": "my-cool-game",
  "version": "0.0.1",
  "author": "",
  "scripts": {
    "build": "webpack --mode production"
  },
  "devDependencies": {
    "@babel/core": "^7.9.6",
    "@babel/preset-env": "^7.9.6",
    "babel-loader": "^8.1.0",
    "webpack": "^4.43.0",
    "webpack-cli": "^3.3.11"
  }
}

The versions of the packages can differ, but if they are there, it means that Babel and Webpack are successfully installed. You will also see that a package-lock.json file and a node_modules folder have been created. You don't need to care about those, as they are managed by npm. However, if you are using version control, ignore node_modules, because it contains all the downloaded npm packages and should not be versioned. You can delete the node_modules folder at any time and restore it again by running npm install.
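
If you use Git, a one-line .gitignore file in the Game folder is enough for this:

node_modules/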

The next step is to enable Babel, which will transpile the JavaScript code to ES5. Create a file named .babelrc in your Game folder and put the following inside:

{
  "presets": ["@babel/preset-env"]
}

And the last step is to configure Webpack. For this create a file named webpack.config.js in your Game folder and put the following inside:

module.exports = env => {
  return {
    entry: {
        app: './index.js'
    },
    module: {
        rules: [
          { test: /\.js$/, loader: 'babel-loader' }
        ]
    },
    optimization: {
        minimize: env != 'dev'
    }
  };
};

This tells Webpack to read your index.js and convert it to the bundle. You should now have the following items in the Game folder:

.babelrc
index.js
node_modules
package-lock.json
package.json
webpack.config.js

Let's now try how the conversion works. Put some ES6 code in our index.js:

const hello = () => {
  return "Hello from JS ES6 file!";
};
log(hello());

This contains const and an arrow function, which are ES6-only features. If you try to run your Unity project, you will see the following error:

ApplicationException: Error: Line 2: Unexpected token ) in Game/index.js
001 
002 const hello = () => {
003   return "Hello from JS ES6 file!";
004 };
005 
006 log(hello());
JavascriptRunner.Execute (System.String fileName) (at Assets/JavascriptRunner.cs:32)
JavascriptRunner.Start () (at Assets/JavascriptRunner.cs:17)

That's because the current version of Jint supports only ES5. Once Jint adds full ES6 support, the conversion step might no longer be needed. Let's now run Webpack to convert and bundle our code.

In command line run the following

npm run build

After the command runs successfully, you should see a new dist folder in the Game folder. That's the folder where Webpack puts the "compiled" version of the JavaScript. If you now open the Game/dist/app.js file, you will see minified JavaScript text. This is the file Jint can open, as it contains only ES5-compatible code.

Let's now change our Start method in JavascriptRunner.cs to open it:

void Start()
{
  engine = new Engine();
  engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));
  Execute("Game/dist/app.js");
}

Now press Play and see the output in the Unity console, produced by what was originally ES6 code.

Hello from JS ES6 file!

Now we have set up a full npm-powered project, where you can also add any npm package!

Note that every time you change your JavaScript, and before you test it in Unity, you will have to run npm run build (or npm run dev, as shown next) for your ES6 scripts to compile to dist/app.js.
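
As an aside, if rebuilding by hand becomes tedious, Webpack's --watch flag rebuilds the bundle automatically on every file change. A hypothetical watch target (not used further in this tutorial) could be added to the scripts section like this:

  "scripts": {
    "build": "webpack --mode production",
    "watch": "webpack --mode production --watch"
  }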

Non-minified bundle setup and modules

Let's have a look at some handy Webpack features. As you noticed, app.js contains minified JavaScript. It is small, which is good for production, but for debugging errors, where you want to see the code line by line, it's not very useful. For this we can tell Webpack to disable the minification. Let's make another npm command that will produce a similar app.js, but without minifying it.

Add dev target to your package.json in "scripts" section:

  "scripts": {
    "build": "webpack --mode production",
    "dev": "webpack --mode production --env dev"
  },

This will make Webpack produce a non-minified script if you run the dev target instead of build. Try it. In the command line, run:

npm run dev

Now look at the contents of your dist/app.js. It's not minified anymore! You can press Play in Unity and make sure it still works.

Let's now see how to split your JavaScript code into modules and use them. Create another file, named MyModule.js, in the Game folder with the following:

export const myFunction1 = () => {
  return "This is function 1 from my module";
}

export const myFunction2 = () => {
  return "This is function 2 from my module";
}

We have created a module that exports 2 functions. Now, in our index.js or any other JavaScript file, we can import those functions. Replace the code in index.js with the following:

import { myFunction1, myFunction2 } from './MyModule';

const hello = () => {
  return "Hello from JS ES6 file!";
};

log(hello());
log(myFunction1());
log(myFunction2());

Now run npm run dev to build the bundle, and then press Play in Unity. You will see the output from the module functions. This way you can split your code into files very easily. Of course, you can also put your modules in different subfolders. You can read more about the ES6 module system here.

Webpack, and global variables

When Webpack creates a bundle, it runs in a closed function scope. That means you cannot create variables in the global scope from that scope. So, if you execute several bundles from your Engine, they cannot communicate with each other, and C# cannot call the JavaScript functions from your bundles' scope either. For when you need to write to the global scope from your module, let me show an easy way to do it with Jint.

Modify the JavascriptRunner.cs to have code like this in Start:

    void Start()
    {
      engine = new Engine();
      engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));
      engine.Execute("var window = this");
      Execute("Game/dist/app.js");
    }

Before running our bundle, we have injected a window variable that references the global scope. Now you can do the following. Add this code to your index.js:

window.thisIsGlobalVariable = 108;

log("I can see global variable: "+thisIsGlobalVariable);

Build the code with npm run dev, press Play, and see the result. Variables in the global context are accessible anywhere in your JavaScript code. Use them sparingly: only when you really need them. Of course, instead of window you can use global or any other variable name to hold the reference to the global scope, but keeping the window name is highly recommended, as it's standard and some libraries will expect it.

Let's now try to call the function hello() from the C# side.

    void Start()
    {
      engine = new Engine();
      engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));
      engine.Execute("var window = this");
      Execute("Game/dist/app.js");

      engine.Execute("hello()");
      Debug.Log("C# got result from function: "+engine.GetCompletionValue());
    }

If you press Play now, you will see the following error in the Unity console:

JavaScriptException: hello is not defined

That's because the generated code in app.js is placed in a closed function scope. So, to make a function accessible from Jint, we need to make it global. Open your index.js and add the following line after the hello() function:

const hello = () => {
  return "Hello from JS ES6 file!";
};
window.hello = hello;

Now run npm run dev and press Play. And voilà:

C# got result from function: Hello from JS ES6 file!

This way you can decide which JS functions you want to expose to your C# engine.

Saving javascript state of the game

Since your gameplay logic is going to live in JavaScript, all the state, like player parameters, inventory, quest states, etc., will be contained there. When the game needs to be saved and loaded, the state must somehow be passed to the Unity C# code so that it can save/load it. There are many ways to organize state in JavaScript. Let's take a look at a simple and recommended one, where all the game state that is intended to be saved is contained in a single object. The other JavaScript game objects can read this state one way or another, and modify it when needed.

Write the following to index.js

var state = {
  name: "Alice",
  level: 2,
};

const printState = () => {
  log(`JS state name: ${state.name}; level: ${state.level}`);
};
printState();

If you build it and press Play, you will see the state of your game in the Unity console. Now let's add a global function to index.js, called getGameState, that will pass this state to Unity in JSON format.

window.getGameState = () => {
  return JSON.stringify(state);
};

Now let's add a button to our Unity project that will save the game state. In JavascriptRunner.cs add the following function:

    private void OnGUI() {
      if (GUILayout.Button("Save game")) {
        string jsGameState = engine.Execute("getGameState()").GetCompletionValue().AsString();
        File.WriteAllText("savegame.json", jsGameState);
        Debug.Log("Game saved");
      }
    }

You can see that we can also get the result of the called JS function in one line, because Jint returns the Engine instance from the Execute() call. This is very handy. Compile the JS with npm run dev and press Play. Now you will see a Save game button on the screen. Press it, and then have a look at your Unity project folder. There will be a file named savegame.json

pic7

As you can see, the contents of this file represent the state object from JavaScript.
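
With the initial state from index.js, the file will contain:

{"name":"Alice","level":2}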

Now, let's modify our savegame.json. Open this file in a text editor and write:

{"name":"Alice","level":80}

So, we cheated and gave Alice level 80. Now we can load the game and see our changes. Let's create a setGameState function in index.js

window.setGameState = (stateString) => {
  state = JSON.parse(stateString);
  printState();
};

This function will update the state object from the passed JSON string and print it. Let's add a Load game button to the OnGUI function in JavascriptRunner.cs:

  if (GUILayout.Button("Load game")) {
    string stateString = File.ReadAllText("savegame.json");
    engine.Invoke("setGameState", stateString);
  }

This will read our saved game and pass it to JavaScript by calling setGameState. Notice that we use Invoke here instead of Execute. Invoke is a Jint method that executes a JavaScript function with the given arguments. Since a JSON string can contain quotes and line breaks, we cannot simply concatenate it into an Execute call.

Now build and run the game as usual, then press Load game button. You will see the following on console:

JS state name: Alice; level: 80

setTimeout and Unity coroutines

Let's now see something more interesting. Jint doesn't provide a setTimeout function, leaving the implementation to the client. By default, all the calls that you make to your JavaScript code, and everything that Jint calls back in C#, happen in the same thread; in our case it's the main Unity thread. Thus it's up to you how you want to implement the setTimeout and promise behavior, and how you want to manage threads and synchronization.

In this section I will show how to implement setTimeout, and some promises, using the Unity coroutine mechanism. This mechanism lets you schedule parallel execution on the Unity main thread without having to deal with multi-threading. It's very powerful for handling game animations, sequences of events, etc.

Let's start by trying to call setTimeout in index.js to do some game action, for example, change a label in our game UI. In your index.js write the following:

setText("This is a text");
setTimeout(() => setText("And now it is changed"), 5000);

In JavascriptRunner.cs, let's add code that outputs the text label to the UI, so the beginning of this file looks like this:

public class JavascriptRunner : MonoBehaviour
{
    private Engine engine;

    private string labelText;

    // Start is called before the first frame update
    void Start()
    {
      engine = new Engine();
      engine.SetValue("log", new Action<object>(msg => Debug.Log(msg)));
      engine.SetValue("setText", new Action<string>(text => this.labelText = text));
      engine.Execute("var window = this");
      Execute("Game/dist/app.js");
    }

    private void OnGUI() {
      GUILayout.Label(labelText);
      ...

Here we added a private string labelText; and a function that can set it from JS: engine.SetValue("setText", new Action<string>(text => this.labelText = text)); Finally, we added a Label to display the text in the UI: GUILayout.Label(labelText);

We expect the text "This is a text" to appear first, and then, in 5 seconds, to be changed to "And now it is changed". Let's check if that is so. Build the scripts using npm run dev and press Play. You will see something like this: pic8

The first part of the text is set, but then there is an error in the console. This is expected, as Jint has no setTimeout implementation. Let's make a simple version of it. In JavascriptRunner.cs, in the Start() function, before we execute app.js, add the following:

      engine.SetValue("setTimeout", new Action<Delegate, int>((callback, interval) => {
        StartCoroutine(TimeoutCoroutine(callback, interval));
      }));

Now, add the coroutine function to JavascriptRunner class:

    private IEnumerator TimeoutCoroutine(Delegate callback, int intervalMilliseconds) {
      yield return new WaitForSeconds(intervalMilliseconds / 1000.0f);
      callback.DynamicInvoke(JsValue.Undefined, new[] { JsValue.Undefined });
    }

This coroutine does the following 2 things:

  • Waits for the given timeout (note that we divide by 1000, as Unity's WaitForSeconds expects time in seconds)
  • Dynamically executes the callback that the JavaScript code passed to the setTimeout function.

Also, for this code to compile, you will need to add 2 using directives to JavascriptRunner.cs:

using Jint.Native;
using System.Collections;

Now, build using npm run dev and press Play. See how the text changes after 5 seconds. We have just made a setTimeout function work. If you need to, you can likewise implement clearTimeout, setInterval, and any other API functions, as sketched below. You can also expose functions that call any other Unity coroutine, for example, triggering an animation from your JavaScript.
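
For example, a setInterval sketch following the same pattern (a production version would also need to return a handle so a clearInterval counterpart could stop the coroutine):

// Register next to setTimeout in Start():
engine.SetValue("setInterval", new Action<Delegate, int>((callback, interval) => {
  StartCoroutine(IntervalCoroutine(callback, interval));
}));

// The coroutine re-invokes the JS callback until the MonoBehaviour is destroyed:
private IEnumerator IntervalCoroutine(Delegate callback, int intervalMilliseconds) {
  while (true) {
    yield return new WaitForSeconds(intervalMilliseconds / 1000.0f);
    callback.DynamicInvoke(JsValue.Undefined, new[] { JsValue.Undefined });
  }
}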

Using promises

setTimeout is not always the most convenient function, as it uses a callback. To avoid breaking the code flow, it's nice to use promises. Let's implement a promise that waits for some time.

Let's remove the 2 lines that call setText and setTimeout from index.js and add the following logic instead:

const wait = (milliseconds) => new Promise(resolve => {
  setTimeout(() => resolve(), milliseconds);
});

const asyncFunction = async () => {
  setText("This is a text");
  await wait(5000);
  setText("And now it's changed after await");
};

asyncFunction();

Here we added a promise that uses our setTimeout to wait for the given number of milliseconds, and an asyncFunction that sets the initial text, awaits 5 seconds, and changes the text. This is much more elegant, as it lets us write asynchronous logic without nested callbacks.

However, to make it work, we need to install an extension to Babel that emulates Promises, generators, and other ES6 APIs, since Jint doesn't support them yet.

Open your command line in the Game folder and install the polyfill:

npm install --save @babel/polyfill

Now open your .babelrc file and change it, so the content is like this:

{
  "presets": [
    [
      "@babel/preset-env",
      {
        "useBuiltIns": "usage",
        "corejs": 2
      }
    ]
  ]
}

This basically tells Babel to emulate Promises and other APIs provided by the polyfill package, as far as the JavaScript code requires it.

Now run npm run dev and press Play. Watch how the text changes after 5 seconds, driven by the Promise.

Including JavaScript files in the built app

When we build the game, it needs to contain our JavaScript bundle, so the app has access to it at runtime. Unity has a good cross-platform way to do this through the built-in Resources system. Any file put in the Assets/Resources folder will be included in the build.

Let's change our webpack.config.js so it writes the output into Assets/Resources instead of dist. We will also use the .txt extension instead of .js, so that Unity can easily load the file as a text asset.

const path = require('path');
module.exports = env => {
  return {
    entry: {
        app: './index.js'
    },
    module: {
        rules: [
          { test: /\.js$/, loader: 'babel-loader' }
        ]
    },
    output: {
      filename: 'app.txt',
      path: path.resolve(__dirname, '../Assets/Resources')
    },
    optimization: {
        minimize: env != 'dev'
    }
  };
};

Now run npm run dev, and see that a Resources folder has appeared, containing our bundle: pic9

Let's now make changes to JavascriptRunner.cs to load our script from Resources. In the Execute method, replace the line body = File.ReadAllText(fileName); with

body = Resources.Load<TextAsset>(fileName).text;

Then in the Start function replace the line Execute("Game/dist/app.js"); with

Execute("app");

That's because Unity's Resources.Load method expects the filename only, without the extension.

Now press Play and check that the application works. After that, let's make a build. In the command line run:

npm run build

This will make a minified version of app.txt, which is much smaller and good for production. Now build the project in Unity for your platform. Run the resulting application and check that it works.

Setting up unit tests for the game logic in JavaScript

Unit tests, and other forms of automated tests, can keep the level of bugs low and the quality of your game project high. This is especially important for complex story logic. You can write tests that check individual parts of the code, but also integration tests that simulate a whole game level and test the actions a player can do in most situations. It's recommended to write tests before, or along with, adding new features and story parts to the game.

If you are interested in automated tests for your game logic, let me show how to easily make one. There are quite a few good test frameworks for JavaScript. In this tutorial I will use a very popular one, called Jest.

Open the command line in the Game folder and add the jest package:

npm install --save-dev jest

Let's test the logic of asyncFunction:

const asyncFunction = async () => {
  setText("This is a text");
  await wait(5000);
  setText("And now it's changed after await");
};

We will test that it first calls setText with some text, and then calls it a second time with a different text. This is good for the tutorial, as it also demonstrates how we can mock functions for unit tests. Before we start testing, we need to move asyncFunction into a module that exports it. Let's move it, together with the wait function, out of index.js into MyModule.js:

const wait = (milliseconds) => new Promise(resolve => {
  setTimeout(() => resolve(), milliseconds);
});

export const asyncFunction = async () => {
  setText("This is a text");
  await wait(5000);
  setText("And now it's changed after await");
};

In index.js, keep only the import statement and the call to the function:

import { asyncFunction } from './MyModule';

...

asyncFunction();

Run npm run dev and press Play to check that everything is done right and still works.

Now, in the same place where you have MyModule.js, create a file named MyModule.test.js. There is a convention in the JavaScript world to put the test file next to the tested one; it's very handy. Put the following contents into MyModule.test.js:

import { asyncFunction } from './MyModule';

test('sets initial text', () => {
  // arrange
  window.setText = jest.fn();

  // act
  asyncFunction();

  // assert
  expect(setText.mock.calls[0][0]).toBe("This is a text");
});

test('sets second text', async () => {
  // arrange
  window.setText = jest.fn();

  // act
  await asyncFunction();

  // assert
  expect(setText.mock.calls[1][0]).toBe("And now it's changed after await");
});

Here we made 2 tests that mock the setText function and check that it's called with the given argument. setText.mock.calls[0][0] means: take the first call of the function, and its first argument. This way you can easily check the arguments and results of called functions. Jest is very simple and powerful at the same time. You can read more about its features here.

Now let's run our tests. We need to add a "test" target to package.json:

  "scripts": {
    "build": "webpack --mode production",
    "dev": "webpack --mode production --env dev",
    "test": "jest"
  },

Now, in command line in Game folder run:

npm run test

After the tests have finished, you will see the following result:

 PASS  ./MyModule.test.js (6.151 s)
  ✓ sets initial text (4 ms)
  ✓ sets second text (5002 ms)

Test Suites: 1 passed, 1 total
Tests:       2 passed, 2 total
Snapshots:   0 total
Time:        6.912 s
Ran all test suites.

This concludes the tutorial for now. Enjoy writing your games in Unity and JavaScript! In case you need it, you can find the full code of the tutorial project here. In the git log you will see different commits that match the tutorial's different stages.

Meet the IoC container

The previous article gave a general overview of DI. Now let's look at the basic problems and principles of an IoC container.

MinDI

Please refer to the GitHub repository to use the framework itself: MinDI on GitHub

Introduction

Here are some common questions that arise with an IoC container we want to create:

  • How will we access the container itself throughout the application? Will it be one singleton? Sounds a bit crappy, and rather similar to a Service Locator. Will there be only a single place where we create all the classes and thus have access to the container? Then how can we easily create new objects and inject dependencies at runtime?
  • How are we going to limit access to different dependencies? Our container is a universal factory that gives us access to every interface of the application. Can we limit it, so different classes can only access what is defined in their dependency contract, and we avoid implicit dependencies?
  • How are we going to inject our dependencies? Using constructors? Using properties? How do we handle complex graphs with cross-references?
  • How do we define all our dependencies in one place of the application and keep those definitions refactoring-friendly?
  • Will we support multiple layers, where we can redefine some of the dependencies for some parts of the application?

Different IoC/DI solutions approach those problems differently. Here I would like to introduce how it's solved in MinDI and show some examples. Please note that this article is not a tutorial, but rather a methodological description of the MinDI library. Tutorials will be posted later.

MinDI is an IoC/DI framework that initially started as a project to extend the MinIOC framework with some syntactic sugar, but quickly turned into its own project with much more advanced features and ideology.

Dependency Injection

In languages such as C# or Java, dependencies can be resolved in several ways: passed in the constructor, assigned to fields/properties of the object, or passed into a method of the object. MinDI uses reflection to make such dependency injection automatic.

Two ways of automatic dependency injection are supported:

  • Property based
  • Method based

The property-based way is recommended, though the method-based one is also supported (basically because it was originally supported in MinIOC). Automatic constructor dependency injection is dropped. The reason is that constructor dependency injection doesn't allow complex object graphs with circular dependencies. Even though circular dependencies are not good most of the time, they are quite usable in some data structures (graphs, trees, DOM, etc.).

There is also a usability reason why property-based DI is recommended: it makes it easy to specify the dependency contract and to refactor the set of dependencies of a class. Using constructors or methods, this becomes quite bulky.

So, here is an example of class Earth that has dependencies on ISun and IMoon interfaces:

public class Earth : ContextObject, IEarth {
    [Injection] public ISun sun {get; set;}
    [Injection] public IMoon moon {get; set;}
}

In this simple example, the Injection attribute says that those dependencies will be resolved automatically when an instance of Earth is created. An exception will occur if those dependencies cannot be resolved.

As we see, we depend on interfaces, not on the concrete implementations of Sun and Moon. That is an implementation of the big D principle (dependency inversion): we depend upon abstractions, and the abstractions are resolved by the IoC framework to concrete implementations. That means we can easily configure the concrete implementations of our interfaces in a single place of the application, without changing any dependent entities. This also encourages the programmer to follow the big L principle (Liskov substitution): we write classes that don't know anything about the concrete implementations they use, and they should still work if we exchange the implementations in the configuration of the application.

Please also note that the dependencies are declared public. This is not required, but recommended, to make the class easily unit-testable: in a unit test we might want to create just an instance of the Earth class and inject/mock the sun and moon properties manually. As we always depend on abstractions in our application, one should not worry about making those properties inaccessible to client code: the client can only access the IEarth interface, not the Earth instance directly, and the IEarth interface limits access to the properties in any necessary way (they might be not exposed, exposed for get only, or exposed for both get and set). Using an interface-based approach in your programs is a very important principle that lets us depend on abstractions only and easily substitute the concrete implementations when needed. That makes the code very flexible and easily refactorable, and encourages SOLID principles.
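
For illustration, the IEarth interface for the class above might expose the dependencies read-only (a sketch; the actual member set is a design decision):

// The class keeps public setters for injection and tests,
// while clients coded against IEarth can only read the properties.
public interface IEarth {
    ISun sun { get; }
    IMoon moon { get; }
}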

Even though MinDI doesn't require an interface-based-only approach, it is highly recommended for any program. Interfaces in C# exist exactly for this; the issue with the C# language is that it gives us interfaces but doesn't provide an easy way of using them without extra pain. With a DI framework like MinDI, the interface-based approach turns into an easy and pleasant walk.

So, to summarize this very important principle: we should never depend on a class anywhere in the code (unless it's a pure data class or structure). We should always depend on interfaces.

Let's now see how and where we specify that ISun should resolve to an instance of class Sun and IMoon to an instance of class Moon.

IoC container and context-oriented approach

The IoC container, or the Context, as it's called in MinDI, is basically a dictionary that has (at minimum) an interface type as the key, and a factory that specifies how to create the object for this interface as the value. An additional feature of the container is to control the lifetime of the objects. An even more advanced feature is to provide a multi-layer context for dependency injection.
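
To make the idea concrete, here is a toy illustration of such a dictionary (this is not MinDI's actual code, just the essence of the pattern):

// A map from interface type to a factory that creates the implementation.
// Needs System and System.Collections.Generic.
var bindings = new Dictionary<Type, Func<object>>();
bindings[typeof(ISun)] = () => new Sun();
bindings[typeof(IMoon)] = () => new Moon();

// Resolving looks up the factory for the requested interface and invokes it.
object Resolve(Type type) => bindings[type]();
var sun = (ISun)Resolve(typeof(ISun));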

Let's see a simple example:

public class MyContextInitializer : IApplicationContextInitializer {
    public void Initialize(IDIContext context) {
        context.m().Bind<ISun>(() => new Sun());
        context.m().Bind<IMoon>(() => new Moon());
    }
}

Here we defined 2 bindings: each injection of ISun resolves to a new instance of the Sun class, and the same for Moon. Now whenever a new instance of Earth is created, the injection of ISun will be resolved to new Sun(), and so on with IMoon. Of course, for this to work, the Earth class itself should be resolved from the same dependency injection container. When any class is created from the context, its dependencies are fulfilled automatically from the same context. So the IoC container is the context of possible dependencies. Each class provides a set of [Injection] attributes, which in MinDI is called the contract on dependencies. This way we can easily see exactly which dependencies this or that class uses. Having an explicit contract is very beneficial when refactoring and analyzing the code, trying to minimize the number of entities each class depends on.

As we can see, if we want ISun to resolve to some MySuperSun instance instead of Sun, it's very easy to change this in only one place of the application: the context initializer. All the objects that depend on ISun will now use another class as the implementor:

context.m().Bind<ISun>(() => new MySuperSun());

Let's now see how we can access the context in the application to resolve our instances, and what special usage and philosophical meaning the context has.

3 levels of the application

Unlike some other DI frameworks that use XML to define the dependencies, MinDI uses a lambda syntax. That keeps the code easily refactorable: if we rename a class or an interface, the change is automatically reflected in the context initializers. Another benefit of lambda factories is that instantiating objects uses the new operator, which is much faster than the reflection that some of the popular DI frameworks use.

Let's now talk a little about access to our context in the application. Unlike a class, which strictly has access only to its own dependencies, the context is a universal factory, which can be used to resolve any of the interfaces used in the application. To obtain any concrete instance directly using the context, it's enough to call the following:

var sun = context.Resolve<ISun>();

This code will find the corresponding factory in the context and create an object that implements the ISun interface. If user classes had direct access to the context, it would create problems: we could suddenly resolve any interface anywhere in the code and thus create implicit dependencies, bypassing the contract on dependencies. It would become impossible to easily say which dependencies are used in a class. So, to avoid this problem, MinDI doesn't allow direct access to the context for the user-level classes.

The application in MinDI is conditionally divided into 3 levels of access:

  1. The context initialization level. This is the place where we initialize the context, like MyContextInitializer in the example above. Here we don't put any application logic, but only define which interface is resolved by which class; we also define the lifetime and some other, more advanced things in the scope of the IoC container.
  2. The user level. This is where all the application classes function. On this level we have no access to the context, but we have dependency contracts in the classes, so all the dependencies are resolved automagically.
  3. The open context level, or factory level. These are special classes that implement different creational patterns, like factories and builders. Such classes have access to the context on one side, and are used by the user-level classes on the other side. They should not contain any logic besides building other objects. Usually each factory or builder has a factory contract that limits exactly which types of objects it can build.

To demonstrate how factories work in MinDI, let's see a simple example. Let's say we want the Earth class to dynamically create some plants, using the IPlant interface. It can create many instances of IPlant, and it knows nothing about which concrete class will be used for the IPlant interface, as that should be defined only on the context initialization level.

So, somewhere in MyContextInitializer we define:

context.m().Bind<IPlant>(() => new WeedPlant());

Now in our Earth class we want some code that spawns 3 plants:

public class Earth: ContextObject, IEarth {
    ...
    [Injection] public IDIFactory<IPlant> plantFactory {get; set;}
    ...

    void CreatePlants() {
        var plant1 = plantFactory.Create();
        var plant2 = plantFactory.Create();
        var plant3 = plantFactory.Create();
        ...
    }
    ...
}

We use a factory instead of creating the WeedPlant directly with the new operator, which would make our Earth class tightly coupled to the WeedPlant class. That would violate the big D, making every instance of Earth depend on a concrete instance of WeedPlant. By using the IDIFactory interface provided by MinDI, we easily solve this problem. Now the Earth class knows that it will create some instances of IPlant, and knows nothing about which exact IPlant implementation is used.

The way we declared IDIFactory is called in MinDI a contract on creation. Just as the contract on dependencies explicitly defines which abstractions this class depends on, the contract on creation defines which entities this class can create. For each type of entity the class can potentially create, we add one more IDIFactory injection.

Construction parameters (e.g. the height of the plant) can also be passed to the factory in several ways in MinDI. This will be shown in the tutorial, where the more philosophical aspects of passing parameters when creating an object through an abstraction will also be discussed.

Back to the levels of access: our Earth class is an entity that functions on the user level in MinDI. It has no direct access to the context, but it has explicit contracts on dependencies and creation. The Earth class, as you may have noticed, inherits from ContextObject. It's a special object that makes auto-injection of dependencies possible on the user level. In fact, this object holds the context reference as a private field, so it works a bit like the subconscious: it does the work behind the scenes, but is not accessible from the user level.

The philosophy of context-oriented DI, and the multi-layered context

Making more analogies with the human mind: the user level is a bit like our thoughts. You think "plant", and you immediately imagine some sort of plant; a concrete visual image appears immediately on your inner screen. In this analogy the word "plant" is an abstraction, an interface. It openly exists on the user level, and you are directly aware of it. The image of the plant is the concrete implementation of this interface. Different people will have different images when they think the same word "plant". Which exact image you get when you think this word is the result of your individual associations, which sit in your subconscious. You cannot know it until you think or say "plant", but then it appears immediately. The associations are formed in the mind as a result of life experience, and this level is not directly accessible to the "user", the regular thinking mind. Each person has a different life experience and a different context. In the same way, the context initialization level in MinDI is an associative array that is configured in a single place of the application and is not directly accessible to the user-level classes. However, as soon as we inject IPlant somewhere, it's immediately resolved to the concrete implementation (WeedPlant in our example).

If you want an analogy for the factory level, it's more like our imagination, the ability to think in abstractions. If you are in a room and you have a plant on your table, it's like an [Injection], something that is already there. However, you are able to think about more plants that don't exist here. This is like an abstract factory, which can create many instances of IPlant.

Let's look at another interesting feature of our mind: the subconscious level works on a context-oriented basis. As a very simple example, if I ask you, "show me a plant", you will point to the plant that stands on your table. But if you don't have any, you will point me to the window, where e.g. we can see a bush outside. This is an example of multiple levels of context: the context of the room overrides the more global context of the world outside. You will likely first search for the object in the room context, and only if it's not found, search in the more global context. Of course, our mind works in a much more complicated way; we have many dynamic context layers that also have multiple cross-references, but this simple example helps to understand the context-oriented approach used in MinDI: the dependencies are resolved from the current context of the entity; if not found, they are looked up in the parent context, and so on, along the chain of prototypes.

That means that every Context in MinDI can have a reference to a parent context. That allows creating layers in the application where some of the dependencies are "overridden" while the others are still used from the global context. It works very similarly to the reference prototype paradigm (like e.g. in JavaScript or Lua).

In a standard MinDI application there are 3 main layers of context (though you can create any number of them):

  1. The global layer: here we usually define the global dependencies that are common for a whole family of applications. These are often the dependencies exported from shared libraries.
  2. The application layer. These are the dependencies that are particular to the concrete application.
  3. The custom layer. These are dependencies that work only for a part of the application, and whose lifetime is shorter than the lifetime of the application. E.g. we create a window that overrides the behavior of the keyboard event receiver with its own. Only this window uses different logic for the event receiver, and it doesn't affect the other parts of the application.

There can also be more layers; for example, in ASP.NET we can have a request-level layer of dependencies that are particular to each web request. In a Unity3D application we have a scene layer of dependencies that exists for each particular scene and is not valid for the others.

Let's see a short practical example. In a dll we have a Logger class that implements the ILogger interface. This logger uses an ILogMessageWriter to specify how exactly the message is written.

public interface ILogger {
    void LogDebug(string message);
    void LogError(string message);
}

internal class Logger: ContextObject, ILogger {
    [Injection] public ILogMessageWriter writer {get; set;}

    public void LogDebug(string message) {
        writer.Write("[DEBUG] "+message);
    }

    public void LogError(string message) {
        writer.Write("[ERROR] "+message);
    }
}

The standard LogMessageWriter our library provides will use the Console.WriteLine method:

public interface ILogMessageWriter {
    void Write(string message);
}

internal class ConsoleLogMessageWriter : ILogMessageWriter {
    public void Write(string message) {
        Console.WriteLine(message);
    }
}

We can bind the classes our library provides in the global context initializer of the library:

public class LoggerLibraryGlobalContextInitializer: IGlobalContextInitializer {
    public void Initialize(IDIContext context) {
        context.m().Bind<ILogger>(() => new Logger());
        context.m().Bind<ILogMessageWriter>(() => new ConsoleLogMessageWriter());
    }
}

Now, if we import the dll or the code with the logger into our application, the standard bindings become available: wherever we inject ILogger in our application classes using the [Injection] attribute, it will just work and write messages to the console. We derive from the standard MinDI interface IGlobalContextInitializer, which means those bindings will be put in the top-level context: the global layer.

Let's say we have a Unity3D application where we want to output all our log messages not to the console, but to the Debug.Log() method Unity provides, which will show them in the editor UI. We can do this very easily, without changing the behavior of any class that uses ILogger. Let's just create a UnityLogMessageWriter in our application:

internal class UnityLogMessageWriter : ILogMessageWriter {
    public void Write(string message) {
        Debug.Log(message);
    }
}

Now we need to bind it in the Application context layer, overriding the standard library binding:

public class MyUnity3DApplicationContextInitializer: IApplicationContextInitializer {
    public void Initialize(IDIContext context) {
        context.m().Bind<ILogMessageWriter>(() => new UnityLogMessageWriter());
    }
}

Notice that we don't need to bind ILogger again. We have just specified: in this application I want to use my own UnityLogMessageWriter to write log messages, but I still want to use whatever logic the DLL provides for ILogger. We use IApplicationContextInitializer, another standard MinDI interface, to specify that this context initializer populates the application layer. The application layer has the global context layer as its parent reference.

What happens now if we write

[Injection] public ILogger logger {get; set;}

in one of our Unity3D application classes?

  1. First, MinDI tries to find the ILogger binding in the application layer context. As it's not found there, it follows the parent reference and looks for the binding in the parent context.
  2. As the parent context is the global context provided by the DLL, the ILogger binding is found there and resolved to a new Logger() instance.
  3. Right after creating the new Logger, MinDI starts injecting its dependencies, beginning from the resolution context, i.e. the context from which we started resolving this instance. This means the dependencies are injected starting from the application layer context.
  4. The ILogMessageWriter dependency exists in the application context, so it's resolved from there as a new UnityLogMessageWriter. We now use the Logger instance, but with its ILogMessageWriter dependency substituted by our own. If we hadn't bound our own ILogMessageWriter at the application level, MinDI would again look into the parent context and would eventually resolve it to new ConsoleLogMessageWriter, as defined in the global layer.
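
To make this concrete, here is a tiny sketch of the outcome (assuming the context variable refers to the application-layer context, and using the same Resolve call that appears later in this article):

var logger = context.Resolve<ILogger>();
// Steps 1-2: the binding is found in the global layer, so a new Logger() is created.
// Steps 3-4: its ILogMessageWriter is injected from the application layer: UnityLogMessageWriter.
logger.LogDebug("Hello"); // shows up in the Unity editor via Debug.Log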

With the power of the layered context you can override some of the dependencies whenever it's needed. On the lower layers, dependencies can be overridden for parts of the application, such as some dynamically spawned objects, in exactly the same way. Our application becomes context-oriented: the logic depends on the context we are in at the moment.

Controlling lifetime

There are two principal types of object lifetime in the context: concrete and abstract. In programming terms, concrete means a singleton: an object that exists as the same concrete instance for everybody who depends on it. Abstract means a multiple binding: any class that depends on this object gets a new instance created automatically. The concrete, or singleton, lifetime should not be confused with the Singleton anti-pattern. What they have in common is that the object exists as only one instance. With a concrete lifetime binding, however, there are no global singletons: a singleton binding is only global within the context layer where it's defined.

On the human-mind analogy level, the singleton binding matches the article "the": it means the one concrete entity we are talking about in this context, which is why in context terms it's called "concrete". The multiple binding matches the article "a": it means any entity of this type, with no known instance existing yet. The concrete and abstract bindings in the context match the singleton and multiple lifetimes of the DI/IoC patterns, and this is indeed one of the fundamental principles of programming logic, and of human logic in general.

This is an example of concrete and abstract bindings:

public class LoggerLibraryGlobalContextInitializer: IGlobalContextInitializer {
    public void Initialize(IDIContext context) {
        context.s().Bind<IApple>(() => new Apple());
        context.m().Bind<IOrange>(() => new Orange());
    }
}

Every class that depends on IApple will use the same instance of Apple. Every class that depends on IOrange will get its own new instance of Orange. As we can see, the .s() and .m() methods are the special lifetime resolvers that define this. In MinDI you can extend the framework with your own lifetime resolvers, so you can have more lifetime types than just the standard singleton and multiple. This is done when you need some custom behaviour: for example, in ASP.NET MinDI uses the .ses() lifetime to define a singleton that exists within the user session, and in Unity3D the .mbm() and .mbs() lifetimes are used to define MonoBehaviours.

Lazy construction

By default, singletons use lazy construction: the singleton instance is resolved as soon as the first class that depends on it is created. However, you can also use instant construction, where the object is constructed immediately in the context initializer. This can be useful for huge objects you want to create during the initialization of the application, but it's even more often used for simple objects like enums or data types that have no external dependencies. The limitation of instant construction is obvious: no dependencies can be resolved for such an object, because it's created at a moment when the context is not yet fully built. (There is another way of early instantiation, performed in the entry point of the application after the context is initialized.) Instant construction is therefore recommended only for simple objects, like enums, etc.

Here is an example of the instant construction:

public class LoggerLibraryGlobalContextInitializer: IGlobalContextInitializer {
    public void Initialize(IDIContext context) {
        context.s().BindInstance<IApple>(new Apple());
    }
}

The instance of Apple is created here immediately. Obviously, the BindInstance method is only available with the .s() lifetime qualifier.
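
Resolving the interface later returns that same pre-created instance (a tiny sketch, using the same Resolve call as elsewhere in this article):

var apple = context.Resolve<IApple>();
// apple is the instance created in the initializer; no lazy construction is involved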

Subjective dependencies. Rebinding.

In MinDI, dependencies are resolved starting from the context on which the object is created. A concrete (singleton) object is always created on the context where it is defined. An abstract (multiple) object is always created on the context it's requested from. This implements a subjectivity principle: the dependencies of abstract objects are always subjective, i.e. they are taken from the context the objects are requested from. Thus, the same abstract object requested from different contexts will have different dependencies. A concrete object, on the other hand, always gets the same dependencies, injected from the context where it's defined. This principle is important to understand. Let's look at the following example.

In the global layer we have:

public class LoggerLibraryGlobalContextInitializer: IGlobalContextInitializer {
    public void Initialize(IDIContext context) {
        context.s().Bind<IAnotherLogger>(() => new SingletonLogger());
        context.m().Bind<ILogger>(() => new Logger());
        context.m().Bind<ILogMessageWriter>(() => new ConsoleLogMessageWriter());
    }
}

...

internal class Logger: ContextObject, ILogger {
    [Injection] public ILogMessageWriter writer {get; set;}
    ...
}

internal class SingletonLogger: ContextObject, IAnotherLogger {
    [Injection] public ILogMessageWriter writer {get; set;}
    ...
}

In the application context, which is the next layer, we define another implementation of ILogMessageWriter:

public class AppContextInitializer: IApplicationContextInitializer {
    public void Initialize(IDIContext context) {
        context.m().Bind<ILogMessageWriter>(() => new MyLogMessageWriter());
    }
}

Now let's see what dependencies the objects will have if our context is on the application level.

var log1 = context.Resolve<ILogger>();
var log2 = context.Resolve<IAnotherLogger>();

Here log1 will depend on MyLogMessageWriter, because it picks its dependencies from the context we requested it from (and upward along the prototype chain), even though it's defined in the parent context. But log2 will have a dependency on ConsoleLogMessageWriter! Since a singleton is created on the same context where it's defined, it picks its dependencies from that context (and upward along the prototype chain). This means that the first initiator of the singleton's instantiation cannot inject dependencies from its own context, as the singleton may later be accessed from different contexts as well.

So, what if I want a singleton logger in my application that uses my version of ILogMessageWriter? For this there is another important context feature called rebinding. Let's do the following:

public class AppContextInitializer: IApplicationContextInitializer {
    public void Initialize(IDIContext context) {
        context.m().Bind<ILogMessageWriter>(() => new MyLogMessageWriter());
        context.s().Rebind<IAnotherLogger>();
    }
}

Notice that we don't specify any lambda factory in the Rebind call. That's because we are telling MinDI to look in the parent contexts for the definition of IAnotherLogger and bind the same factory, just on this context and with the specified lifetime.

Now if I write

var log2 = context.Resolve<IAnotherLogger>();

The resolved logger will have a dependency on MyLogMessageWriter. The singleton is redefined on this context and thus gets its dependencies from the same context where it's defined, as already explained above.

Rebinding is a very powerful feature, as it allows changing the lifetime of objects. The standard approach is to define an object as abstract (multiple) in the class library context, because a class library should not impose the lifetime of objects and should let the user decide. The user can then rebind all the necessary interfaces as singletons in the application context, thus deciding which objects will be singletons in this application. The rebinding still doesn't need to know anything about the concrete implementation of the interface, leaving that decision to the library context.

The same principle is useful when we have a multi-layered context within the application: some of the objects can be rebound with different lifetimes within some custom context.

Named and default dependencies

One interface can be bound multiple times in the context. To do this, the bindings must be given names:

public class AppContextInitializer: IApplicationContextInitializer {
    public void Initialize(IDIContext context) {
        context.m().Bind<ILogger>(() => new Logger1(), BindingName.For("logger1"), makeDefault: true);
        context.m().Bind<ILogger>(() => new Logger2(), BindingName.For("logger2"));
    }
}

This allows resolving the different implementations of ILogger by providing different names:

var log = context.Resolve<ILogger>(BindingName.For("logger2"));

If no name is passed, the default binding is used (the one marked with the makeDefault boolean).

This way we can have many configurable implementations that can be switched at runtime. The binding name can be any object (its ToString is used). Binding names can also be combined using a special feature of the BindingName helper.

At the user level we don't have access to the context, so to resolve a binding name dynamically, a dynamic injection is used:

[Injection] public IDynamicInjection<ILogger> loggerInjection { get; set; }
...
void MyMethod(string loggerType) {
    var logger = loggerInjection.Resolve(BindingName.For(loggerType)); 
    ...
}

In this example we use the loggerType string to obtain the necessary ILogger implementation dynamically at runtime.

What's next

This is the end of this overview. Feel free to post any questions. Documentation and tutorials on MinDI will be added later. For now you can play with MinDI on GitHub here:

You also have access to the following demo projects:

Prune folder

Prune / clear folder with a strategy

Sometimes there is a typical task: automated builds are copied to a folder every hour, and we need a scheduled task that removes the extra files from time to time, preferably with a custom strategy. The strategy can be simple: if a file is older than N days, keep only the last file per day; and if a file is older than M days, where M > N, keep only the last file per month. I didn't find any similar script on the internet, so I made one myself. It can be modified to achieve slightly different strategies when needed.

This script works if all the files or folders are named following the format:

text1-yyyy-mm-dd-text2

where text1 and text2 can be anything not starting with a digit, and text1 is the same for all files in the folder. The yyyy-mm-dd part is the build date in the file name. The date must be part of the file name, because sorting by creation date is dangerous: it's easy to change that date by doing something with the files on the server.
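
For illustration, here is a minimal sketch of the same keep/discard strategy in C# (the real prune.sh is a shell script; the names and grouping below are illustrative, not the script's exact logic):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

static class PruneStrategy {
    static readonly Regex DateRegex = new Regex(@"\d{4}-\d{2}-\d{2}");

    // Returns the file names the strategy would delete.
    public static IEnumerable<string> SelectFilesToDelete(IEnumerable<string> names, int nDays, int mDays) {
        var today = DateTime.Today;
        var dated = names
            .Select(n => (Name: n, Match: DateRegex.Match(n)))
            .Where(x => x.Match.Success)
            .Select(x => (x.Name, Date: DateTime.Parse(x.Match.Value)))
            .ToList();

        var toDelete = new List<string>();

        // Older than M days: keep only the last file of each month.
        foreach (var month in dated.Where(f => (today - f.Date).Days > mDays)
                                   .GroupBy(f => (f.Date.Year, f.Date.Month)))
            toDelete.AddRange(month.OrderBy(f => f.Date).SkipLast(1).Select(f => f.Name));

        // Older than N days but not yet M: keep only the last file per day.
        // The date alone can't order files within one day, so we fall back to the name.
        foreach (var day in dated.Where(f => (today - f.Date).Days > nDays
                                          && (today - f.Date).Days <= mDays)
                                 .GroupBy(f => f.Date))
            toDelete.AddRange(day.OrderBy(f => f.Name).SkipLast(1).Select(f => f.Name));

        return toDelete;
    }
}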

Read the beginning of the script for the usage example: prune.sh

Note that you also need to install Ruby for this script to work, because it was more convenient to use Ruby for the cross-platform date calculations.

Let's talk about big D

I would like to start a series of posts about context-oriented dependency injection and IoC in C#. This is related to my IoC container framework called MinDI. Before introducing MinDI and talking about usage and implementation, let's briefly go over the theoretical and philosophical aspects.

Coupling

I suppose you are familiar with the SOLID principles. If not, go read about them now, apply them in your programming life, and then come back. SOLID is a must-have for any OO design.

So if you are still here, I guess you have also at least heard about IoC containers. An IoC container generally provides a nice way to solve the big D problem of SOLID, and a good implementation encourages and simplifies applying all the other biggies as well :) Here and in the next posts we will talk about some aspects of Dependency Inversion and Injection, about IoC containers, context-oriented programming, and more.

To begin with, the largest problem that arises in any more or less complex project is dependencies and coupling. Generally we should try to minimize both between modules as much as possible.

Tight coupling means that class A directly knows class B. This doesn't allow class A to be tested independently of class B, nor does it allow mocking class B; and if we need a different implementation of class B, we cannot easily substitute it. It also violates the big O principle for class A, as A will require changes during the project's lifetime. That's why big D says that entities should not depend on each other, but on abstractions. In C#, such an abstraction is called an interface. Let's see a few examples of coupling:

a. Tight coupling where class A directly knows class B:

    class B {
        ...
    }

    class A {
        B b;
        void Action() {
            doSomethingWith(b);
        }
    }

As described above, this structure creates major problems and violates SOLID. Any medium or large project that uses tight coupling becomes unmaintainable quite fast.

b. Another example of a very bad kind of tight coupling is the Singleton anti-pattern:

class A {
    void Action() {
        B.Instance.DoSomething();
    }
}

This is terrible, because in such code many classes start using singletons inside obscured calls, then singletons use other singletons, and soon everything is tightly coupled and explodes when you need to change something. Unfortunately such bad code can be encountered quite often.

Setting it free

I only briefly touched on tight coupling, because it's not very interesting, and you can find plenty of information about it on the internet. The solution to the problem above can be formulated in two simple statements:

  • Make all your code depend only on abstractions (in C#, interfaces); this is called loose coupling.
  • Keep separate code that configures the abstractions by matching them with concrete implementations (a minimal sketch follows below).
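
A minimal hand-rolled sketch of these two statements, without any framework yet (the interfaces and class names here are invented for the example):

using System;

// The code depends only on the abstraction:
public interface IGreeter {
    void Greet(string name);
}

public class ConsoleGreeter : IGreeter {
    public void Greet(string name) {
        Console.WriteLine("Hello, " + name);
    }
}

public class App {
    private readonly IGreeter greeter;
    // App has no knowledge of the concrete greeter class:
    public App(IGreeter greeter) { this.greeter = greeter; }
    public void Run() { greeter.Greet("world"); }
}

// The separate configuration code is the only place that knows the implementation:
public static class CompositionRoot {
    public static App Create() {
        return new App(new ConsoleGreeter());
    }
}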

This approach gives us the following main advantages:

  • Concrete implementations of entities can be replaced separately, in only one place, without changing the rest of the code. Our code becomes refactoring-friendly.
  • It's TDD-friendly too. It's now very easy to mock separate interfaces, so the entities become very testable individually.
  • The modularity of the code is increased, and the usage of big L, big S, and big I is especially encouraged. We don't need to create big God classes anymore, as we can easily define the concrete dependencies of each entity. We can easily substitute the concrete implementations of the interfaces while the rest of the code has no knowledge that anything changed, because everything depends on interfaces, not on implementations. Of course, using the IoC/DI patterns cannot guarantee you will magically start writing good code, but it opens all the roads to writing nice code, if you understand what you are doing.
  • The dependencies become strictly defined in each concrete implementation (we will talk about the Dependency Contract later), instead of being spread implicitly through the code.

As you can see, it's all about Agile development, where we need code that responds equally well to new feature requests throughout the whole lifetime of the project, where we want to use TDD to increase the stability of our builds, and where we want to have some fun by making code that looks nice and makes us happy.

We need a framework

Using DI and the coding-to-interface approach also opens some questions:

  • If we start depending only on interfaces, what is a good way of easily passing those dependencies to the concrete entities?
  • If we avoid the singleton pattern, what is a good way to have several entities use the same instance? How do we control the lifetime of objects?
  • How can we organize a centralized place for defining the concrete implementation of each interface?
  • Can those implementations also be substituted dynamically at runtime?
  • Can different parts of the code use different concrete implementations of the same abstractions they depend on?

An IoC/DI framework is designed to solve all of these problems. Without such a framework we would have to pass dependencies manually to every created instance everywhere, and that would be very painful, as sketched below. So without a framework, or support from the language itself, the DI pattern and the coding-to-interface approach remain good only on paper.
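
A hedged sketch of that manual wiring (apart from the earlier log-writer class, the classes here are invented for the example; imagine this repeated at every place an object graph is built):

// Every construction site has to know and assemble the whole graph by hand:
var writer = new ConsoleLogMessageWriter();
var logger = new ManualLogger(writer);           // a constructor-injected variant, invented here
var storage = new FileStorage(logger);           // invented example class
var service = new OrderService(storage, logger); // invented example class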

C# is an interface-based language and should encourage people to use interfaces. If you look at the .NET API itself, there are interfaces for everything. But how much C# code have you encountered that actually uses interfaces as a base? That's because Microsoft didn't force DI in .NET, leaving the choice of DI, or another pattern, to the developer. You can use Spring.NET, Unity, or other popular DI libraries. But how many C# programmers have even heard about DI? In game development, where I worked for 6 years, most of the C# code I've seen was ugly enough. The situation is better in the Java world, where the traditions of coding standards are stronger. But you have to choose a DI library in Java as well, and the choice is sometimes not easy, because most of the modern libraries are not context-oriented, lack some features, or are not very refactoring-friendly. While there should still be a choice, I believe a built-in DI/IoC should be part of any modern strongly-typed language. I think people should learn the SOLID, DRY and DI principles right along with learning OOP at school. But our world sucks. That was the little complaining part. Now it's time to change the situation and start using the advantages of dependency injection, if you don't use it yet.

A few words about Service Locator pattern

It's ugly. Was that enough words?

  • Instead of removing tight coupling, it creates a new kind of it, where all the classes are singleton-like coupled to the service locator itself.
  • It doesn't allow a Dependency Contract, i.e. it doesn't make the dependencies explicit. That is a major advantage of DI, and we will talk about it later.
  • It doesn't allow any easy implementation of context-oriented and layered dependencies (I will reveal more on this topic later).

So basically, a service locator is an improved version of a singleton that still smells.
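
To illustrate the first point (ServiceLocator here is a hypothetical class, invented for this sketch): the dependency stays hidden inside the method body, so nothing in the class's public surface declares it.

class OrderProcessor {
    void Process() {
        // The ILogger dependency is invisible from the outside,
        // and the class is hard-coupled to the locator itself:
        var logger = ServiceLocator.Get<ILogger>();
        logger.LogDebug("processing...");
    }
}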

I hope I have encouraged you to at least research DI/IoC containers, and maybe try some of them. The next articles will start describing the MinDI framework and will show some usage examples.

Read next article

Conque: vim-powered command line

I do most things in the command line. Recently I enabled vi mode in bash, which helps me navigate easily within the command I'm typing. However, I was quite frustrated that whenever I needed to select something from the console output, I had to use the mouse. I would like to be able to use Vim to select previous text in the console and paste it into the command. And here Conque comes to help! It's a plugin that runs a shell inside vim, with full output into the current buffer.

Install it from here.

After that, run vim, and you can test that it works with the following command:

:ConqueTerm bash

But of course we can improve it a little with a script. Add the following to your .bash_profile:

alias vimc='vim -c "ConqueTerm bash" -c "set nu"'

This enables the vimc alias, which starts vim in Conque mode and also turns on line numbers (even if you put set nu into your .vimrc, Conque ignores it for some reason).

As I'm on a Mac, I also had to create a symlink to my .bash_profile, because Conque reads .bashrc by default. Run this from the home folder:

$ ln -s .bash_profile .bashrc

You might also notice that Conque doesn't display the system user and folder name in the command line. For this, just do the following trick (while still in the regular console):

$ echo "export PS1='$PS1'" >> ~/.bash_profile

This will add your regular shell prompt settings to the bash initialization script, so Conque will pick them up.

Voilà! Now just run:

$ vimc

And you can have a lot of fun with the command line and your favorite vim! Just better not run vim inside vim, it looks a bit ugly :-P OMG

I have installed Nikola

I'm happy

I can now write all my blog posts in vim, using markdown, without being distracted from the command line. Nikola rules! It lets you store the blog locally, commit it under git, and generate static HTML whenever you wish to deploy. You can use your favorite web hosting or GitHub Pages to host the blog. The only thing that frustrated me so far was the default style of the code blocks; I want a black background as in the terminal. It turned out Nikola supports many code themes right in the configuration file! So I have set a good black one called "monokai" to start with (might change it later):

CODE_COLOR_SCHEME = 'monokai'

in conf.py.

And my old blog on Blogger is still here. There are only a few posts.


For those who have already installed Nikola, some scripts might be helpful. I have added this to my .bash_profile:

alias nstart="cd ~/nikola && source bin/activate && source .rc"

This enables the nstart alias, which automatically opens your Nikola path and activates the virtual environment. Then it reads my .rc file, which I put in the nikola folder:

eval "`nikola tabcompletion`"

alias nnew="nikola new_post -f markdown"
alias nbuild="nikola build"
alias nserve="nikola serve -b"
alias ndeploy="nikola github_deploy"

alias nstop="deactivate && cd ~"

It adds some handy short aliases for build, serve and deploy, and the first line also enables tab completion. When I finish working in the Nikola virtual environment, I type nstop.

And of course you should enable markdown in Nikola, because by default it uses something else (maybe what it uses is better, I didn't check, but I didn't feel like learning one more not-so-popular markup standard either).

To enable markdown, just add the .md extension to POSTS and PAGES in conf.py like this:

POSTS = (
    ("posts/*.rst", "posts", "post.tmpl"),
    ("posts/*.txt", "posts", "post.tmpl"),
    ("posts/*.html", "posts", "post.tmpl"),
    ("posts/*.md", "posts", "post.tmpl")
)
PAGES = (
    ("pages/*.rst", "pages", "story.tmpl"),
    ("pages/*.txt", "pages", "story.tmpl"),
    ("pages/*.html", "pages", "story.tmpl"),
    ("pages/*.md", "pages", "story.tmpl")
)