Contents

  1. Introduction
  2. Environment description
  3. Configuration
  4. Launching the cluster
  5. How it works

Introduction

In this section, we'll walk through a real-life example of a configuration setup for development purposes.

Let's say we're confronted with the problem of setting up a development platform for the dev team. Every developer needs their own instance to debug tasks and workflows. One possible solution would be to install a standalone evQueue for each developer (in a VM, for example), but maintenance wouldn't be easy.

Instead, we'll use a clustered environment on one machine using unix sockets and configuration file variables.

Environment description

For this setup, all developers share a common server.

Every developer has their own tasks directory, /data/dev-login/evqueue-tasks, which holds the tasks executed by evQueue.

We install only one system-wide evQueue instance. This engine is launched several times with different configurations, providing a dedicated instance for each developer.

Configuration

We will now use a special evQueue feature: environment variables can be used to replace parts of the configuration file.

The configuration file can contain variables between braces. If a variable matches the name of an environment variable, it is replaced by that environment variable's value.
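The mechanism can be sketched in shell. The sed call below only illustrates the substitution; it is not evQueue's actual implementation:

```shell
# Sketch of the substitution mechanism (not evQueue's actual code):
# a {login} placeholder is replaced by the value of the
# 'login' environment variable.
login=alice
line='network.bind.path=/data/{login}/evqueue.socket'
echo "$line" | sed "s/{login}/$login/g"
# → network.bind.path=/data/alice/evqueue.socket
```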

We will use two variables, {login} and {cluster}, that are computed by a bash script.

Now we will describe the modifications made to the standard configuration file to set up our development cluster.

First, for everything to work properly, we need a separate IPC queue for each instance:

core.ipc.qid=/data/{login}

The IPC queue ID will be computed dynamically from the directory inode. The directory MUST exist.
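Since the queue ID is derived from the directory's inode, each developer's directory has to be created before the engine starts. A minimal sketch, where DATA_ROOT and the login devuser are hypothetical names:

```shell
# Create the per-developer directory before launching the engine;
# the IPC queue ID is derived from this directory's inode.
DATA_ROOT="${DATA_ROOT:-/data}"   # defaults to /data
login=devuser                     # hypothetical login
mkdir -p "$DATA_ROOT/$login"
stat -c '%i' "$DATA_ROOT/$login"  # prints the inode number
```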

Next, we want each developer to have its own task directory:

core.wd=/data/{login}
processmanager.tasks.directory=/data/{login}/evqueue-tasks

Each developer should also have its own socket to contact the engine:

network.bind.path=/data/{login}/evqueue.socket

Prevent the PID file from being overwritten:

core.pidfile=/data/{login}/evqueue.pid

Last thing is the cluster configuration:

cluster.node.name={login}
cluster.nodes={cluster}
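Putting it all together, for a hypothetical developer alice (with a second node bob in the cluster), the substituted configuration would look like:

```
core.ipc.qid=/data/alice
core.wd=/data/alice
processmanager.tasks.directory=/data/alice/evqueue-tasks
network.bind.path=/data/alice/evqueue.socket
core.pidfile=/data/alice/evqueue.pid
cluster.node.name=alice
cluster.nodes=unix:///data/alice/evqueue.socket,unix:///data/bob/evqueue.socket
```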

Launching the cluster

Now we have everything ready. We need a bash script to set variables and launch the cluster instances.

	# Discover users
	USERS=$(ls /data)

	# Build cluster description
	CLUSTER=""
	for i in $USERS
	do
		if [ "$CLUSTER" != "" ]
		then
			CLUSTER="$CLUSTER,"
		fi

		CLUSTER="${CLUSTER}unix:///data/$i/evqueue.socket"
	done

	# Start engine instances
	for i in $USERS
	do
		login=$i cluster=$CLUSTER /usr/bin/evqueue --config /etc/evqueue.multiuser.conf --daemon
	done

As you can see, a new evQueue instance is launched for each directory found under /data, so every developer gets their own instance.

This bash script sets the login and cluster environment variables, which are substituted for {login} and {cluster} in the configuration file.
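For example, with two hypothetical developers, alice and bob, the loop in the script produces the following cluster description:

```shell
# Build the cluster string for two hypothetical logins.
CLUSTER=""
for i in alice bob
do
	if [ "$CLUSTER" != "" ]
	then
		CLUSTER="$CLUSTER,"
	fi
	CLUSTER="${CLUSTER}unix:///data/$i/evqueue.socket"
done
echo "$CLUSTER"
# → unix:///data/alice/evqueue.socket,unix:///data/bob/evqueue.socket
```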

How it works

After you've launched the cluster, every developer will have their own evQueue node. The configuration of tasks and workflows is shared by all instances, so everyone can see every task and workflow.

The configuration frontend should be configured with all the cluster nodes (see clustering) but you only need to install it once. It will be shared by all the developers.

A developer who wants to launch a new workflow instance must contact their own node with the following connection string: unix:///data/dev-login/evqueue.socket. Of course, dev-login must be replaced by the real login.

Tasks must be created with a relative path, as processmanager.tasks.directory is prepended to this path. This way, every developer executes tasks located in their personal directory. Even when executing a workflow created by someone else, the tasks are launched from the developer's personal directory.
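The path resolution can be sketched as follows (alice and my-task.sh are hypothetical names):

```shell
# processmanager.tasks.directory is prepended to the relative task path.
tasks_dir=/data/alice/evqueue-tasks   # resolved from the configuration
task=my-task.sh                       # relative path stored in the workflow
echo "$tasks_dir/$task"
# → /data/alice/evqueue-tasks/my-task.sh
```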

For code sharing (tasks), we use a git repository shared among all developers. That way, once a workflow is ready, its tasks are pushed to the repository and every developer can use them.