The DSI (Digital Society Institute, formerly CTIT) computing lab is an environment consisting of two compute clusters, several compute servers, and a small OpenNebula cloud.

The first cluster is a Hadoop cluster scheduled by YARN.
The second cluster is an HPC cluster scheduled by Slurm. You can monitor this cluster's usage through the Slurm Dashboard.
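Besides the Slurm Dashboard, usage can also be checked from the command line once you are logged in to the HPC head node. A minimal sketch using the standard Slurm client tools (the guard only exists so the snippet degrades gracefully on a machine without Slurm installed):

```shell
# Standard Slurm commands for inspecting cluster usage (run on the HPC head node).
# The guard lets the snippet run harmlessly on machines without Slurm installed.
if command -v sinfo >/dev/null 2>&1; then
    sinfo -s            # summary of partitions and node states
    squeue -u "$USER"   # your own pending and running jobs
    slurm_tools=present
else
    echo "Slurm client tools not found; run this on the HPC head node."
    slurm_tools=absent
fi
```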


Machines that you may log on to directly.

Note that you can only connect to these servers via SSH when you have a 130.89.* address, which means you have to be on campus or logged in via VPN.

Subsystem       NodeName          Alias                Purpose                      Conditions
Hadoop/Yarn     wegdam/oldemeule  ctithead1/ctithead2  head node                    do not run heavy jobs directly on these machines
HPC/Slurm       korenvliet        -                    head node                    do not run heavy jobs directly on this machine
Compute server  everloo           -                    Xeon Phi host                different OS than the other machines
Compute server  fmtmini           -                    OS X build/test environment  FMT members only; passwordless (key-based) login only

Cluster partitions.

We have several cluster partitions:

  • 8 nodes with FDR InfiniBand
  • 32 nodes with QDR InfiniBand
  • 12 nodes
  • 10 nodes with DDR InfiniBand
  • 2 nodes (AMD based)
  • 1 (big) node
  • 4 nodes with up to four NVIDIA GPUs
  • 1 node with two Xeon Phis

The Hadoop cluster can be used for running large-scale computations,
but because of the nature of Hadoop it should not be used for benchmarking.

To run benchmarks, use the HPC cluster.
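As a sketch, a minimal Slurm batch script for such a benchmark could look as follows. The job name, resource values, and the `./my_benchmark` binary are placeholders to adapt; partition selection is omitted (check `sinfo` on the head node for the available partitions):

```shell
# Write a minimal Slurm batch script. Job name, resources and the
# ./my_benchmark binary are placeholders -- adjust them to your experiment.
cat > benchmark_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=bench
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00
#SBATCH --output=bench_%j.out

srun ./my_benchmark
EOF
# Submit on the head node with: sbatch benchmark_job.sh
```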

For a complete listing of all hardware, see Full list of hardware.

Known Problems and Issues.

  • Infiniband RDMA has problems on the R415 nodes (contact: Wytse Oortwijn)
  • The R415s run two schedulers: YARN and SLURM. If SLURM users are careless and bring down YARN/Hadoop, SLURM will be shut down on these nodes.


Who has access?

Research members of the FMT and DB groups are automatically granted access, as well as people with whom members of these groups cooperate.

Students should first try one of the free providers, such as Kaggle Kernels, Google Colab or Google Cloud.
  • Kaggle Kernels allows 30 hours per week (you can also connect your Kaggle Kernels to Google Cloud).
  • Google Colab allows 12 hours per session, 25 GB RAM, and 1 GPU with 12 GB RAM or 1 TPU with 64 GB RAM.
  • Google Cloud gives you a $300 starting credit on account creation, which allows you to use 4 CPU cores, 16 GB RAM and 1 Tesla P4 GPU for roughly 700 hours. Create a new project, set up a Deep Learning VM from the marketplace, and choose the specifications for your system. For small student projects this can be sufficient. For larger projects, you can either create a new account and copy the code, pay, or move to our cluster.

To get access, you need an AD account of the University of Twente. All students and employees have such an account, and one can be arranged for external persons. To get your AD account enabled for the CTIT cluster, you need to contact one of the contact persons.

Contact persons.


For staff, the username is probably your family name followed by your initials; for students, it is your student number prefixed with an "s".
The CTIT Computing Lab does not store your password and we are unable to reset it. If you require password assistance, please see the ICTS/LISA Servicedesk.

Connecting with SSH

Access to CTIT Computing Lab resources is provided via secure shell (SSH) login. Most Unix-like operating systems (Mac OS X, Linux, etc) provide an ssh utility by default that can
be accessed by typing the command ssh in a terminal window.

You can log in to one of the head nodes.

Logging in to ctithead1 (wegdam) or ctithead2 (oldemeule) will connect you to a “login node” of the Hadoop/Yarn Cluster.
Logging in to korenvliet will connect you to the "login node" of the HPC/SLURM Cluster.

Note: login nodes are not intended for computationally intensive work. For running computationally intensive programs, see the Usage pages.

Linux & Mac

To log in to a head node from a Linux or Mac computer, open a terminal and at the command line enter:

$ ssh <UserId>@<NodeName>

Note: replace <UserId> with your login name and <NodeName> with one of the listed login nodes.
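If you log in often, you can define a host alias in your SSH client configuration so a short name expands to the full login command. A sketch (the placeholders follow the notation above; fill in the real hostname and your own account before use):

```shell
# Write an example ~/.ssh/config entry to a local file; append its contents
# to your real ~/.ssh/config after filling in the placeholders.
cat > ssh_config_example <<'EOF'
Host hpc
    HostName <NodeName>
    User <UserId>
    Port 22
EOF
# Afterwards, "ssh hpc" is equivalent to "ssh <UserId>@<NodeName>".
```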


Windows users will first need to download an SSH client (PuTTY), which allows you to interact with the remote Unix command line. When setting up your connection to the cluster in PuTTY, use the following information:

Hostname: <NodeName>
Port: 22
Username: <UserID>
Password: <Password>

Logging in on fmtmini

Logging in on the fmtmini machine can only be done with SSH keys, because OS X does not support Linux UIDs. To log in, upload your SSH public key to your home directory on wegdam. Use of fmtmini is restricted to FMT members.
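Generating a key pair and uploading the public key can be sketched as follows. The target host follows the table above, and `ssh-copy-id` is left commented out because it needs your real account:

```shell
# Generate an ed25519 key pair if one does not exist yet.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_ed25519" ] || ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519"

# Then copy the public key to your home directory on wegdam, e.g.:
# ssh-copy-id <UserId>@wegdam
```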

X11 applications

To display GUI applications on your desktop, you need to enable (trusted) X11 forwarding. When connecting through ssh, add the "-Y" parameter.

$ ssh -Y <UserId>@<NodeName>

Windows users need to set this option in the Session Configuration (Connection->SSH->X11->Enable X11 forwarding).

  • Add the remote server to the X0.hosts file, to authorize displaying on your computer.
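Once logged in with -Y, a quick way to check that forwarding is active is to look at the DISPLAY variable the SSH server sets (a sketch; with forwarding working, an X11 program such as xclock would then open a window on your desktop):

```shell
# With working X11 forwarding, sshd sets DISPLAY to something like "localhost:10.0".
if [ -n "$DISPLAY" ]; then
    echo "X11 display available: $DISPLAY"
    display_ok=yes
else
    echo "No DISPLAY set; X11 forwarding is not active."
    display_ok=no
fi
```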

Data transfer (SCP)

Secure copy or SCP is a means of securely transferring computer files between a local host and a remote host. It is based on the Secure Shell (SSH) protocol.

Command-Line Operation:

Most UNIX-like operating systems (Mac OS X, Linux, etc) provide a scp command which can be accessed from the command line. To transfer files from your local computer to your home directory on the cluster, open a terminal window and issue the command:

Single files: $ scp <some file> <UserID>@<NodeName>:
Directories:  $ scp -r <some dir> <UserID>@<NodeName>:

Or, to copy to a specific directory on the cluster (/deepstore/project2, for example):

$ scp -r <some dir> <UserID>@<NodeName>:/deepstore/project2

When prompted, enter your password.
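If you transfer files often, a small wrapper function saves retyping the user and host. A sketch (the two variables are placeholders in the same notation as the commands above; set them to your own account and a real login node):

```shell
# Small convenience wrapper around scp. CLUSTER_USER and CLUSTER_HOST are
# placeholders -- set them to your own account and a real login node.
CLUSTER_USER="<UserID>"
CLUSTER_HOST="<NodeName>"

push() {  # usage: push <local path> <remote path>
    scp -r "$1" "$CLUSTER_USER@$CLUSTER_HOST:$2"
}
```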

Note: there are GUI-based tools as well, like FileZilla, Cyberduck, Nautilus (Ubuntu), etc.

WinSCP GUI for Windows Clients:

WinSCP is an SCP client that can be used to move files between a Windows machine and the cluster. WinSCP can be obtained from the WinSCP website.
When setting up your connection to the cluster in WinSCP, use the following information:

Hostname: <NodeName>
Port: 22
Username: <UserID>
Password: <Password>

After connecting, if you are prompted to accept the server’s host key, select “yes.”
Upon successfully connecting, the main WinSCP window allows you to move files from your local machine (left side) to the cluster (right side).

Setting up


The machines in the first (Hadoop) cluster run Ubuntu Server 14.04 LTS; the machines in the second (HPC) cluster run Ubuntu Server 16.04 LTS or Ubuntu Server 20.04 LTS.
Some basic packages from the repositories have been installed. Additional software is available in the /software folder. See Local Software.


The following folders are available:
  • Network wide:
    1. Home folder: you can store a small amount of data in your home folder (/home/username).
    2. Hadoop cluster: data for the Hadoop cluster can be placed on the Hadoop Distributed File System (HDFS).
    3. HPC cluster: data for the HPC cluster can be placed on the central storage (/deepstore).
       This is available on request.
  • Local node: you are allowed to create a local folder to store temporary data (/local/projectname or /local/username).
    Use this space to store intermediate data; keep in mind this is temporary data and may be wiped if the disk gets too full.
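Following the storage rules above, a typical pattern for using the local scratch space can be sketched like this (a stand-in directory is used so the snippet runs anywhere; on a compute node the root would be /local):

```shell
# Stand-in for /local so this runs anywhere; on a node you would use /local itself.
LOCAL_ROOT="${LOCAL_ROOT:-/tmp/local-demo}"
SCRATCH="$LOCAL_ROOT/${USER:-$(id -un)}/run-$$"

mkdir -p "$SCRATCH"
echo "intermediate data" > "$SCRATCH/partial.dat"   # temporary results live here

# Copy anything worth keeping to /home or /deepstore before the job ends,
# then remove the scratch directory yourself, e.g.:
# cp "$SCRATCH/results.dat" "$HOME/" && rm -rf "$SCRATCH"
```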



For a quick start with the Hadoop software, see Hadoop Quick Start and More Hands on Experience.
For more information, contact Jan Flokstra.


See the HPC info page for more information, or contact Geert Jan Laanstra.

Analysing experiments (not related to SLURM)

Also attached is "models.tar.gz", an archive of a large set of models we benchmarked LTSmin with. Furthermore, we have attached "analyse-experiments.php". This script can be used to analyse the stdout and stderr output of thousands of experiments in seconds, and it can serve as a reference (it may or may not suit your needs). The main results of this script are CSV files with the results of the experiments; some LaTeX code is also generated to quickly include these CSV files in your LaTeX documents.

Mailing list

A mailing list for cluster users has been created on the UTwente list server: CTIT_CLUS_USERS.
For the HPC/SLURM cluster, two mailing lists have been created: CTIT-Cluster-Users (all users) and CTIT-Cluster-Managers (all managers).

models.tar.gz - Large set of mCRL2, Promela and DVE models (474 KB) Jeroen Meijer, 25 Jun 2014 10:54

analyse-experiments.php - PHP script to analyse output from LTSmin. (54.3 KB) Jeroen Meijer, 25 Jun 2014 10:54; (18.3 KB) Jeroen Meijer, 04 Jul 2014 14:08