Changed the format of the gVirtuS configuration file

Starting from changeset 266, gVirtuS no longer uses an XML configuration file; the new format is the simple .properties one.

A .properties file is just a collection of key-value pairs separated by a colon, like in:

key: value

Comments are supported: a comment starts with the “#” character and extends to the end of the line.
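Putting the two together, a minimal .properties file (the key name here is just illustrative) could look like:

```properties
# this is a comment
key: value
```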

I felt that the XML format was too complex for the simple configuration needed by gVirtuS, and this change also removes the need to link the gVirtuS binaries against the expat XML parser library.

At the moment just one configuration entry is needed to run gVirtuS: the communicator. The communicator entry is more a structure than a simple value (i.e. it must specify the type of the communicator and the parameters for that type). To pack all the needed information into a single string, a URL-like syntax is used, so the value of the communicator entry looks like:

type://parameter_1:parameter_2:...:parameter_n

A “real life” sample that uses an AfUnix communicator bound to /tmp/gvirtus with read and write permissions for everyone is:

communicator: afunix:///tmp/gvirtus:0666

The communicators supported at the moment, with their respective configuration entries, are:

  • afunix, afunix://path:mode
  • shm, shm://
  • tcp, tcp://hostname:port
  • vmshm, vmshm://hostname:port
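As another example, a complete configuration file using the tcp communicator (the hostname and port here are illustrative values, not defaults) could look like:

```properties
# sample gVirtuS configuration using the tcp communicator
communicator: tcp://localhost:4000
```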

Starting from the same changeset, a default configuration file is shipped and automatically installed in the default location.

Please tell me what you think about this change!

VMShm, a mechanism for accessing POSIX shared memory from QEMU/kvm guests

VMShm is a mechanism that enables qemu virtual machines to access POSIX shared memory objects created on the host OS.

VMShm makes it possible for a user-space application running in a virtual machine to map up to 1 MB of a POSIX shared memory object from the host OS.

It can be used as a base to build a high-performance communication channel between the host and guest OSes.

gVirtuS 01-beta2 released, supporting cudatoolkit 3.1 and 3.2

Starting from this release (01-beta2), we support cudatoolkit >= 3.1. Legacy compatibility with cudatoolkit versions older than 3.1 is not guaranteed.

In this release there are also some minor changes and bugfixes.
The frontend is now named “”, so it is no longer necessary to rename it or to preload it.
The AfUnix communicator now has a new optional parameter, “mode”, for setting the permissions on the socket file (0660 by default).

Go to the Project Page and the Download Page.

Writing CUDA applications using the D programming language

The release notes of Fedora 14 announce support for developing with the D programming language, so I’ve read up on the language and I feel that I can actually use it; the most interesting feature for me is its simple interoperability with code written in C.

So, just for testing purposes, I decided to write a small CUDA application (I’m pretty sure that you already know what CUDA is) using the D language.

Of course, it is not possible to write CUDA kernels and device functions directly in a D module, so the kernels must be implemented in a CUDA source file (with a proper launcher) and then called from the D module.

The application that I’ve written converts a string to uppercase, reading the input from the command line (remember, it’s just for testing).
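To give an idea of how the pieces fit together, here is a minimal sketch of the approach; all names are hypothetical, since this is not the actual source of the application. The CUDA side exposes a launcher with C linkage:

```cuda
// kernels.cu -- hypothetical sketch, names are illustrative
__global__ void toupper_kernel(char *s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && s[i] >= 'a' && s[i] <= 'z')
        s[i] -= 'a' - 'A';
}

// C-linkage launcher, callable from D via extern (C)
extern "C" void launch_toupper(char *s, int n)
{
    char *d_s;
    cudaMalloc(&d_s, n);
    cudaMemcpy(d_s, s, n, cudaMemcpyHostToDevice);
    toupper_kernel<<<(n + 255) / 256, 256>>>(d_s, n);
    cudaMemcpy(s, d_s, n, cudaMemcpyDeviceToHost);
    cudaFree(d_s);
}
```

and the D module declares the launcher as an extern (C) function and calls it like any other C routine:

```d
// main.d -- hypothetical sketch
import std.stdio;

extern (C) void launch_toupper(char *s, int n);

void main(string[] args)
{
    char[] buf = args[1].dup;           // mutable copy of the input
    launch_toupper(buf.ptr, cast(int) buf.length);
    writeln(buf);
}
```

The two objects are then compiled with nvcc and the D compiler respectively and linked together; the extern (C) declaration is what makes the C-linkage symbol visible to D.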

gVirtuS: the first beta release

We are proud to announce the first beta release of gVirtuS.

gVirtuS allows an instanced virtual machine to access GPGPUs in a transparent way, with an overhead only slightly greater than a real machine/GPGPU setup. gVirtuS is designed to be hypervisor independent, even though it currently virtualizes only nVIDIA CUDA-based GPUs.

The software, developed for research applications, is provided as is.

We encourage using and testing it in order to collect useful feedback and suggestions.

Take a look at the gVirtuS project page:

VMSocket: a mechanism to expose UNIX Sockets in KVM Virtual Machines

VMSocket is a mechanism to expose UNIX sockets (sockets in the AF_UNIX domain) from the host operating system to KVM virtual machines.

It provides a really fast communication channel between the host and guest OSes. It can be used wherever fast communication is needed, such as in HPC (High Performance Computing) software solutions.

The socket is bound on the host OS, and the guests can connect to it using special drivers. At this time only the Linux driver is ready.

The sources of the patched qemu with vmsocket support can be found at:
