Developing Software for Linux on Mainframe at Home

When developing for architectures that are not mainstream, developers often find it hard to get access to current systems that allow them to work on a specific piece of software. Especially when asking someone to fix an issue that shows up only on big endian hardware, the answer I repeatedly get is that it is hard to get access to an appropriate machine.

Just recently, I saw reports that the qemu project has made substantial progress in supporting more current mainframe hardware. So I wondered: how hard could it be to create a virtual machine that allows developing for s390x on local workstation hardware?

It turned out to be much easier than I thought. First, I did a standard install of Tumbleweed for s390x, which went quite smoothly. But then I remembered that the OBS also supports emulators, and specifically qemu, to run virtual machines.

I got myself a recent version of qemu-s390 from the Virtualization project:

osc repourls Virtualization
cd /etc/zypp/repos.d && \
zypper install --allow-vendor-change qemu-s390

After this, we are almost done. The next step is to check out a package from OBS and try to build it:

mkdir ~/obs && cd $_
osc co openSUSE:Factory:zSystems cmsfs
cd openSUSE:Factory:zSystems/cmsfs

Now you can run the build locally with the ‘osc’ command. You have to specify the amount of memory you want to give to the resulting virtual machine; in my case, it is 8 GByte:

osc build --vm-type qemu --vm-memory=8192 standard s390x

Building locally is nice, but how about working on that software? That is where the fun begins. When building in a chroot environment, you would typically be able to chroot into the local build directory. So, let’s just make a beginner’s mistake and run osc with the chroot command:

osc chroot --vm-type qemu --vm-memory=8192 standard s390x

To my big surprise, the command did not complain. I opened up a second terminal and found that some processes were working heavily in the background, and after a while, I was actually placed into a shell.

To double check, I ran ‘cat /proc/cpuinfo’, and yes, I was placed into an s390x virtual machine!

Putting things together: all you have to do to get a running s390x virtual machine crafted for a specific package with all the latest updates is:

  1. Get the package source from OBS
  2. Run osc chroot

I think that is really great functionality. Thanks to the excellent OBS team that made this work out of the box without much hassle. Great Job!!!

Posted in Uncategorized | 1 Comment

Port Forwarding with systemd

Using port forwarding with xinetd has served me well on many occasions. As time proceeds, new technologies show up that allow for similar functionality, and others are deprecated. When reading about the deprecation of xinetd in SLES15, I wondered if you could do the port forwarding with systemd instead of xinetd.

To accomplish the same port forwarding as in Port Forwarding with xinetd, you can proceed as follows.

With systemd, the procedure is twofold. First, you have to create a socket unit that listens on a stream. The second part is a proxy service that connects to the remote port. Both are connected by means of their respective names.


Just sticking with the previous example, let me use the following:

  • the private lan has the IP address range
  • the switch is configured with and its management port is 80
  • the jump host with access to both networks has the external address
  • use port 81 to access the switch over the jump host

The first thing we need is a .socket file:

# cat /etc/systemd/system/http-switch.socket


This socket must be connected to a proxy by means of the service name:

# cat /etc/systemd/system/http-switch.service
Description=Remote Switch redirect
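
As a sketch, the two unit files could look like the following. I am using systemd-socket-proxyd, which ships with systemd; the switch address 192.168.1.1 is a placeholder, and the port numbers follow the example above (81 on the jump host, 80 on the switch):

```
# cat /etc/systemd/system/http-switch.socket
[Unit]
Description=Socket for remote switch redirect

[Socket]
ListenStream=81

[Install]
WantedBy=sockets.target

# cat /etc/systemd/system/http-switch.service
[Unit]
Description=Remote Switch redirect
Requires=http-switch.socket
After=http-switch.socket

[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 192.168.1.1:80
```

Because socket and service share the name http-switch, connections accepted on port 81 are handed to the proxy service, which forwards them to port 80 on the switch.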



After adding these files, the service can be enabled and started with the following commands:

systemctl enable http-switch.socket
systemctl enable http-switch.service
systemctl start http-switch.socket
systemctl start http-switch.service

The previous example is just a very basic one. Especially for the socket file, there are lots of parameters and options available. For more information, see

man 5 systemd.socket
man 5 systemd.service

Posted in Networking, openSUSE, systemd, xinetd | 3 Comments

z/VM SSI: make your Linux relocatable

Virtualization systems like KVM, Xen, or z/VM offer the possibility to move running guests from one physical server to another without service interruption. This is a cool feature for system administrators because they can service a virtualization host without shutting down any workload on the cluster.

Before doing such a migration, the system performs a number of checks. z/VM is especially strict with this, but it also gives high confidence that nothing bad happens to the workload. Unfortunately, the default system you get when running Linux on z/VM has a number of devices attached that prevent z/VM from relocating the guest to a different node. A typical message looks like this:

HCPRLH1940E LINUX001 is not relocatable for the following reason(s):
HCPRLI1996I LINUX001: Virtual machine device 0191 is a link to a local minidisk

For some of the devices, it is obvious to the experienced z/VM admin that they can be detached. However, some of the devices might also be in use by Linux, and it would definitely confuse the system to just remove them. Therefore, the z/VM admin has to ask the person responsible for Linux whether it is ok to remove a device. When talking about 10 guests, this might be ok, but with lots and lots of servers and many different stakeholders, this can get quite painful.

Starting with SLES12 SP2, a new service called “virtsetup” sneaked into the system that can ease this task a lot. When enabled, it removes all the unneeded CMS disks from the guest and thus prepares the guest for live guest relocation.

How to run this service:
# systemctl enable virtsetup
# systemctl start virtsetup

That’s basically everything you have to do for a default setup. If you want a specific disk to remain untouched, have a look at “/etc/sysconfig/virtsetup”. This is the file where this service is configured.

Enabling this service is not a big deal for a single machine, but it makes a big difference for the z/VM admin. When it is enabled, most machines will simply be eligible for relocation without further action, thus allowing for continuous operation during service of a z/VM node.

Posted in Mainframe, SLES12, systemd, z Linux, zVM | Leave a comment

Multiple Linux Consoles in z/VM

The standard method to access z/VM is using a 3270 terminal with a terminal emulator. With Linux, the x3270 package provides a free emulator for these terminals.

One of the features of z/VM is that you can define several consoles for a guest. This is very helpful if there are problems with a guest that affect network connectivity. With z/VM, you can even define multiple consoles that allow direct logon to the running guest.

By default, only one terminal is defined for z/VM guests. To define three additional 3270 consoles on a guest at the addresses 0020-0022, use the following commands:

cp define graf 20
cp define graf 21
cp define graf 22

These consoles can also be created online from Linux, provided that you have sufficient privileges on the guest. To issue cp commands from Linux, use the command vmcp instead of cp.
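
For example, issued from within the running Linux guest, this might look like the following transcript (assuming the vmcp kernel module is available and not yet loaded):

```
# modprobe vmcp
# vmcp define graf 20
# vmcp define graf 21
# vmcp define graf 22
```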

With SLES 12, several additional steps are needed to activate these consoles.

First, the devices must be made available to the system. This is a twofold process:

  1. Remove the devices from the cio ignore list with the command
     cio_ignore -r 0.0.0020-0.0.0022
  2. Add the devices to /boot/zipl/active_devices.txt to make this change persistent.
    # cat /boot/zipl/active_devices.txt
    0.0.0020-0.0.0022

The system automatically detects those devices. The corresponding serial devices are found below /dev/3270/tty[123]. Next, tell systemd to run a getty on these devices:

systemctl enable serial-getty@3270-tty1.service
systemctl enable serial-getty@3270-tty2.service
systemctl enable serial-getty@3270-tty3.service
systemctl start serial-getty@3270-tty1.service
systemctl start serial-getty@3270-tty2.service
systemctl start serial-getty@3270-tty3.service

To use the new consoles on a machine called LINUX065, point the 3270 terminal emulator at z/VM. Instead of logging on as a regular user, move the cursor to the COMMAND line and enter the following command:

dial linux065

To redisplay the logon prompt, you might want to press enter once.

When trying to log on to this console as root, you will find that it won’t let you. The reason for this is that root logon is only allowed on previously defined consoles. The configuration file for this is /etc/securetty. Add the following lines to the end of this file:

3270/tty1
3270/tty2
3270/tty3

After this, you can directly log on to the Linux guest without the need for z/VM credentials.

If you want to avoid having to redefine the consoles after a logoff of the guest, add the definitions to the PROFILE EXEC A of the guest.
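A minimal PROFILE EXEC fragment (REXX) that defines the three consoles at every logon might look like this; the comment line and the placement within the EXEC are just a sketch:

```
/* PROFILE EXEC A - define additional 3270 consoles at logon */
'CP DEFINE GRAF 20'
'CP DEFINE GRAF 21'
'CP DEFINE GRAF 22'
```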


Posted in 3270, Mainframe, SLES12, z Linux, zVM | 1 Comment

Online Enabling new DASD from a Linux LPAR on System z

When running a mainframe, there are reasons why you want to add new control units and DASD disk devices. Assuming that you have already added the hardware configuration to the system, you will find that an already running LPAR with Linux will just not see any of the changes.

For example, if you have a configuration like the following in the IO configuration:

 CNTLUNIT CUNUMBR=2700,PATH=((CSS(0),41,43,4A,4D,50,51),               *
 IODEVICE ADDRESS=(2700,224),CUNUMBR=(2700),STADET=Y,UNIT=3390B,       *
 IODEVICE ADDRESS=(27E0,32),CUNUMBR=(2700),STADET=Y,UNIT=3390A,        *

The configuration on DS8000 with dscli would look like the following:

dscli> lslcu
Date/Time: July 17, 2013 2:50:45 PM CEST IBM DSCLI Version: DS: IBM.XXXX-XXXXXXX
ID Group addrgrp confgvols subsys conbasetype
06     0 0              36 0x0004 3990-6

Several disks and alias devices have already been configured on logical control unit 6 of the DS8000. The alias devices are needed for the HyperPAV feature of the DS8000:

dscli> lsckdvol -lcu 06
Date/Time: July 17, 2013 2:56:25 PM CEST IBM DSCLI Version: DS: IBM.XXXX-XXXXXXX
Name ID   accstate datastate configstate deviceMTM voltype   orgbvols extpool cap (cyl)
-    0600 Online   Normal    Normal      3390-9    CKD Base  -        P12         27825
-    0601 Online   Normal    Normal      3390-9    CKD Base  -        P14         27825
-    0602 Online   Normal    Normal      3390-3    CKD Base  -        P12          3339
-    0603 Online   Normal    Normal      3390-3    CKD Base  -        P14          3339
-    0604 Online   Normal    Normal      3390-9    CKD Base  -        P12         10017
-    06E0 -        -         -           -         CKD Alias 0600     -               0
-    06FF -        -         -           -         CKD Alias 0600     -               0

When using z/VM, the only thing to be done to activate the devices is a vary online 2700-2704 27E0-27FF. However, from a Linux in LPAR mode, there is no such command available. Even after activating the devices from z/VM, they would not be visible inside the Linux LPAR. To check this, you can use the command lscss | grep '0\.0\.2700'.

The solution to make the devices available without rebooting Linux is to vary online one of the chpids that are already online. If you look at the IOCDS, it shows that there are six chpids online: 41,43,4A,4D,50,51. In our case, these are shared by all DASD devices and are also used for other device ranges, which is why they are already online. This can be seen with the following command:

# lscss | grep 0.0.2600
0.0.2600 0.0.01e6 3390/0c 3990/e9 fc fc ff 41434a4d 50510000

The numbers at the end represent the chpids in use. To activate the chpid with number 41, use the following command:

# chchp -v 1 41
Vary online 0.41... done.

After this, the available disks can be checked again:

# lscss | grep '0\.0\.27'
0.0.2700 0.0.02e6 3390/0c 3990/e9 fc fc 2b 41434a4d 50510000
0.0.2701 0.0.02e7 3390/0c 3990/e9 fc fc 13 41434a4d 50510000
0.0.2702 0.0.02e8 3390/0a 3990/e9 fc fc 07 41434a4d 50510000
0.0.2703 0.0.02e9 3390/0a 3990/e9 fc fc 83 41434a4d 50510000
0.0.2704 0.0.02ea 3390/0c 3990/e9 fc fc 43 41434a4d 50510000

Now the disks on control unit 2700 are also visible on this LPAR. From that point, it is easy to configure the disks for Linux with yast2 dasd or the command line utility dasd_configure.
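
As a sketch, bringing the first base device online with dasd_configure could look like the following; the trailing argument disables DIAG access, but please check the documentation of your release for the exact syntax:

```
# dasd_configure 0.0.2700 1 0
```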

Posted in block devices, DS8000, Mainframe, Uncategorized, z Linux | Leave a comment

Port Forwarding with xinetd

In some network environments, where for example administration lans or other private lans are deployed, it might still be necessary to access a specific port of a machine inside such a lan from the outside. Commonly, you would have to log on to a jump host, and from there you would be able to reach the respective machine.

In our case, we had to reach the management port of a switch in a private lan. For example:

  • the private lan has the IP address range
  • the switch is configured with and its management port is 80
  • the jump host with access to both networks has the external address

To access the switch directly at address with port 81, you can configure xinetd on the jump host with the following configuration:

# cat /etc/xinetd.d/http-switch
service http-switch
{
 disable = no
 type = UNLISTED
 socket_type = stream
 protocol = tcp
 wait = no
 redirect = 80
 bind =
 port = 81
 user = nobody
}

After reloading xinetd (or starting it if not yet done), you can reach the switch by pointing your browser to

chkconfig xinetd on
rcxinetd restart

The same principle can also be used when forwarding e.g. ssh ports of machines.
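
To illustrate, forwarding port 2222 on the jump host to the ssh port of an internal machine could look like the following; the service name, the address 192.168.1.10, and port 2222 are made-up placeholders:

```
# cat /etc/xinetd.d/ssh-internal
service ssh-internal
{
 disable = no
 type = UNLISTED
 socket_type = stream
 protocol = tcp
 wait = no
 redirect = 192.168.1.10 22
 port = 2222
 user = nobody
}
```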

Posted in Networking, openSUSE, xinetd | Leave a comment

libvirt: chardev: opening backend “pty” failed: Permission denied

Recently, I found myself in front of a strange problem that prevented me from creating new virtual machines with libvirt on KVM. Every time I tried to create a virtual machine, I got a message similar to this:

Error: internal error Process exited while reading console log output: chardev: opening backend "pty" failed: Permission denied

Interestingly, directly after a reboot of the host, the same guest configuration would simply work. I did some searches on the internet and found that only a few other people had the same problem, but I could not find a solution.

After tracing libvirtd and pestering some of my colleagues, I found that it actually could not access /dev/pts correctly. It turned out that some chroot environment also mounted /dev/pts, but not with the right mount options. This had the effect that the original /dev/pts was also remounted with the wrong options.

So, to solve this issue, you need to

  1. find who is mounting /dev/pts in a wrong way and correct it
  2. remount /dev/pts correctly

The remount can be done with the following command:

mount -n -t devpts -o remount,mode=0620,gid=5 devpts /dev/pts

After this, libvirtd will again be able to access the device and work as desired.

Posted in KVM, libvirt | Leave a comment

Persistent IUCV Network Devices

On mainframes, the Inter User Communication Vehicle (IUCV) provides a means to exchange data between two guests in z/VM. Some time ago, this was one of the preferred networking methods between two Users (virtual machines) on z/VM. From a Linux perspective, IUCV is no longer a supported networking method, although it actually still works quite nicely.

Set up as a pointopoint connection, IUCV needs a special machine that acts as the pointopoint partner and routes to the rest of the network if needed. With Linux, it is quite easy to set up IUCV interfaces. The problems arise when you have more than one IUCV interface and must make sure that the IP configuration in /etc/sysconfig/network/ifcfg-iucv* is set up for the correct User.

In SLES11 and later, the hardware configuration of IUCV interfaces is done with udev rules. For each available connection, there is an extra file with a ruleset below /etc/udev/rules.d. By default, such a rules file looks like this:

# cat /etc/udev/rules.d/51-iucv-LINUX001.rules
ACTION=="add", SUBSYSTEM=="subsystem", KERNEL=="iucv", RUN+="/sbin/modprobe netiucv"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="netiucv", ATTR{connection}="LINUX001"

The issue is that during network device setup, all of the devices are simply set up as they are found. This is unfortunately not persistent and commonly results in connecting a User (virtual machine) with the IP address of a completely different machine. In the end, networking is simply broken.

For example, if you look at the netiucv0 user, the following is found:

# cat /sys/devices/iucv/netiucv0/user
LINUX001
However, the actual device is configured in /etc/sysconfig/network/ifcfg-iucv36. To solve this, a special iucv device below the netiucv device must be configured. This is a task for udev. The above udev rule needs an extra line at the end:

ACTION=="add", SUBSYSTEM=="subsystem", KERNEL=="iucv", RUN+="/sbin/modprobe netiucv"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="netiucv", ATTR{connection}="LINUX001"
ACTION=="add", SUBSYSTEM=="net", KERNEL=="iucv*", SUBSYSTEMS=="iucv", ATTRS{user}=="LINUX001", NAME="iucv36"

After this, the netiucv0/user is still LINUX001. In addition, an extra iucv36 device is configured like this:

# cat /sys/devices/iucv/netiucv0/net/iucv36/device/user
LINUX001
And now the iucv36 device, as it is found in /proc/net/dev and configured with ifcfg-iucv36, really uses LINUX001 as its pointopoint partner. For the sake of completeness, here is the configuration as it is found in /etc/sysconfig/network/:

# cat ifcfg-iucv36

Note that it is possible to use the same IPADDR for all of the configured IUCV interfaces. Only the pointopoint partners that are configured with REMOTE_IPADDR must have their own unique addresses.
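
For illustration, such an ifcfg-iucv36 might look like the following; the addresses are placeholders, not the ones from the original setup:

```
# cat ifcfg-iucv36
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.100.1'
REMOTE_IPADDR='192.168.100.36'
```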

When configuring the partner User, IPADDR and REMOTE_IPADDR are swapped of course.

Posted in Mainframe, Networking, udev, z Linux | Leave a comment

DRBD and Network Restarts

Using drbd as a simple and reliable way to provide distributed block devices is quite common. Especially in primary/primary mode, it provides the possibility to host block devices for virtual machines on two different hosts.

However, there is one annoyance that any active user stumbles over at some point. After a network restart, it may happen that the devices switch to standalone mode and do not even try to reconnect to their peer. The reason is that during a network restart, the device is shut down for a short time. DRBD itself has no means to wait for hotplugged devices and thus just cuts the network connection in that case.

I know of two methods to solve that issue on the operating system side.

  1. Create a script in /etc/sysconfig/network/scripts/ifup.d that contains the necessary code to reconnect the drbd device to its peer
  2. The easiest way I know about is to switch the network interface to startmode nfsroot.

To accomplish this, edit the configuration file of the device that is used to connect to the peer, e.g. /etc/sysconfig/network/ifcfg-eth0, and change the line

STARTMODE='auto'

to

STARTMODE='nfsroot'
This changes the behavior of the networking scripts so that they do not shut down the interface during a network stop or restart event. However, when using this method, I would still recommend monitoring the connection state of drbd in /proc/drbd.
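
For completeness, the first method (a reconnect script in ifup.d) could be sketched like this; the file name and the use of “drbdadm connect all” instead of naming a specific resource are assumptions:

```
# cat /etc/sysconfig/network/scripts/ifup.d/50-drbd-reconnect
#!/bin/bash
# If any DRBD resource fell back to StandAlone while the
# interface was down, ask it to reconnect to its peer.
if grep -q StandAlone /proc/drbd 2>/dev/null; then
    drbdadm connect all
fi
```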

Posted in block devices, DRBD, Networking | 1 Comment

Xen or KVM

For a little more than half a year, I have been in the process of installing a new virtualization platform. One of the hardest decisions was whether we should use Xen or go with KVM. We already have Xen in production, and I know that it works well. From KVM we expect that it will grow faster than Xen and be the right choice in the long run.

The machines that I have as hosts are quite powerful: 48-core AMD Opterons with 256 GByte of memory and FCoE-based storage devices for the guests. We are using a converged network where both FC and Ethernet go over the same redundant 10GBit ethernet line. Storage is external FC storage from different devices.

The most important features that we need for such a platform are these:

  • Stability
  • Performance
  • Tools

After doing a number of tests, it is obvious that both systems are stable. I did not encounter crashes related to the hypervisor technology.

Performance is also an interesting point. Especially the speed of block and network devices is not the best when using virtualized guests. This holds true for both KVM and Xen. Note that comparing CPU or memory performance in standard environments is not very useful: even if one of the systems performs slightly better, both are very close to hardware speed in terms of CPU and memory. However, outbound connectivity is an issue for both.

One exception is when you invest some more effort and use the new NUMA features provided with the latest Xen. The IO performance of network devices was roughly 4 times the performance without using NUMA.

One of the drawbacks of using NUMA on Xen is that you have to use the tool “xl” instead of “xm”. For some unknown reason, you can dump configurations from “xl” only in SXP format, but “xl” won’t let you start a guest from such a configuration. This renders the tool quite useless in a production environment.

This brings me to Tools. For me, Xen has the tools that are easier to operate than KVM’s. Especially the live migration syntax is way easier on Xen. On the other hand, both are simple enough to be operated by experienced people. For those who do not like the command line, “libvirt” offers a number of graphical tools that can cope with both Xen and KVM.

One thing to mention is that with Xen you can enable a locking mechanism that prevents you from running the same guest on different hosts. I have yet to find similar functionality in KVM.

Now let me add some words about issues I encountered. As I already said, we have Xen running in production and it works quite well. I also found the Xen developers to be relatively responsive when a bug occurs. From my other blog entries you can see that Xen also offers a number of debugging capabilities.

With KVM, there are two major issues I have right now:

  • Live migrations are not safe in KVM. I repeatedly encountered block device corruptions when doing live migrations. This also holds true when using “cache=none” in the qemu configuration. Simple migrations still work without problems.
  • The networking inside a 10GBit environment behaves strangely. When connecting a guest to a remote server, I get connection speeds of about 30-40kByte/s. All the connections between the respective hops in this environment work as expected (Guest -> Host, Host -> Server).


Both KVM and Xen are usable if you do not need live migrations. OTOH, live migration is an essential feature in a production environment: it enables you to service a host without taking down the guests. If the live migration feature is not fixed by SLES11-SP2, I will have to return to Xen.

For the moment, KVM is not on par with Xen. However, in the long run I expect that KVM will gain momentum and eventually become the platform of choice. If I had to select a platform for a critical business environment today, I would go with Xen. In the long run, it might be better to go with KVM, but this depends on its further development.

The major development areas that will influence my decisions in future will be

  • IO Speed
  • Support of NUMA architectures
  • Support for HA features like “Remus” or “Kemari”

The race is still open…

Posted in block devices, KVM, Networking, openSUSE, Xen | 1 Comment