Online Enabling new DASD from a Linux LPAR on System z

When running a mainframe, there are various reasons to add new control units and DASD devices. Assuming that the new hardware configuration has already been added to the system, you will find that a Linux LPAR that is already running simply does not see any of the changes.

For example, if you have a configuration like the following in the IO configuration:

 CNTLUNIT CUNUMBR=2700,PATH=((CSS(0),41,43,4A,4D,50,51),               *
               UNITADD=((00,256)),CUADD=6,UNIT=2107
 IODEVICE ADDRESS=(2700,224),CUNUMBR=(2700),STADET=Y,UNIT=3390B,       *
               UNITADD=00
 IODEVICE ADDRESS=(27E0,32),CUNUMBR=(2700),STADET=Y,UNIT=3390A,        *
               UNITADD=E0

The configuration on DS8000 with dscli would look like the following:

dscli> lslcu
Date/Time: July 17, 2013 2:50:45 PM CEST IBM DSCLI Version: 5.4.36.107 DS: IBM.XXXX-XXXXXXX
ID Group addrgrp confgvols subsys conbasetype
=============================================
...
06     0 0              36 0x0004 3990-6
...

Several disks and alias devices have already been configured on logical control unit 6 of the DS8000. The alias devices are needed for the HyperPAV feature of the DS8000:

dscli> lsckdvol -lcu 06
Date/Time: July 17, 2013 2:56:25 PM CEST IBM DSCLI Version: 5.4.36.107 DS: IBM.XXXX-XXXXXXX
Name ID   accstate datastate configstate deviceMTM voltype   orgbvols extpool cap (cyl)
=======================================================================================
-    0600 Online   Normal    Normal      3390-9    CKD Base  -        P12         27825
-    0601 Online   Normal    Normal      3390-9    CKD Base  -        P14         27825
-    0602 Online   Normal    Normal      3390-3    CKD Base  -        P12          3339
-    0603 Online   Normal    Normal      3390-3    CKD Base  -        P14          3339
-    0604 Online   Normal    Normal      3390-9    CKD Base  -        P12         10017
-    06E0 -        -         -           -         CKD Alias 0600     -               0
...
-    06FF -        -         -           -         CKD Alias 0600     -               0

When using z/VM, the only thing needed to activate the devices is a vary online 2700-2704 27E0-27FF. From Linux running in LPAR mode, however, there is no such command, and even after activating the devices from z/VM they would not be visible inside the Linux LPAR. You can check this with lscss | grep '0\.0\.2700'.

The way to make the devices available without rebooting Linux is to vary online one of the CHPIDs through which they are reachable, even though it is already online. Looking at the IOCDS, there are six CHPIDs defined for this control unit: 41, 43, 4A, 4D, 50, 51. In our case they are shared across all DASD devices and also used for other device ranges, so they are already online. This can be seen with the following command:

# lscss | grep 0.0.2600
0.0.2600 0.0.01e6 3390/0c 3990/e9 fc fc ff 41434a4d 50510000

The hex digits at the end represent the CHPIDs in use. To vary the CHPID with number 41 online, use the following command:

# chchp -v 1 41
Vary online 0.41... done.

After this, the available disks can be checked again:

# lscss | grep '0\.0\.27'
0.0.2700 0.0.02e6 3390/0c 3990/e9 fc fc 2b 41434a4d 50510000
0.0.2701 0.0.02e7 3390/0c 3990/e9 fc fc 13 41434a4d 50510000
0.0.2702 0.0.02e8 3390/0a 3990/e9 fc fc 07 41434a4d 50510000
0.0.2703 0.0.02e9 3390/0a 3990/e9 fc fc 83 41434a4d 50510000
0.0.2704 0.0.02ea 3390/0c 3990/e9 fc fc 43 41434a4d 50510000

Now the disks on control unit 2700 are also visible in this LPAR. From this point on, it is easy to configure the disks for Linux with yast2 dasd or the command-line utility dasd_configure.
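For example, to set the first of the new base devices online persistently, dasd_configure takes the channel ID, an online flag and a DIAG flag (a minimal sketch; 0.0.2700 is the base device from the example above):

dasd_configure 0.0.2700 1 0

Alternatively, chccwdev -e 0.0.2700 from s390-tools sets the device online for the running system only, and lsdasd shows whether it came up as expected.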


Port Forwarding with xinetd

In some network environments, for example where administration LANs or other private LANs are deployed, it may still be necessary to reach a specific port of a machine inside such a LAN from the outside. Commonly, you would have to log in to a jump host and only from there would you be able to reach the respective machine.

In our case, we had to reach the management port of a switch in a private LAN. For example:

  • the private LAN has the IP address range 192.168.10.0/24
  • the switch is configured with 192.168.10.254 and its management port is 80
  • the jump host with access to both networks has the external address 10.10.10.1

To reach the switch directly via 10.10.10.1 on port 81, you can configure xinetd on the jump host as follows:

# cat /etc/xinetd.d/http-switch
service http-switch
{
 disable = no
 type = UNLISTED
 socket_type = stream
 protocol = tcp
 wait = no
 redirect = 192.168.10.254 80
 bind = 10.10.10.1
 port = 81
 user = nobody
}

Enable xinetd at boot and restart it (or reload it if it is already running):

chkconfig xinetd on
rcxinetd restart

Afterwards, you can reach the switch by pointing your browser to http://10.10.10.1:81.

The same principle can also be used to forward other ports, e.g. the SSH ports of machines inside the private LAN; a sketch of such a configuration follows below.
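As a minimal sketch, assuming an internal machine 192.168.10.10 whose SSH port should become reachable on port 2222 of the jump host, the corresponding xinetd service definition looks almost identical:

# cat /etc/xinetd.d/ssh-internal
service ssh-internal
{
 disable = no
 type = UNLISTED
 socket_type = stream
 protocol = tcp
 wait = no
 redirect = 192.168.10.10 22
 bind = 10.10.10.1
 port = 2222
 user = nobody
}

After another rcxinetd restart, ssh -p 2222 <user>@10.10.10.1 ends up on the internal machine.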


libvirt: chardev: opening backend “pty” failed: Permission denied

Recently I found myself facing a strange problem that prevented me from creating new virtual machines with libvirt on KVM. Every time I tried to create a virtual machine, I got a message similar to this:

Error: internal error Process exited while reading console log output: chardev: opening backend "pty" failed: Permission denied

Interestingly, directly after a reboot of the host the same guest configuration would simply work. I searched the internet and found that only a few other people had run into the same problem, but I could not find a solution.

After tracing libvirtd and pestering some of my colleagues, I found that it could not access /dev/pts correctly. It turned out that a change-root environment also mounted /dev/pts, but not with the right mount options. As a side effect, the original /dev/pts was remounted with the wrong options as well.

So, to solve this issue, you need to

  1. find out who is mounting /dev/pts with the wrong options and correct it (a quick way to inspect the current options is shown below)
  2. remount /dev/pts correctly
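A quick way to check the options /dev/pts is currently mounted with is to look at /proc/mounts; on a correctly configured system the devpts entry contains gid=5 and mode=620:

grep devpts /proc/mounts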

The remount can be done with the following command:

mount -n -t devpts -o remount,mode=0620,gid=5 devpts /dev/pts

After this, libvirtd will again be able to access the device and work as desired.


Persistent IUCV Network Devices

On mainframes, the Inter-User Communication Vehicle (IUCV) provides a means to exchange data between two guests under z/VM. Some time ago this was one of the preferred networking methods between two Users (virtual machines) on z/VM. From a Linux perspective, IUCV is no longer a supported networking method, although it actually still works quite nicely.

Set up as a point-to-point connection, IUCV needs a dedicated machine that acts as the point-to-point partner and, if needed, routes to the rest of the network. With Linux it is quite easy to set up IUCV interfaces. The problems arise when you have more than one IUCV interface and must make sure that the IP configuration in /etc/sysconfig/network/ifcfg-iucv* matches the correct User.

In SLES11 and later, the hardware configuration of IUCV interfaces is done with udev rules. For each available connection there is a separate rules file below /etc/udev/rules.d. By default, such a rules file looks like this:

# cat /etc/udev/rules.d/51-iucv-LINUX001.rules
ACTION=="add", SUBSYSTEM=="subsystem", KERNEL=="iucv", RUN+="/sbin/modprobe netiucv"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="netiucv", ATTR{connection}="LINUX001"

The issue is that during network device setup, all devices are simply set up in the order they are found. This is not persistent and commonly results in connecting a User (virtual machine) with the IP address of a completely different machine. In the end, networking is simply broken.

For example, looking at the user attribute of netiucv0, the following is found:

# cat /sys/devices/iucv/netiucv0/user
LINUX001

However, the actual interface is configured in /etc/sysconfig/network/ifcfg-iucv36. To solve this, the network interface belonging to this User must always get the same name. This is a task for udev: the rules file shown above needs an extra line at the end:

ACTION=="add", SUBSYSTEM=="subsystem", KERNEL=="iucv", RUN+="/sbin/modprobe netiucv"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="netiucv", ATTR{connection}="LINUX001"
ACTION=="add", SUBSYSTEM=="net", KERNEL=="iucv*", SUBSYSTEMS=="iucv", ATTRS{user}=="LINUX001", NAME="iucv36"

After this, netiucv0/user still reads LINUX001, but in addition the corresponding network interface is now named iucv36:

# cat /sys/devices/iucv/netiucv0/net/iucv36/device/user
LINUX001

and now the iucv36 device, as it appears in /proc/net/dev and is configured via ifcfg-iucv36, really uses LINUX001 as its point-to-point partner. For the sake of completeness, here is the configuration as found in /etc/sysconfig/network/:

# cat ifcfg-iucv36
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.0.127/24'
MTU=''
NAME=''
NETWORK=''
REMOTE_IPADDR='192.168.0.67'
STARTMODE='auto'
USERCONTROL='no'

Note that it is possible to use the same IPADDR for all configured IUCV interfaces. Only the point-to-point partners configured with REMOTE_IPADDR must have their own unique addresses.

When configuring the partner User, IPADDR and REMOTE_IPADDR are of course swapped.
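As a minimal sketch, the matching configuration on the partner User would then look like this (the interface name iucv0 on that side is only an assumption; the addresses are the ones from above with IPADDR and REMOTE_IPADDR exchanged):

# cat ifcfg-iucv0
BOOTPROTO='static'
IPADDR='192.168.0.67/24'
REMOTE_IPADDR='192.168.0.127'
STARTMODE='auto'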


DRBD and Network Restarts

Using DRBD as a simple and reliable distributed block device is quite common. Especially in primary/primary mode, it makes it possible to host the block devices of virtual machines on two different hosts.

However, there is one annoyance that any active user stumbles over at some point. After a network restart it may happen that the devices switch to standalone mode and do not even try to reconnect to their peer. The reason is that during a network restart the interface is shut down for a short time. DRBD itself has no means to wait for hotplugged interfaces and thus simply cuts the network connection in that case.

I know of two methods to solve that issue on the operating system side.

  1. Create a script in /etc/sysconfig/network/scripts/ifup.d that contains the necessary code to reconnect the DRBD device to its peer (a sketch is shown below).
  2. Switch the network interface to start mode nfsroot; this is the easiest way I know of.
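For the first option, a minimal sketch of such a script could look like the following; the resource name r0 and the list of resources are assumptions that need to be adapted to the local setup:

#!/bin/sh
# Reconnect DRBD resources that have fallen back to StandAlone
# after the replication interface came up again.
RESOURCES="r0"
for res in $RESOURCES; do
    if [ "$(drbdadm cstate $res)" = "StandAlone" ]; then
        drbdadm connect $res
    fi
done

The script has to be made executable so that it is run when the interface comes up.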

To use the second option, edit the configuration file of the interface that is used to connect to the peer, e.g. /etc/sysconfig/network/ifcfg-eth0, and change the line

STARTMODE='<auto|manual|onboot>'

to

STARTMODE='nfsroot'

This changes the behavior of the networking scripts so that the interface is not shut down during a network stop or restart. However, even when using this method I would still recommend monitoring the connection state of DRBD in /proc/drbd.
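A quick way to keep an eye on the connection state is to extract the cs: field from /proc/drbd; with DRBD 8.x it should read Connected for every resource:

grep -o 'cs:[A-Za-z]*' /proc/drbd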


Xen or KVM

For a little more than half a year now, I have been in the process of setting up a new virtualization platform. One of the hardest decisions to make was whether we should use Xen or go with KVM. We already have Xen in production and I know that it works well. From KVM we expect that it will grow faster than Xen and be the right choice in the long run.

The machines that I have as hosts are quite powerful: 48-core AMD Opteron systems with 256 GByte of memory and FCoE-based storage devices for the guests. We use a converged network where both FC and Ethernet go over the same redundant 10 GBit Ethernet links. Storage is external FC storage from different devices.

The most important features that we need for such a platform are these:

  • Stability
  • Performance
  • Tools

After a number of tests, it is obvious that both systems are stable. I did not encounter any crashes related to the hypervisor technology.

Performance is also an interesting point. Especially the speed of block and network devices is not the best in virtualized guests, and this holds true for both KVM and Xen. Note that comparing CPU or memory performance in standard environments is not very useful: even if one of the systems performs slightly better, both are very close to hardware speed in terms of CPU and memory. Outbound connectivity, however, is an issue for both.

One exception is when you invest some more effort and use the new NUMA features provided with the latest Xen. The I/O performance of network devices was then roughly four times the performance without NUMA.

One of the drawbacks of using NUMA on Xen is that you have to use the tool "xl" instead of "xm". For some unknown reason, "xl" can dump configurations only in SXP format, yet it will not start a guest from such a configuration. This renders the tool quite useless in a production environment.

This brings me to tools. For me, Xen has tools that are easier to operate than those of KVM. Especially the live migration syntax is much simpler on Xen. On the other hand, both are simple enough to be operated by experienced people. For those who do not like the command line, libvirt offers a number of graphical tools that can cope with both Xen and KVM.

One thing to mention is that Xen lets you enable a locking mechanism that prevents you from running the same guest on different hosts. I have yet to find similar functionality in KVM.

Now let me add some words about issues I encountered. As I already mentioned, we have Xen running in production and it works quite well. I also found the Xen developers to be relatively responsive when a bug occurs. From my other blog entries you can see that Xen also offers a number of debugging capabilities.

With KVM, there are two major issues I have right now:

  • Live migrations are not safe in KVM. I repeatedly encountered block device corruption when doing live migrations. This also holds true when using "cache=none" in the qemu configuration. Simple migrations still work without problems.
  • The networking inside a 10 GBit environment behaves strangely. When connecting a guest to a remote server I get connection speeds of about 30-40 kByte/s. All the connections between the respective hops in this environment work as expected (guest -> host, host -> server).

Summary:

Both KVM and Xen are usable if you do not need live migration. On the other hand, live migration is an essential feature in a production environment: it enables you to service a host without taking down the guests. If the live migration issue is not fixed by SLES11 SP2, I will have to return to Xen.

For the moment, KVM is not on par with Xen. However, in the long run I expect that KVM will gain momentum and eventually be the platform of choice. If I had to select a platform for a critical business environment today, I would go with Xen. In the long run it might be better to go with KVM, but that depends on its further development.

The major development areas that will influence my decision in the future are:

  • IO Speed
  • Support of NUMA architectures
  • Support for HA features like “Remus” or “Kemari”

The race is still open…


Migrating a Xen VM to KVM on openSUSE

Xen and KVM are the two major virtualization technologies that are freely available on Linux. Although they are quite comparable performance-wise, it can still be interesting to convert a Xen virtual machine into a KVM virtual machine.

Xen and KVM both use very similar images. However, there are some subtle differences in the setup:

  1. Xen block devices use the names "xvd?" where KVM uses "vd?".
  2. The serial device in Xen is "xvc0" while on KVM it is "ttyS0".
  3. Xen does not use the bootloader from the image but directly accesses the boot directory, while KVM really uses the boot manager.
  4. The modules that are needed for the block devices are different.
  5. Although virsh supports both Xen and KVM, the XML configuration is still somewhat different.

The easiest way would be to install the necessary packages and make the needed modifications on the running Xen guest. However, if you no longer have your Xen host, that is not an option. Therefore, let's do the migration of an image directly on the KVM host.

First, make the image accessible with “kpartx”. To do this run the command

> kpartx -a disk0.raw -v
add map loop0p1 (253:1): 0 319488 linear /dev/loop0 2048
add map loop0p2 (253:2): 0 16435200 linear /dev/loop0 321536

Now, determine which one is a real file system:

> lsblk -f /dev/mapper/loop0p?
NAME           FSTYPE LABEL MOUNTPOINT
loop0p1 (dm-1) swap
loop0p2 (dm-2) ext3

Obviously the device "/dev/mapper/loop0p2" is the root file system that we need to access. Let's mount it and add the needed device nodes:

mount /dev/mapper/loop0p2 /mnt
mount -o bind /dev /mnt/dev

Now copy the needed kernel packages into the file system and chroot into it:

cp kernel-default.rpm kernel-default-base.rpm /mnt/tmp
chroot /mnt
mount -t sysfs sysfs /sys
mount -t proc proc /proc

Next, update several configuration files:

  1. /etc/inittab : comment out the line starting with S0 and containing xvc0
  2. /etc/inittab : uncomment the line starting with S0 and containing ttyS0; change the speed to 115200 if needed
  3. /etc/securetty : remove xvc0 and add ttyS0
  4. /etc/sysconfig/kernel : remove the modules starting with xen from "INITRD_MODULES" and add "virtio_blk virtio" instead
  5. /etc/fstab : remove the "x" from "/dev/xvda" (and from any other affected block devices)
  6. /boot/grub/device.map : change "/dev/xvda" to "/dev/vda"
  7. /boot/grub/menu.lst : comment out the line starting with gfxmenu
  8. /boot/grub/menu.lst : change the kernel and initrd lines to refer to the kernel starting with "vmlinuz" and the default initrd as available in "/boot"
  9. /boot/grub/menu.lst : fix the kernel parameters to contain the right root and console device, similar to "root=/dev/vda2 console=ttyS0" (a sketch of the resulting entry follows below)
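As a rough sketch, the resulting menu.lst entry after steps 7-9 could look like this; title and partition numbers depend on the image (here the root file system is the second partition, matching loop0p2 from above):

title Linux (KVM)
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/vda2 console=ttyS0
    initrd /boot/initrd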

Now, it is time to install the kernel:

rpm -Uhv /tmp/kernel-default.rpm /tmp/kernel-default-base.rpm

The only remaining task is running "mkinitrd". Some error messages about the right root device not being available will show up, which is expected, but the command will commonly work anyway.

To finish the work on the image, only some cleanup is needed:

  1. umount /sys
  2. umount /proc
  3. exit
  4. umount /mnt/dev
  5. umount /mnt
  6. kpartx -d disk0.raw

To start the image, the easiest way is to use "vm-install" and select the option for activating an existing image ("I have a disk or disk image …"). If it is just for testing, you can also use a command like this:

qemu-kvm \
-drive file=/kvm/images/disk0.raw,id=root,if=virtio \
-m 1024M -nographic

This should bring up your previous Xen image on a KVM machine.
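If the guest should later be defined permanently through libvirt instead of being started by hand, the disk section of the domain XML would look roughly like the following sketch (paths taken from the example above; the rest of the domain definition is omitted):

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/kvm/images/disk0.raw'/>
  <target dev='vda' bus='virtio'/>
</disk>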
