Multiple Linux Consoles in z/VM

The standard way to access a Linux guest on z/VM is its 3215 line-mode console, reached through a 3270 terminal session. On Linux, the x3270 package provides a free emulator for such terminals.

One of the features of z/VM is that you can define several consoles for a guest. This is very helpful if problems with a guest affect its network connectivity. With z/VM you can even define multiple consoles that allow direct logon to the running guest.

By default, only one terminal is defined for z/VM guests. To define three additional 3270 consoles on a guest at the addresses 0020-0022, use the following commands:

cp define graf 20
cp define graf 21
cp define graf 22

These consoles can also be created online from Linux, provided that you have sufficient privileges on the guest. To issue CP commands from Linux, use the command vmcp instead of cp.
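
For example (a sketch, assuming the vmcp kernel module is loaded on the guest), the same definitions could be issued from the running Linux system like this:

# vmcp define graf 20
# vmcp define graf 21
# vmcp define graf 22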

With SLES 12, several additional steps are needed to activate these consoles.

First, the devices must be made available to the system. This is a two-step process:

  1. Remove the devices from the cio_ignore list with the command
     cio_ignore -r 0.0.0020-0.0.0022
  2. Add the devices to /boot/zipl/active_devices.txt to make this change persistent:
     # cat /boot/zipl/active_devices.txt
     ...
     0.0.0020-0.0.0022

The system automatically detects those devices. The corresponding serial devices appear as /dev/3270/tty[123]. Next, tell systemd to run a getty on these devices:

systemctl enable serial-getty@3270-tty1.service
systemctl enable serial-getty@3270-tty2.service
systemctl enable serial-getty@3270-tty3.service
systemctl start serial-getty@3270-tty1.service
systemctl start serial-getty@3270-tty2.service
systemctl start serial-getty@3270-tty3.service

To use the new consoles on a guest called LINUX065, point your 3270 terminal emulator at z/VM. Instead of logging on as a regular user, move the cursor to the COMMAND line and enter the following command:

dial linux065

You might have to press Enter once to make the logon prompt appear.

When trying to log on to this console as root, you will find that it is not permitted. The reason is that root logon is only allowed on terminals that are explicitly configured for it. The configuration file for this is /etc/securetty. Add the following lines to the end of this file:

3270/tty1
3270/tty2
3270/tty3

After this, you can log on to the Linux guest directly, without needing z/VM credentials.

If you want to avoid having to redefine the consoles after every logoff of the guest, add the definitions to the PROFILE EXEC A of the guest.
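
A minimal sketch of the relevant lines (assuming a REXX PROFILE EXEC already exists on the guest's 191/A disk):

/* PROFILE EXEC */
'CP DEFINE GRAF 0020'
'CP DEFINE GRAF 0021'
'CP DEFINE GRAF 0022'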

 


Online Enabling new DASD from a Linux LPAR on System z

When running a mainframe, there are many reasons why you might want to add new control units and DASD disk devices. Assuming that you have already added the hardware configuration to the system, you will find that an already running Linux LPAR simply does not see any of the changes.

For example, if you have a configuration like the following in the IO configuration:

 CNTLUNIT CUNUMBR=2700,PATH=((CSS(0),41,43,4A,4D,50,51),               *
               UNITADD=((00,256)),CUADD=6,UNIT=2107
 IODEVICE ADDRESS=(2700,224),CUNUMBR=(2700),STADET=Y,UNIT=3390B,       *
               UNITADD=00
 IODEVICE ADDRESS=(27E0,32),CUNUMBR=(2700),STADET=Y,UNIT=3390A,        *
               UNITADD=E0

The corresponding configuration on the DS8000, as shown by dscli, looks like this:

dscli> lslcu
Date/Time: July 17, 2013 2:50:45 PM CEST IBM DSCLI Version: 5.4.36.107 DS: IBM.XXXX-XXXXXXX
ID Group addrgrp confgvols subsys conbasetype
=============================================
...
06     0 0              36 0x0004 3990-6
...

Several disks and alias devices have already been configured on logical control unit 6 of the DS8000. The alias devices are needed for the HyperPAV feature of the DS8000:

dscli> lsckdvol -lcu 06
Date/Time: July 17, 2013 2:56:25 PM CEST IBM DSCLI Version: 5.4.36.107 DS: IBM.XXXX-XXXXXXX
Name ID   accstate datastate configstate deviceMTM voltype   orgbvols extpool cap (cyl)
=======================================================================================
-    0600 Online   Normal    Normal      3390-9    CKD Base  -        P12         27825
-    0601 Online   Normal    Normal      3390-9    CKD Base  -        P14         27825
-    0602 Online   Normal    Normal      3390-3    CKD Base  -        P12          3339
-    0603 Online   Normal    Normal      3390-3    CKD Base  -        P14          3339
-    0604 Online   Normal    Normal      3390-9    CKD Base  -        P12         10017
-    06E0 -        -         -           -         CKD Alias 0600     -               0
...
-    06FF -        -         -           -         CKD Alias 0600     -               0

When using z/VM, the only thing needed to activate the devices is a vary online 2700-2704 27E0-27FF. From Linux in an LPAR, however, no such command is available, and even after activating the devices in z/VM they would not be visible inside the Linux LPAR. You can check this with the command lscss | grep '0\.0\.2700'.

The solution to make the devices available without rebooting Linux is to vary one of the already-online CHPIDs online once more, which makes the channel subsystem re-examine the paths and detect the new devices. The IOCDS above shows six CHPIDs for the control unit: 41, 43, 4A, 4D, 50 and 51. In our case, these are shared by all DASD devices and are also used for other device ranges, so they are already online. This can be seen with the following command:

# lscss | grep 0.0.2600
0.0.2600 0.0.01e6 3390/0c 3990/e9 fc fc ff 41434a4d 50510000

The numbers at the end represent the CHPIDs in use. To vary the CHPID with number 41 online again, use the following command:

# chchp -v 1 41
Vary online 0.41... done.

After this, the available disks can be checked again:

# lscss | grep '0\.0\.27'
0.0.2700 0.0.02e6 3390/0c 3990/e9 fc fc 2b 41434a4d 50510000
0.0.2701 0.0.02e7 3390/0c 3990/e9 fc fc 13 41434a4d 50510000
0.0.2702 0.0.02e8 3390/0a 3990/e9 fc fc 07 41434a4d 50510000
0.0.2703 0.0.02e9 3390/0a 3990/e9 fc fc 83 41434a4d 50510000
0.0.2704 0.0.02ea 3390/0c 3990/e9 fc fc 43 41434a4d 50510000

Now the disks on control unit 2700 are also visible in this LPAR. From this point on, it is easy to configure the disks for Linux with yast2 dasd or the command-line utility dasd_configure.
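
For example, a sketch using dasd_configure to bring the first new base device online (the second argument is the online flag, the third selects DIAG access and is normally 0):

# dasd_configure 0.0.2700 1 0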


Port Forwarding with xinetd

In some network environments, for example where administration LANs or other private LANs are deployed, it might still be necessary to reach a specific port of a machine inside such a LAN from the outside. Commonly, you would have to log on to a jump host and from there you would be able to reach the respective machine.

In our case, we had to reach the management port of a switch in a private LAN. For example:

  • the private LAN has the IP address range 192.168.10.0/24
  • the switch is configured with 192.168.10.254 and its management port is 80
  • the jump host with access to both networks has the external address 10.10.10.1

To access the switch directly at address 10.10.10.1 with port 81, you can configure xinetd on the jump host with the following configuration:

# cat /etc/xinetd.d/http-switch
service http-switch
{
 disable = no
 type = UNLISTED
 socket_type = stream
 protocol = tcp
 wait = no
 redirect = 192.168.10.254 80
 bind = 10.10.10.1
 port = 81
 user = nobody
}

After enabling and restarting xinetd (or reloading it if it is already running), you can reach the switch by pointing your browser to http://10.10.10.1:81:

chkconfig xinetd on
rcxinetd restart

The same principle can also be used to forward, for example, the ssh ports of machines.
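
A sketch of such an ssh forwarding, with a hypothetical internal machine 192.168.10.50 and local port 2222:

# cat /etc/xinetd.d/ssh-internal
service ssh-internal
{
 disable = no
 type = UNLISTED
 socket_type = stream
 protocol = tcp
 wait = no
 redirect = 192.168.10.50 22
 bind = 10.10.10.1
 port = 2222
 user = nobody
}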


libvirt: chardev: opening backend “pty” failed: Permission denied

Recently I found myself in front of a strange problem that prevented me from creating new virtual machines with libvirt on KVM. Every time I tried to create a virtual machine, I got a message similar to this:

Error: internal error Process exited while reading console log output: chardev: opening backend "pty" failed: Permission denied

Interestingly, directly after a reboot of the host, the same guest configuration would simply work. I did some searching on the internet and found that only a few other people had the same problem, but I could not find a solution.

After tracing libvirtd and pestering some of my colleagues, I found that it actually could not access /dev/pts correctly. It turned out that a chroot environment also mounted /dev/pts, but not with the right mount options. This had the effect that the original /dev/pts was also remounted with the wrong options.

So, to solve this issue, you need to

  1. find out what is mounting /dev/pts with the wrong options and correct it (see the check below)
  2. remount /dev/pts correctly
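
To find out how /dev/pts is currently mounted, a quick check is the following (the gid and mode values should match those used in the remount command below):

# grep devpts /proc/mounts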

The remount can be done with the following command:

mount -n -t devpts -o remount,mode=0620,gid=5 devpts /dev/pts

After this, libvirtd can access the devices again and works as desired.


Persistent IUCV Network Devices

On mainframes, the Inter User Communication Vehicle (IUCV) provides a means to exchange data between two guests in z/VM. Some time ago, this was one of the preferred networking methods between two Users (virtual machines) on z/VM. From a Linux perspective, IUCV is no longer a supported networking method, although it actually still works quite nicely.

Set up as a point-to-point connection, IUCV needs a special machine that acts as the point-to-point partner and routes to the rest of the network if needed. With Linux, it is quite easy to set up IUCV interfaces. The problems arise when you have more than one IUCV interface and must make sure that the IP configuration in /etc/sysconfig/network/ifcfg-iucv* is set up for the correct User.

In SLES11 and later, the hardware configuration of IUCV interfaces is done with udev rules. For each available connection, there is an extra file with a ruleset below /etc/udev/rules.d. By default, such a rules file looks like this:

# cat /etc/udev/rules.d/51-iucv-LINUX001.rules
ACTION=="add", SUBSYSTEM=="subsystem", KERNEL=="iucv", RUN+="/sbin/modprobe netiucv"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="netiucv", ATTR{connection}="LINUX001"

The issue is that during network device setup, all of the devices are simply set up in the order they are found. This order is unfortunately not persistent and commonly results in connecting a User (virtual machine) with the IP address of a completely different machine. In the end, networking is simply broken.

For example, if you look at the netiucv0 user, the following is found:

# cat /sys/devices/iucv/netiucv0/user
LINUX001

However, the actual device is configured in /etc/sysconfig/network/ifcfg-iucv36. To solve this, a specific iucv device below the netiucv device must be configured. This is a task for udev. The above udev rule needs an extra line at the end:

ACTION=="add", SUBSYSTEM=="subsystem", KERNEL=="iucv", RUN+="/sbin/modprobe netiucv"
ACTION=="add", SUBSYSTEM=="drivers", KERNEL=="netiucv", ATTR{connection}="LINUX001"
ACTION=="add", SUBSYSTEM=="net", KERNEL=="iucv*", SUBSYSTEMS=="iucv", ATTRS{user}=="LINUX001", NAME="iucv36"

After this, netiucv0/user is still LINUX001, but the network interface below it is now named iucv36:

# cat /sys/devices/iucv/netiucv0/net/iucv36/device/user
LINUX001

Now the iucv36 device, as it appears in /proc/net/dev and is configured with ifcfg-iucv36, really uses LINUX001 as its point-to-point partner. For the sake of completeness, here is the configuration as found in /etc/sysconfig/network/:

# cat ifcfg-iucv36
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.0.127/24'
MTU=''
NAME=''
NETWORK=''
REMOTE_IPADDR='192.168.0.67'
STARTMODE='auto'
USERCONTROL='no'

Note that it is possible to use the same IPADDR for all of the configured IUCV interfaces. Only the point-to-point partners that are configured with REMOTE_IPADDR must have their own unique addresses.

When configuring the partner User, IPADDR and REMOTE_IPADDR are of course swapped.
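
As a sketch, the corresponding configuration on the partner side (the file name ifcfg-iucv0 is just an example) would then look like this:

# cat ifcfg-iucv0
BOOTPROTO='static'
IPADDR='192.168.0.67/24'
REMOTE_IPADDR='192.168.0.127'
STARTMODE='auto'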


DRBD and Network Restarts

Using DRBD as a simple and reliable way to provide distributed block devices is quite common. Especially in primary/primary mode, it makes it possible to host block devices for virtual machines on two different hosts.

However, there is one annoyance that any active user stumbles over at some point. After a network restart, it may happen that the devices switch to standalone mode and do not even try to reconnect to their peer. The reason is that during a network restart, the interface is shut down for a short time. DRBD itself has no means to wait for hotplugged devices and thus simply cuts the network connection in that case.

I know of two methods to solve that issue on the operating system side.

  1. Create a script in /etc/sysconfig/network/scripts/ifup.d that contains the necessary code to reconnect the DRBD devices to their peer (see the sketch after this list).
  2. Switch the network interface to start mode nfsroot, which is the easiest way I know of.
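
For the first option, a minimal sketch of such a hook script could look like the following; the script name and the use of "drbdadm connect all" are assumptions and need to be adapted to the distribution's if-up hook mechanism:

#!/bin/sh
# hypothetical hook script, e.g. /etc/sysconfig/network/scripts/ifup.d/drbd-reconnect
# try to re-establish all DRBD peer connections after the interface is up again
drbdadm connect all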

To use the second method, edit the configuration file of the device that is used to connect to the peer, e.g. /etc/sysconfig/network/ifcfg-eth0, and change the line

STARTMODE='<auto|manual|onboot>'

to

STARTMODE='nfsroot'

This changes the behavior of the networking scripts so that the interface is not shut down during a network stop or restart. However, when using this method, I would still recommend monitoring the connection state of DRBD in /proc/drbd.
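
A simple check, for example from a monitoring script, could look like this; both commands should report a Connected state for every resource:

# cat /proc/drbd
# drbdadm cstate all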


Xen or KVM

For a little more than half a year now, I have been in the process of setting up a new virtualization platform. One of the hardest decisions was whether we should stay with Xen or go with KVM. We already have Xen in production and I know that it works well. From KVM we expect that it will evolve faster than Xen and be the right choice in the long run.

The machines that I have as hosts are quite powerful: 48-core AMD Opterons with 256 GByte of memory and FCoE-based storage devices for the guests. We are using a converged network where both FC and Ethernet go over the same redundant 10 GBit Ethernet links. Storage is external FC storage from different devices.

The most important features that we need for such a platform are these:

  • Stability
  • Performance
  • Tools

After doing a number of tests, it is obvious that both systems are stable. I did not encounter crashes related to the hypervisor technology.

Performance is also an interesting point. Especially the speed of block and network devices is not the best when using virtualized guests. This holds true for both KVM and Xen. Note that comparing CPU or memory performance in standard environments is not very useful: even if one of the systems performs slightly better, both are very close to hardware speed in terms of CPU and memory. Outbound network connectivity, however, is an issue for both.

One exception is when you invest some more effort and use the new NUMA features provided with the latest Xen. The I/O performance of network devices was roughly four times that of a setup without NUMA.
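
I will not describe the complete NUMA setup here; as a sketch, one simple way to keep a guest's vCPUs on a single NUMA node is explicit CPU pinning in the xl guest configuration (the CPU numbers are just examples and depend on the host topology):

# excerpt from an xl guest configuration (example values for one NUMA node)
vcpus = 12
cpus = "0-11"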

One of the drawbacks of using NUMA on Xen is that you have to use the tool “xl” instead of “xm”. For some unknown reason, you can dump configurations from “xl” only in SXP format, but “xl” won’t let you start a guest from such a configuration. This renders the tool quite useless in a production environment.

This brings me to tools. For me, Xen has the tools that are easier to operate than those of KVM. Especially the live migration syntax is way easier on Xen. On the other hand, both are simple enough to be operated by experienced people. For those who do not like the command line, libvirt offers a number of graphical tools that can cope with both Xen and KVM.

One thing to mention is that with Xen you can enable a locking mechanism that prevents you from running the same guest on different hosts. I have yet to find similar functionality for KVM.

Now let me add some words about issues I encountered. As I already mentioned, we have Xen running in production and it works quite well. I also found the Xen developers to be relatively responsive when a bug occurs. From my other blog entries you can see that Xen also offers a number of debugging capabilities.

With KVM, there are two major issues I have right now:

  • Live migrations are not safe in KVM. I repeatedly encountered block device corruption when doing live migrations. This also holds true when using “cache=none” in the qemu configuration (see the snippet after this list). Simple migrations still work without problems.
  • The networking inside a 10 GBit environment behaves strangely. When connecting a guest to a remote server I get connection speeds of about 30-40 kByte/s. All the connections between the respective hops in this environment work as expected (Guest -> Host, Host -> Server).
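
For reference, cache=none is set in the disk definition of the libvirt guest; a minimal sketch of such a disk element (device path and target name are just examples):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg0/guest-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>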

Summary:

Both KVM and Xen are usable if you do not need live migration. On the other hand, live migration is an essential feature in a production environment: it enables you to service a host without taking down the guests. If the live migration feature is not fixed by SLES11-SP2, I will have to return to Xen.

For the moment, KVM is not on par with Xen. However, in the long run I expect that KVM will gain momentum and eventually become the platform of choice. If I had to select a platform for a critical business environment today, I would go with Xen. In the long run, it might be better to go with KVM, but this depends on its further development.

The major development areas that will influence my decisions in the future are

  • IO Speed
  • Support of NUMA architectures
  • Support for HA features like “Remus” or “Kemari”

The race is still open…
