5. Linux Tips & Howtos

Technical notes on Linux

Trying out a private cloud with IBM WAVE

WAVE feature sample

Using a product called IBM WAVE for z/VM, I built a simple private cloud environment on a LinuxONE server.

Please refer to the attached file, create your own Linux guests, and use them freely (^^;). Try out open source software (Docker, Spark, node.js, MariaDB, MongoDB, etc.), and it would be great to share usage impressions and performance test results here as well~

If you need anything else while using it, let me know and I will add it where possible.

 


Disk benchmark tool bonnie++

For a simple measurement of Linux disk performance, you can create a file with dd.

# dd if=/dev/zero of=/tmp/disk-test bs=1024 count=1024

Briefly: this reads from /dev/zero and creates a file named disk-test in /tmp, using a block size of 1024 bytes and 1024 blocks, which produces a 1 MB file. To make a larger file, adjust bs or count. I usually create a 1-2 GB file and use it to measure write speed.
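As a quick sanity check on the arithmetic, you can run the command and verify the resulting file size (stat -c %s is GNU coreutils; the path is just a scratch file):

```shell
# bs=1024 bytes x count=1024 blocks = 1 MiB (1048576 bytes).
dd if=/dev/zero of=/tmp/disk-test bs=1024 count=1024 2>/dev/null
stat -c %s /tmp/disk-test    # prints 1048576
rm -f /tmp/disk-test
```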

When you need more detailed and varied tests, you can use a dedicated disk benchmark program.

bonnie++ is a very old disk benchmark program. I used it for disk performance measurement about five years ago while running a BMT at a customer's request, and recently used it again because I was curious about the disk performance of a system. It is useful when you want to compare disk I/O performance between operating systems, the speed difference between internal and external disks, disk configuration/connection methods, or the effect of stripe-related options in an LVM setup.

The home page is http://www.coker.com.au/bonnie++/ and the source can be downloaded there.

Installation is by compiling from source, so gcc and g++ must be installed. Depending on the platform or distribution, additional packages or libraries may be required; watch the compile errors and add them as needed. My test environment was RHEL 5.8 on IBM System z.

– Installation

1. Upload the downloaded source archive to the server and extract it; a directory is created.

$ cd bonnie++-1.03
$ ls
bon_csv2html     bon_csv2txt.in  bonnie.8      bonnie++.spec     changelog.txt  copyright.txt  install.sh   readme.html    zcav.8
bon_csv2html.1   bon_file.cpp    bonnie++.8    bonnie++.spec.in  conf.h         credits.txt    Makefile     semaphore.cpp  zcav.cpp
bon_csv2html.in  bon_file.h      bonnie++.cpp  bon_suid.cpp      conf.h.in      debian         Makefile.in  semaphore.h
bon_csv2txt      bon_io.cpp      bonnie.h      bon_time.cpp      configure      forkit.cpp     port.h       sh.common
bon_csv2txt.1    bon_io.h        bonnie.h.in   bon_time.h        configure.in   forkit.h       port.h.in    sun

2. Run configure.

$ ./configure
checking for g++... g++
checking for C++ compiler default output... a.out
checking whether the C++ compiler works... yes

– output truncated –

3. Run make; make install to compile and install.

$ make
g++ -O2  -DNDEBUG -Wall -W -Wshadow -Wpointer-arith -Wwrite-strings -pedantic -ffor-scope   -c bon_io.cpp
g++ -O2  -DNDEBUG -Wall -W -Wshadow -Wpointer-arith -Wwrite-strings -pedantic -ffor-scope   -c bon_file.cpp
g++ -O2  -DNDEBUG -Wall -W -Wshadow -Wpointer-arith -Wwrite-strings -pedantic -ffor-scope   -c bon_time.cpp
In file included from /usr/lib/gcc/s390x-redhat-linux/4.1.2/../../../../include/c++/4.1.2/backward/algo.h:59,
from bon_time.cpp:22:

– output truncated –

$ make install
mkdir -p /app/bin /app/sbin
/usr/bin/install -c -s bonnie++ zcav zcav /app/sbin
/usr/bin/install -c bon_csv2html bon_csv2txt /app/bin
mkdir -p /app/man/man1 /app/man/man8
/usr/bin/install -c -m 644 bon_csv2html.1 bon_csv2txt.1 /app/man/man1
/usr/bin/install -c -m 644 bonnie++.8 zcav.8 zcav.8 /app/man/man8
/usr/bin/install: will not overwrite just-created `/app/man/man8/zcav.8' with `zcav.8'
make: *** [install] Error 1

In my case the install failed at the zcav man-page step (the install command lists zcav.8 twice), but since this has nothing to do with running the tool, I did not bother fixing it.

– Run

$ ./bonnie++ -d /tmp -s 1024:512 -n 100 -m `hostname` -r 512 -u0:0 > `hostname`.ECKD_result.txt

bonnie++ has many options. Those applied above: run the test against /tmp, with a total test file size of 1 GB and a 512-byte chunk size, 100 for the file count of the create tests, the hostname as the machine label in the report, 512 MB as the assumed memory size, and root (uid 0, gid 0) as the user to run the test as. There are many other options besides these.
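For reference, here is the same invocation with each option annotated (meanings as described above; see the bonnie++ man page for the full list):

```shell
# Annotated version of the benchmark invocation used above:
#   -d /tmp        directory to run the tests in
#   -s 1024:512    total test file size 1024 MB, chunk size 512 bytes
#   -n 100         file-count parameter for the create/delete tests
#   -m <name>      machine name printed in the report
#   -r 512         tell bonnie++ the machine has 512 MB of RAM
#   -u 0:0         run as uid 0, gid 0 (root)
./bonnie++ -d /tmp -s 1024:512 -n 100 -m "$(hostname)" -r 512 -u 0:0 \
    > "$(hostname)".ECKD_result.txt
```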

– Results

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine   Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
lnxtest     1G:512 24095  86 92748  69 132762  86 28328  91 398001  88 +++++ +++
                    ------Sequential Create------ --------Random Create--------
              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
  100 59403  92 +++++ +++ 61671  84 56627  85 +++++ +++ 70131  94
lnxtest,1G:512,24095,86,92748,69,132762,86,28328,91,398001,88,+++++,+++,100,59403,92,+++++,+++,61671,84,56627,85,+++++,+++,70131,94

The test results are printed as above: sequential and random I/O tests are run, and the result for each item is shown together with its CPU utilization. Fields shown as + are items whose results could not be reported, either because the test options were inappropriate or because the measurement was too inaccurate.
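The last line of the report repeats the results in CSV form, which is handy for scripting. For example, the sequential block write and read throughput (fields 5 and 11 in the bonnie++ 1.03 CSV layout) can be pulled out with awk:

```shell
# Extract sequential block write/read throughput (K/sec) from the
# CSV summary line that bonnie++ prints at the end of its report.
csv='lnxtest,1G:512,24095,86,92748,69,132762,86,28328,91,398001,88,+++++,+++,100,59403,92,+++++,+++,61671,84,56627,85,+++++,+++,70131,94'
echo "$csv" | awk -F, '{printf "block write: %s K/sec, block read: %s K/sec\n", $5, $11}'
# prints: block write: 92748 K/sec, block read: 398001 K/sec
```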

Installing and using the Linux Health Checker

After seeing the earlier post introducing the Linux Health Checker, I actually installed and tried it.

The test environment is RHEL 5.8 for System z.

The prebuilt install RPM failed with an error, so I installed the source RPM instead.
[root@lnxtest /tmp]$ rpm -ivh lnxhc-1.0-1.src.rpm

Proceed with the RPM rebuild.
[root@lnxtest /tmp]$ cd /usr/src/redhat/SPECS/
[root@lnxtest /usr/src/redhat/SPECS]$ ls
lnxhc.spec
[root@lnxtest /usr/src/redhat/SPECS]$ rpmbuild -ba lnxhc.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.42332
— output truncated —
+ exit 0
[root@lnxtest /usr/src/redhat/SPECS]$ cd /usr/src/redhat/RPMS/noarch/
[root@lnxtest /usr/src/redhat/RPMS/noarch]$ ls
lnxhc-1.0-1.noarch.rpm

Install using the generated RPM file.

[root@lnxtest /usr/src/redhat/RPMS/noarch]$ rpm -ivh lnxhc-1.0-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:lnxhc                  ########################################### [100%]

Now let's run it. It analyzes the system's problems and even kindly tells you how things should be configured. Used well, this could be quite handy.

The output is quite long.

[root@lnxtest /usr/src/redhat/RPMS/noarch]$ lnxhc run
Creating user directory '/root/.lnxhc'
Collecting system information
Changing user to 'root' for command '/sbin/multipath -ll'
Running checks (24 checks)
CHECK NAME HOST RESULT
============================================================================================================================================================
boot_zipl_update_required ………….. lnxtest SUCCESS
css_ccw_availability ………………. lnxtest SUCCESS
css_ccw_chpid …………………….. lnxtest SUCCESS
css_ccw_ignored_online …………….. lnxtest SUCCESS
css_ccw_no_driver …………………. lnxtest EXCEPTION-MED

>EXCEPTION css_ccw_no_driver.no_driver(medium)
One or more I/O devices are not associated with a device
driver: 0.0.0009

css_ccw_unused_devices …………….. lnxtest SUCCESS
dasd_zvm_nopav ……………………. lnxtest SUCCESS
fs_disk_usage …………………….. lnxtest SUCCESS
fs_inode_usage ……………………. lnxtest SUCCESS
init_runlevel …………………….. lnxtest SUCCESS
mm_oom_killer_triggered ……………. lnxtest SUCCESS
net_bond_dev_chpid ………………… lnxtest NOT APPLICABLE
net_hsi_tx_errors …………………. lnxtest NOT APPLICABLE
net_inbound_packets ……………….. lnxtest SUCCESS
net_qeth_buffercount ………………. lnxtest EXCEPTION-MED

>EXCEPTION net_qeth_buffercount.inefficient_buffercount(medium)
These network interfaces do not have the expected number of
buffers: eth0, eth1

ras_dump_on_panic …………………. lnxtest EXCEPTION-HIGH

>EXCEPTION ras_dump_on_panic.no_standalone(high)
The dump-on-panic function is not enabled

sec_non_root_uid_zero ……………… lnxtest SUCCESS
sec_services_insecure ……………… lnxtest SUCCESS
storage_invalid_multipath ………….. lnxtest SUCCESS
sys_sysctl_call_home ………………. lnxtest NOT APPLICABLE
sys_sysctl_panic ………………….. lnxtest SUCCESS
sys_sysinfo_cpu_cap ……………….. lnxtest NOT APPLICABLE
sys_tty_console_getty ……………… lnxtest SUCCESS
sys_tty_usage …………………….. lnxtest EXCEPTION-MED

>EXCEPTION sys_tty_usage.unused_ttys(medium)
These terminals are unused: /dev/ttyS, /dev/ttysclp
20 checks run, 4 exceptions found (use 'lnxhc run --replay -V' for details)
[root@lnxtest /usr/src/redhat/RPMS/noarch]$ lnxhc run --replay -V
Replaying check results generated on 2012-06-14 18:13:01
CHECK NAME HOST RESULT
============================================================================================================================================================
boot_zipl_update_required ………….. lnxtest SUCCESS
css_ccw_availability ………………. lnxtest SUCCESS
css_ccw_chpid …………………….. lnxtest SUCCESS
css_ccw_ignored_online …………….. lnxtest SUCCESS
css_ccw_no_driver …………………. lnxtest EXCEPTION-MED

>EXCEPTION css_ccw_no_driver.no_driver(medium)

SUMMARY
One or more I/O devices are not associated with a device
driver: 0.0.0009

EXPLANATION
One or more I/O devices cannot be used properly because they are not associated with a device driver.

Possible reasons for this problem are that the required device driver module has been unloaded, that an existing association between the device and the device driver has been removed, or that the device is not supported.

The following I/O devices are not associated with a device driver:

BUS ID    DevType  CU Type
0.0.0009  0000/00  3215/00

Each device has a device type and a control unit (CU) type. Each device driver provides a list of supported combinations of device type and CU type. Linux uses this information to associate devices with device drivers. The sysfs directories of devices with a device-driver association include a symbolic link "driver". This link points to the sysfs directory of the associated device driver.

To verify that an I/O device with bus ID <device_bus_id> is not associated with a device driver, confirm that there is no symbolic link "driver" in the following sysfs directory:

/sys/bus/ccw/devices/<device_bus_id>

SOLUTION
1. If the kernel module of the required device driver has been unloaded, load it again. For example, issue:

modprobe <module_name>

where <module_name> is the name of the required device driver module. You can use the "modinfo" command to find out which combinations of device type and CU type are supported by a device driver module.

2. Try to create the missing association of the I/O device with its device driver. For example, issue:

echo <device_bus_id> > /sys/bus/ccw/drivers/<module_name>/bind

Alternatively, try to create the association by issuing:

echo <device_bus_id> > /sys/bus/ccw/drivers_probe

3. Verify that the device is supported.

4. If you cannot establish an association between the I/O device and a device driver, contact your support organization.

REFERENCE
For information about supported devices, see:

– The release notes of your distribution
– The applicable version of "Device Drivers, Features, and Commands". You can find this publication at
http://www.ibm.com/developerworks/linux/linux390/documentation_dev.html

For information about investigating kernel modules, see the "modinfo" man page.

css_ccw_unused_devices …………….. lnxtest SUCCESS
dasd_zvm_nopav ……………………. lnxtest SUCCESS
fs_disk_usage …………………….. lnxtest SUCCESS
fs_inode_usage ……………………. lnxtest SUCCESS
init_runlevel …………………….. lnxtest SUCCESS
mm_oom_killer_triggered ……………. lnxtest SUCCESS
net_bond_dev_chpid ………………… lnxtest NOT APPLICABLE
net_hsi_tx_errors …………………. lnxtest NOT APPLICABLE
net_inbound_packets ……………….. lnxtest SUCCESS
net_qeth_buffercount ………………. lnxtest EXCEPTION-MED

>EXCEPTION net_qeth_buffercount.inefficient_buffercount(medium)

SUMMARY
These network interfaces do not have the expected number of
buffers: eth0, eth1

EXPLANATION
The number of buffers of one or more network interfaces diverts from the specified rule. The most suitable number of buffers for a particular interface depends on the available memory. To allow for memory constraints, many Linux distributions use a small number of buffers by default. On Linux instances with ample memory and a high traffic volume, this can lead to performance degradation, as incoming packets are dropped and have to be resent by the originator.

For the current main memory, 1.96 GB, interfaces should have 64 buffers.

The following interfaces have a different number of buffers:

Network    Current       Recommended
Interface  Buffer Count  Buffer Count
eth0       n/a           64
eth1       n/a           64

To find out if there are problems with the affected interfaces, check the output of the "ifconfig" command for errors and dropped packets.

Use the "lsqeth" command to confirm the current setting for the number of buffers. In the default command output, the buffer count is shown as the value for the "buffer_count" attribute. With the -p option, the output is in table format and the buffer count is shown in the "cnt" column.

SOLUTION
For each affected interface, change the number of buffers to 64.

To temporarily change the number of buffers on a running Linux instance, run commands of this form:

Take the interface offline (before doing so, make sure no critical task is using this interface):

# echo 0 > /sys/devices/qeth/<device_bus_id>/online

Change the buffer count:

# echo 64 > /sys/devices/qeth/<device_bus_id>/buffer_count

Bring the interface back online:

# echo 1 > /sys/devices/qeth/<device_bus_id>/online

where <device_bus_id> is the bus ID of the qeth group device that corresponds to the interface. In the "lsqeth" output, this is the first of the three listed bus IDs.

How to make this setting persistent across reboots depends on your distribution. Some distributions set the number through scripts located below /etc/sysconfig, other distributions use udev rules. For details, see the documentation that is provided with your distribution.

The suggested buffer size is derived from a general best-practice rule that is expressed by the "recommended_buffercount" check parameter, and that works well in many setups. If your current settings work to your satisfaction and you do not want to change them, you can adapt the "recommended_buffercount" parameter to your needs or omit this check to suppress further warnings in the future.

REFERENCE
For more information, see the section about inbound buffers in the qeth chapter of "Device Drivers, Features, and Commands". You can obtain this publication from

http://www.ibm.com/developerworks/linux/linux390/documentation_dev.html

ras_dump_on_panic …………………. lnxtest EXCEPTION-HIGH

>EXCEPTION ras_dump_on_panic.no_standalone(high)

SUMMARY
The dump-on-panic function is not enabled

EXPLANATION
Your Linux instance is not configured for dump-on-panic. Configure dump-on-panic to automatically create a dump if a kernel panic occurs.

SOLUTION
To configure dump-on-panic, complete these steps:

1. Plan and prepare your dump device.
2. Edit /etc/sysconfig/dumpconf and configure the dump-on-panic action. Possible actions are dump, dump_reipl, or vmcmd with a CP VMDUMP command.
3. Activate the dumpconf service with chkconfig and then start the service.

REFERENCE
See the dumpconf man page.

For more information about the dump tools available for Linux on System z, see "Using the Dump Tools". You can obtain this publication from

http://www.ibm.com/developerworks/linux/linux390/documentation_dev.html

sec_non_root_uid_zero ……………… lnxtest SUCCESS
sec_services_insecure ……………… lnxtest SUCCESS
storage_invalid_multipath ………….. lnxtest SUCCESS
sys_sysctl_call_home ………………. lnxtest NOT APPLICABLE
sys_sysctl_panic ………………….. lnxtest SUCCESS
sys_sysinfo_cpu_cap ……………….. lnxtest NOT APPLICABLE
sys_tty_console_getty ……………… lnxtest SUCCESS
sys_tty_usage …………………….. lnxtest EXCEPTION-MED

>EXCEPTION sys_tty_usage.unused_ttys(medium)

SUMMARY
These terminals are unused: /dev/ttyS, /dev/ttysclp

EXPLANATION
There are one or more unused terminal devices. Terminal devices are intended to provide a user interface to a Linux instance. Without an associated program, a terminal device does not serve this purpose.

These terminal devices are unused:

/dev/ttyS[2-3]
/dev/ttysclp0

To confirm that no program is configured for a terminal device, issue "ps -ef |grep <terminal>", where <terminal> specifies the terminal device node without the leading /dev/.

SOLUTION
Configure a getty program for each unused terminal. Depending on your distribution, you might have to create an inittab entry or an Upstart job. For details, see the documentation that is provided with your distribution.

If you want to accept unused terminals, add them to the "exclude_tty" check parameter to suppress this warning in the future.

REFERENCE
For general information about terminals, see "Device Drivers, Features, and Commands". You can obtain this publication from:

http://www.ibm.com/developerworks/linux/linux390/documentation_dev.html

For more specific information, see the documentation that is provided with your distribution. Also see the man page of the "ps" command.
Check results:         Exceptions:        Run-time:
SUCCESS........: 16    High.........: 1   Min per check.: 0.006s
EXCEPTION......:  4    Medium.......: 3   Max per check.: 0.044s
NOT APPLICABLE.:  4    Low..........: 0   Avg per check.: 0.022s
FAILED SYSINFO.:  0    Total........: 4   Total.........: 1.297s
FAILED CHKPROG.:  0
Total..........: 24

Linux Health Checker 1.0

There is a tool called the Linux Health Checker. As shown in the figure above, it diagnoses the state of the system and its configured values. Unlike monitoring, which checks the system against predefined thresholds and raises alerts, it advises on the configured values according to best practices before such an alert situation ever arises. Anyway, that is the kind of tool it is. I would have to use it in a real environment to get a better feel for it... ^^

There is a user's guide below; it is worth skimming through.

IBM Information Center: Linux Health Checker 1.0 User’s Guide

The Linux Health Checker can be downloaded from SourceForge.net.

SourceForge.net: LNXHC – Linux Health Checker

Installing Oracle 11gR2 RAC on Linux on System z

Late last year IBM wrote a document on installing and configuring Oracle 11gR2 RAC in an enterprise Linux server (Linux on System z) environment. It even includes screen captures, so by following along you should be able to complete a basic setup easily.

It would be nice to take requests from two or three people so they can install and test this.

Installing Oracle 11gR2 RAC on Linux on System z
-> Direct download: Installing Oracle 11gR2 RAC on Linux on System z

Oracle Real Application Clusters on Linux on IBM System z

A document has come out covering an Oracle RAC configuration on Linux on System z and the related network performance tuning. It builds an Oracle database in each of two LPARs, configures RAC over HiperSockets and shared disks, then drives load from a System x server and reports the test results. According to the content, OCFS2 as the file system seems to perform slightly better than an ASM configuration, but it uses more CPU.

See the document for the details~ 🙂

Go directly to the URL
Oracle Real Application Clusters on Linux on IBM System z: Set up and network performance tuning

Recovery of an LVM DASD error at zLinux IPL – Red Hat

If the root volume is on LVM and you add DASD dynamically but then, by mistake, IPL without updating fstab and modprobe.conf and without running zipl and mkinitrd, a DASD error occurs and the system fails to boot as shown below. Here is how to recover from this.

=====================================================

18:39:04 Scanning logical volumes
18:39:04   Reading all physical volumes.  This may take a while...
18:39:05   Couldn't find device with uuid
'Os3guQ-6mwh-w2JE-LM3n-3Nwy-QaGp-U6E4cd'.
18:39:05   Couldn't find all physical volumes for volume group
VolGroup00.
18:39:05   Couldn't find device with uuid
'Os3guQ-6mwh-w2JE-LM3n-3Nwy-QaGp-U6E4cd'.
18:39:05   Couldn't find all physical volumes for volume group
VolGroup00.
18:39:05   Couldn't find device with uuid
'Os3guQ-6mwh-w2JE-LM3n-3Nwy-QaGp-U6E4cd'.
18:39:05   Couldn't find all physical volumes for volume group
VolGroup00.
======================================================

* Recovery procedure

1. Log in to CMS, modify the PARM file and CONF file that were used at install time, and IPL in rescue mode.

* PARM file – add "rescue"

root=/dev/ram0 ro ip=off ramdisk_size=40000 rescue
CMSDASD=191  CMSCONFFILE=rhel5.conf

* CONF file – change the DASD addresses

HOSTNAME="FOOBAR.SYSTEMZ.EXAMPLE.COM"
DASD="150-159"

 

2. After the IPL, enter the same information as during installation, log in in rescue mode, correct the DASD entries in fstab and modprobe.conf, and then perform the following steps.

The recovery console session looks like the following:

login as: root
Welcome to the Red Hat Linux install environment 1.1 for zSeries

Running anaconda, the CentOS 4.5 rescue mode – please wait…

Your system is mounted under the /mnt/sysimage directory.
When finished please exit from the shell and your system will reboot.

-/bin/sh-3.00# vgscan

Reading all physical volumes.  This may take a while...
Found volume group "VGztpf" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2
-/bin/sh-3.00# vgchange -a y
1 logical volume(s) in volume group "VGztpf" now active
2 logical volume(s) in volume group "VolGroup00" now active

-/bin/sh-3.00# pvdisplay

— Physical volume —
PV Name               /dev/dasdd1
VG Name               VGztpf
PV Size               6.88 GB / not usable 3.11 MB
Allocatable           yes (but full)
PE Size (KByte)       4096
Total PE              1760
Free PE               0
Allocated PE          1760
PV UUID               27fl0g-kjxZ-VTij-eVvc-kfSM-MngN-0zrYNF

— Physical volume —
PV Name               /dev/dasdb1
VG Name               VolGroup00
PV Size               2.29 GB / not usable 11.64 MB
Allocatable           yes (but full)
PE Size (KByte)       32768
Total PE              73
Free PE               0
Allocated PE          73
PV UUID               CYsk0I-XLWq-Kv9d-WtiJ-FEgu-b2IA-nY3aKi

 

-/bin/sh-3.00# lvdisplay

— Logical volume —
LV Name                /dev/VGztpf/LVztpf
VG Name                VGztpf
LV UUID                eENhVR-eSRV-pFi6-GU2K-Gph8-h1es-tI2MyN
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                6.88 GB
Current LE             1760
Segments               1
Allocation             inherit
Read ahead sectors     0
Block device           253:0

— Logical volume —
LV Name                /dev/VolGroup00/LogVol00
VG Name                VolGroup00
LV UUID                9u3PP2-LwdK-XA58-rQPO-3kLs-K73H-NSIEDj
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                10.84 GB
Current LE             347
Segments               4
Allocation             inherit
Read ahead sectors     0
Block device           253:1

 

-/bin/sh-3.00# mkdir /mnt/sysimage2
-/bin/sh-3.00# mount /dev/VolGroup00/LogVol00 /mnt/sysimage2
-/bin/sh-3.00# mount /dev/dasda1 /mnt/sysimage2/boot
-/bin/sh-3.00# mount /dev/VGztpf/LVztpf /mnt/sysimage2/ztpf
-/bin/sh-3.00# chroot /mnt/sysimage2 /bin/bash
bash-3.00# /sbin/mkinitrd -v -f /boot/initrd-`uname -r`.img `uname -r`

Creating initramfs
Looking for deps of module ide-disk
Looking for deps of module ext2
Looking for deps of module dm-mod
Looking for deps of module dm-mirror     dm-mod
Looking for deps of module dm-mod
Looking for deps of module dm-zero       dm-mod
Looking for deps of module dm-mod
Looking for deps of module dm-snapshot   dm-mod
Looking for deps of module dm-mod
Using modules:  ./kernel/drivers/md/dm-mod.ko ./kernel/drivers/md/dm-mirror.ko
./kernel/drivers/md/dm-zero.ko ./kernel/drivers/md/dm-snapshot.ko
/sbin/nash -> /tmp/initrd.qAk813/bin/nash
/sbin/insmod.static -> /tmp/initrd.qAk813/bin/insmod
/sbin/udev.static -> /tmp/initrd.qAk813/sbin/udev
/etc/udev/udev.conf -> /tmp/initrd.qAk813/etc/udev/udev.conf
copy from /lib/modules/2.6.9-55.EL/./kernel/drivers/md/dm-mod.ko(elf64-s390)
to /tmp/initrd.qAk813/lib/dm-mod.ko(elf64-s390)
copy from
/lib/modules/2.6.9-55.EL/./kernel/drivers/md/dm-mirror.ko(elf64-s390) to
/tmp/initrd.qAk813/lib/dm-mirror.ko(elf64-s390)
copy from /lib/modules/2.6.9-55.EL/./kernel/drivers/md/dm-zero.ko(elf64-s390)
to /tmp/initrd.qAk813/lib/dm-zero.ko(elf64-s390)
copy from
/lib/modules/2.6.9-55.EL/./kernel/drivers/md/dm-snapshot.ko(elf64-s390) to
/tmp/initrd.qAk813/lib/dm-snapshot.ko(elf64-s390)
/sbin/lvm.static -> /tmp/initrd.qAk813/bin/lvm
/etc/lvm -> /tmp/initrd.qAk813/etc/lvm
`/etc/lvm/lvm.conf' -> `/tmp/initrd.qAk813/etc/lvm/lvm.conf'
Loading module dm-mod
Loading module dm-mirror
Loading module dm-zero
Loading module dm-snapshot

bash-3.00# /sbin/zipl -V

Using config file '/etc/zipl.conf'
Target device information
Device..........................: 5e:00
Partition.......................: 5e:01
DASD device number..............: 0120
Type............................: disk partition
Disk layout.....................: ECKD/compatible disk layout
Geometry - heads................: 15
Geometry - sectors..............: 12
Geometry - cylinders............: 3339
Geometry - start................: 24
File system block size..........: 4096
Physical block size.............: 4096
Device size in physical blocks..: 25596
Building bootmap '/boot//bootmap'
Building menu 'rh-automatic-menu'
Adding #1: IPL section 'linux' (default)
kernel image......: /boot/vmlinuz-2.6.9-55.EL at 0x10000
kernel parmline...: 'root=/dev/VolGroup00/LogVol00' at 0x1000
initial ramdisk...: /boot/initrd-2.6.9-55.EL.img at 0x800000
Preparing boot device: 0120.
Preparing boot menu
Interactive prompt......: enabled
Menu timeout............: 15 seconds
Default configuration...: 'linux'
Syncing disks...
Done.
bash-3.00#