RHEL mini HowTos


Listing HBAs

$ lspci |grep -i hba

1c:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)

24:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
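
Note that the grep above depends on the string "HBA" appearing in the device description; matching on the PCI class name is a bit more robust for cards whose description lacks it:

$ lspci | grep -i 'fibre channel'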

or

$ ls  /sys/class/fc_host/ho*

/sys/class/fc_host/host3:

device       issue_lip  port_id    port_state  speed       subsystem          symbolic_name    tgtid_bind_type

fabric_name  node_name  port_name  port_type   statistics  supported_classes  system_hostname  uevent

/sys/class/fc_host/host4:

device       issue_lip  port_id    port_state  speed       subsystem          symbolic_name    tgtid_bind_type

fabric_name  node_name  port_name  port_type   statistics  supported_classes  system_hostname  uevent

or

$  systool -c fc_host

Class = "fc_host"

  Class Device = "host3"

    Device = "host3"

  Class Device = "host4"

    Device = "host4"

Note: If systool is not available:

$ sudo yum search sysfsutil

Loaded plugins: fastestmirror, versionlock

Loading mirror speeds from cached hostfile

 * base: centos.brnet.net.br

 * extras: centos.brnet.net.br

 * updates: centos.brnet.net.br

================================================= N/S Matched: sysfsutil =================================================

sysfsutils.x86_64 : Utilities for interfacing with sysfs

$ sudo yum install sysfsutils.x86_64

We can also ask yum which package provides the binary:

$ yum whatprovides '*/bin/systool'

Loaded plugins: fastestmirror

Loading mirror speeds from cached hostfile

* base: centos.brisanet.com.br

* extras: centos.brisanet.com.br

* updates: centos.brisanet.com.br

sysfsutils-2.1.0-16.el7.x86_64 : Utilities for interfacing with sysfs

Repo : base

Matched from:

Filename : /usr/bin/systool

sysfsutils-2.1.0-16.el7.x86_64 : Utilities for interfacing with sysfs

Repo : @base

Matched from:

Filename : /usr/bin/systool

. WWN of an HBA:

$ cat /sys/class/fc_host/host3/port_name

0x2100001b3280c585

or

$ systool -c fc_host -v host4

Class = "fc_host"

  Class Device = "host4"

  Class Device path = "/sys/class/fc_host/host4"

    fabric_name         = "0x10270085f8d025"

    issue_lip           = <store method only>

    node_name           = "0x2000001b3280d482"

    port_id             = "0x142200"

    port_name           = "0x2100001b3280d482"

    port_state          = "Online"

    port_type           = "NPort (fabric via point-to-point)"

    speed               = "4 Gbit"

    supported_classes   = "Class 3"

    symbolic_name       = "QLE2460 FW:v4.03.02 DVR:v8.02.00-k5-rhel5.2-04"

    system_hostname     = ""

    tgtid_bind_type     = "wwpn (World Wide Port Name)"

    uevent              = <store method only>

    Device = "host4"

    Device path = "/sys/devices/pci0000:00/0000:00:02.0/0000:1a:00.0/0000:1b:01.0/0000:24:00.0/host4"

      fw_dump             =

      nvram               = "ISP "

      optrom_ctl          = <store method only>

      optrom              =

      sfp                 = ""

      uevent              = <store method only>

      vpd                 = "4"
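
To dump the WWPN of every port at once, the fc_host entries can simply be globbed:

$ cat /sys/class/fc_host/host*/port_name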

Multipath driver

Check that it is not already installed:

$ rpm -qa |grep multipath

$

Search the repos:

# yum search multipath

Loaded plugins: fastestmirror, versionlock

Loading mirror speeds from cached hostfile

 * base: centos.brnet.net.br

 * extras: centos.brnet.net.br

 * updates: centos.brnet.net.br

================================================= N/S Matched: multipath =================================================

device-mapper-multipath.x86_64 : Tools to manage multipath devices using device-mapper

device-mapper-multipath-libs.i686 : The device-mapper-multipath modules and shared library

device-mapper-multipath-libs.x86_64 : The device-mapper-multipath modules and shared library

  Name and summary matches only, use "search all" for everything.

Install:

# yum install device-mapper-multipath -y

Verify:

$ rpm -qa |grep multipath

device-mapper-multipath-0.4.9-72.el6_5.3.x86_64

device-mapper-multipath-libs-0.4.9-72.el6_5.3.x86_64
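
On RHEL 6 the daemon still needs to be enabled and started after the install (typical SysV commands; systemd-based releases use systemctl instead):

# chkconfig multipathd on

# service multipathd start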

. Configure the driver for connection to an IBM V7000:

# vi /etc/multipath.conf

devices {

# SVC

        device {

                vendor                  "IBM"

                product                 "2145"

                path_grouping_policy    group_by_prio

                prio_callout            "/sbin/mpath_prio_alua /dev/%n"

        }

}

Note: These lines are supplied by the storage vendor.

. Configure the driver for connection to an IBM DS3400/DS3500 (not tested!):

        device {

                vendor                  "IBM"

                product                 "1746*"

                prio_callout            "/sbin/mpath_prio_rdac /dev/%n"

                path_grouping_policy    group_by_prio

                failback                immediate

                path_checker            rdac

                hardware_handler        "1 rdac"

        }
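
After editing /etc/multipath.conf the running daemon has to re-read it; on RHEL 6 a reload of the service plus a reload of the maps is usually enough:

# service multipathd reload

# multipath -r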

. List external (SAN) disks:

#  ls -l /dev/mapper/

total 0

crw------- 1 root root  10,  63 Jun  3 12:27 control

brw-rw---- 1 root disk 253,   9 Jun  3 15:27 mpath1

brw-rw---- 1 root disk 253,  17 Jun  3 15:27 mpath10

brw-rw---- 1 root disk 253,  18 Jun  3 15:27 mpath11

brw-rw---- 1 root disk 253,  19 Jun  3 15:27 mpath12

brw-rw---- 1 root disk 253,  24 Jun  3 15:27 mpath13

 

or

# multipath -ll

mpath1 (360050768028100502800000000000106) dm-16 IBM,2145

[size=50G][features=1 queue_if_no_path][hwhandler=0][rw]

\_ round-robin 0 [prio=50][active]

 \_ 3:0:4:8  sdbi 67:192  [active][ready]

 \_ 3:0:6:8  sdfg 130:32  [active][ready]

\_ round-robin 0 [prio=10][enabled]

 \_ 3:0:5:8  sddh 70:240  [active][ready]

 \_ 3:0:3:8  sdj  8:144   [active][ready]

....

. List local disks:

$ ls -l /dev/sd*

brw-rw----. 1 root disk 8,  0 Sep  3 14:18 /dev/sda

brw-rw----. 1 root disk 8,  1 Sep  3 14:18 /dev/sda1

brw-rw----. 1 root disk 8,  2 Sep  3 14:18 /dev/sda2

brw-rw----. 1 root disk 8, 16 Sep  3 14:18 /dev/sdb

brw-rw----. 1 root disk 8, 17 Sep  3 14:18 /dev/sdb1

or

$ lsblk

NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sr0                         11:0    1 1024M  0 rom

sda                          8:0    0   30G  0 disk

├─sda1                       8:1    0  500M  0 part /boot

└─sda2                       8:2    0 29.5G  0 part

  ├─vgroot-lvroot (dm-0)   253:0    0    2G  0 lvm  /

  ├─vgroot-lvswap (dm-1)   253:1    0    4G  0 lvm  [SWAP]

  ├─vgroot-lvopt (dm-3)    253:3    0    1G  0 lvm  /opt

  ├─vgroot-lvtmp (dm-4)    253:4    0    2G  0 lvm  /tmp

  ├─vgroot-lvhome (dm-5)   253:5    0  512M  0 lvm  /home

  ├─vgroot-lvusr (dm-6)    253:6    0    4G  0 lvm  /usr

  └─vgroot-lvvar (dm-7)    253:7    0    3G  0 lvm  /var

sdb                          8:16   0   20G  0 disk

└─sdb1                       8:17   0   20G  0 part

  └─vgsiges-lvsoftware (dm-2) 253:2    0   20G  0 lvm  /software

If the lsblk utility is not installed:

$ yum search util-linux

Loaded plugins: rhnplugin, security

This system is receiving updates from RHN Classic or RHN Satellite.

================================================== Matched: util-linux ===================================================

util-linux.x86_64 : A collection of basic system utilities.

Install:

$ yum install util-linux.x86_64

Managing repos

/etc/yum.repos.d/ -->> Directory that holds the repo definitions.

.repo             -->> Extension of the definition files.

Fields: name, baseurl, enabled, gpgcheck, gpgkey (shown in the examples below).

Example:

# vi /etc/yum.repos.d/example.repo

[examplerepo]

name=Example Repository

baseurl=http://mirror.cisp.com/CentOS/6/os/i386/

enabled=1

gpgcheck=1

gpgkey=http://mirror.cisp.com/CentOS/6/os/i386/RPM-GPG-KEY-CentOS-6

DVD example:

# vi /etc/yum.repos.d/centosdvdiso.repo

[centosdvdiso]

name=CentOS DVD ISO

baseurl=file:///mnt

enabled=1

gpgcheck=1

gpgkey=file:///mnt/RPM-GPG-KEY-CentOS-6
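
For the DVD example to work, the ISO (or the physical DVD) must be mounted on /mnt first; the ISO path here is illustrative:

# mount -o loop /path/to/CentOS-6-DVD.iso /mnt

# yum clean all && yum repolist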

Adding PVs to an existing VG

1. Check the FS size before growing:

df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/vgroot-lvroot

                      2.0G  371M  1.6G  20% /

tmpfs                 1.9G     0  1.9G   0% /dev/shm

/dev/sda1             485M   54M  406M  12% /boot

/dev/mapper/vgroot-lvhome

                      504M   39M  441M   8% /home

/dev/mapper/vgroot-lvopt

                     1008M   34M  924M   4% /opt

/dev/mapper/vgroot-lvtmp

                      2.0G   68M  1.9G   4% /tmp

/dev/mapper/vgroot-lvusr

                      4.0G  1.3G  2.6G  33% /usr

/dev/mapper/vgroot-lvvar

                      3.0G 1017M  1.9G  36% /var

/dev/mapper/vgdatos-lvdatos

                       20G  383M   19G   1% /datos

2. Detect the new disk:

2.1 Save the disk list before the rescan:

$ ls -l /sys/class/block/sd* > blockdisk.antes

2.2 Rescan for new disks:

# echo "- - -" > /sys/class/scsi_host/hostX/scan
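
If the host number is not known, every SCSI host can be rescanned in one loop (a quick sketch):

# for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done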

2.3 Compare the difference:

$ ls -l /sys/class/block/sd* > blockdisk.despues

$ diff blockdisk.antes blockdisk.despues

3. Create the partition:

# fdisk /dev/sdc --> Remember that LVM on Linux usually builds PVs on top of a partition.

 n      (new partition)

 p      (primary)

 Enter  (accept the default first sector)

 Enter  (accept the default last sector)

 t      (change the partition type)

 8e     (Linux LVM)

 w      (write the table and exit)
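
If the new partition does not show up right away, the kernel can be asked to re-read the partition table without a reboot (partprobe ships with the parted package):

# partprobe /dev/sdc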

4. Create the PV:

# pvcreate -tv /dev/sdc1

# pvcreate /dev/sdc1

Note: -tv runs it in test mode with verbose output.

5. Extend the VG:

# vgextend vgdatos /dev/sdc1

5.1 Verify:

# lsblk

NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sr0                         11:0    1 1024M  0 rom

sda                          8:0    0   30G  0 disk

├─sda1                       8:1    0  500M  0 part /boot

└─sda2                       8:2    0 29.5G  0 part

  ├─vgroot-lvroot (dm-0)   253:0    0    2G  0 lvm  /

  ├─vgroot-lvswap (dm-1)   253:1    0    4G  0 lvm  [SWAP]

  ├─vgroot-lvopt (dm-3)    253:3    0    1G  0 lvm  /opt

  ├─vgroot-lvtmp (dm-4)    253:4    0    2G  0 lvm  /tmp

  ├─vgroot-lvhome (dm-5)   253:5    0  512M  0 lvm  /home

  ├─vgroot-lvusr (dm-6)    253:6    0    4G  0 lvm  /usr

  └─vgroot-lvvar (dm-7)    253:7    0    3G  0 lvm  /var

sdb                          8:16   0   20G  0 disk

└─sdb1                       8:17   0   20G  0 part

  └─vgdatos-lvdatos (dm-2) 253:2    0   20G  0 lvm  /datos

sdc                          8:32   0   40G  0 disk

└─sdc1                       8:33   0   40G  0 part
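
From the LVM side, vgs should now show the 40G of /dev/sdc1 as free space in vgdatos:

# vgs vgdatos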

6. Extend the logical volume:

# lvextend -rl 100%VG /dev/mapper/vgdatos-lvdatos
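
Here -r makes lvextend grow the filesystem in the same step (it calls fsadm under the hood). On versions where -r is not available, the same result can be had in two steps:

# lvextend -l 100%VG /dev/mapper/vgdatos-lvdatos

# resize2fs /dev/mapper/vgdatos-lvdatos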

7. Verify the new size:

# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/vgroot-lvroot

                      2.0G  371M  1.6G  20% /

tmpfs                 1.9G     0  1.9G   0% /dev/shm

/dev/sda1             485M   54M  406M  12% /boot

/dev/mapper/vgroot-lvhome

                      504M   39M  441M   8% /home

/dev/mapper/vgroot-lvopt

                     1008M   34M  924M   4% /opt

/dev/mapper/vgroot-lvtmp

                      2.0G   68M  1.9G   4% /tmp

/dev/mapper/vgroot-lvusr

                      4.0G  1.3G  2.6G  33% /usr

/dev/mapper/vgroot-lvvar

                      3.0G 1017M  1.9G  36% /var

/dev/mapper/vgdatos-lvdatos

                       60G  383M   56G   1% /datos

HowTo: Creating a VG on RHEL 7.1

List the configured multipath disks:

# multipath -ll

mpathb (3600507680281005028000000000004ed) dm-1 IBM,2145

size=60G features='1 queue_if_no_path' hwhandler='0' wp=rw

|-+- policy='round-robin 0' prio=0 status=active

| |- 1:0:4:0 sdi 8:128 active undef running

| |- 0:0:5:0 sdf 8:80  active undef running

| |- 1:0:5:0 sdj 8:144 active undef running

| `- 0:0:4:0 sde 8:64  active undef running

`-+- policy='round-robin 0' prio=0 status=enabled

  |- 0:0:2:0 sdc 8:32  active undef running

  |- 1:0:2:0 sdg 8:96  active undef running

  |- 0:0:3:0 sdd 8:48  active undef running

  `- 1:0:3:0 sdh 8:112 active undef running

List the existing PVs:

[root@localhost ~]# pvs

  PV                   VG       Fmt  Attr PSize  PFree

  /dev/mapper/mpathap3 VolGroup lvm2 a--  59.50g     0

Create the PV on the new disk:

[root@localhost ~]# pvcreate /dev/mapper/mpathb

  Physical volume "/dev/mapper/mpathb" successfully created

  

List them again:

[root@localhost ~]# pvs

  PV                   VG       Fmt  Attr PSize  PFree

  /dev/mapper/mpathap3 VolGroup lvm2 a--  59.50g     0

  /dev/mapper/mpathb            lvm2 a--  60.00g 60.00g

Create the VG in test mode:

[root@localhost ~]# vgcreate -tv vgdatos /dev/mapper/mpathb

  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.

    Wiping cache of LVM-capable devices

    Wiping cache of LVM-capable devices

    Adding physical volume '/dev/mapper/mpathb' to volume group 'vgdatos'

    Test mode: Skipping archiving of volume group.

    Test mode: Skipping backup of volume group.

  Volume group "vgdatos" successfully created

    Test mode: Wiping internal cache

    Wiping internal VG cache

Note: This "emulates" the creation (-t) to check that everything will work.

Create the VG for real:

[root@localhost ~]# vgcreate -v vgdatos /dev/mapper/mpathb

    Wiping cache of LVM-capable devices

    Wiping cache of LVM-capable devices

    Adding physical volume '/dev/mapper/mpathb' to volume group 'vgdatos'

    Archiving volume group "vgdatos" metadata (seqno 0).

    Creating volume group backup "/etc/lvm/backup/vgdatos" (seqno 1).

  Volume group "vgdatos" successfully created

List the created VG:

[root@localhost ~]# vgdisplay  |grep -A 10 datos

  VG Name               vgdatos

  System ID

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

Create a new Logical Volume in test mode:

[root@localhost ~]# lvcreate -tv -l 100%FREE -n lvdatos vgdatos

 TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.

    Setting logging type to disk

    Finding volume group "vgdatos"

    Test mode: Skipping archiving of volume group.

    Creating logical volume lvdatos

    Test mode: Skipping backup of volume group.

    Test mode: Skipping activation and zeroing.

  Logical volume "lvdatos" created

    Test mode: Wiping internal cache

    Wiping internal VG cache

Create the Logical Volume for real:

[root@localhost ~]# lvcreate -v -l 100%FREE -n lvdatos vgdatos

Verify:

[root@localhost ~]# lvdisplay |grep -A 8 datos

  LV Path                /dev/vgdatos/lvdatos

  LV Name                lvdatos

  VG Name                vgdatos

  LV UUID                ht523y-6XM7-pU2J-fLnm-hEOF-R9QM-TCFU8c

  LV Write Access        read/write

  LV Creation host, time localhost.localdomain, 2015-05-22 12:34:19 -0300

  LV Status              available

  # open                 0

  LV Size                60.00 GiB

  Current LE             15359

  Segments               1

Create an ext3 file system:

[root@localhost ~]# mkfs.ext3 /dev/mapper/vgdatos-lvdatos

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

3932160 inodes, 15727616 blocks

786380 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

480 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

        4096000, 7962624, 11239424

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.
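
To disable those periodic checks, as the message itself suggests (a common choice for large data volumes; follow local policy):

# tune2fs -c 0 -i 0 /dev/mapper/vgdatos-lvdatos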

Create the mount point and set up automatic mounting:

[root@localhost /]# cd /

[root@localhost /]# mkdir /datos

[root@localhost /]# mount /dev/mapper/vgdatos-lvdatos /datos

Get the UUID to add the entry to /etc/fstab:

[root@localhost /]# blkid

 UUID="81c20b10-42e6-4708-8b9e-e983f92f9515"

 

[root@localhost /]# vi /etc/fstab

 UUID=81c20b10-42e6-4708-8b9e-e983f92f9515 /datos  ext3 rw  0 0

[root@localhost /]# umount /datos

[root@localhost /]# mount -a
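
Finally, check that the fstab entry mounts cleanly:

# df -h /datos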