Install new LPAR AIX using NPIV and Storage V7000

In summary:

 

1. Create an AIX LPAR.

2. Define the server Virtual Fibre Channel Adapters on the VIOS.

3. Identify the new vfchost on each VIOS.

4. Create the client Virtual Fibre Channel Adapters.

5. Save the new VIOS configuration.

6. Map the vfchosts to the physical adapters.

7. Identify the WWPNs of the new LPAR.

8. Configure NIM for an AIX installation.

9. Boot the LPAR for the first time.

10. Verify that the vfchosts are in LOGGED_IN state.

11. Create the zones on the SAN switches.

12. Reboot the LPAR.

13. Define the disks.

14. The installation begins.

 

In detail:

 

1. On the HMC, create an AIX LPAR as usual. For the LHEA, select the T4 port of the IVE adapter and choose a free port (one not in use by another LPAR).

HMC

> Systems Management

> Servers

> POWER Server

> Configuration

> Create Logical Partition

> AIX or Linux

> Complete the wizard steps

 

Note: It is advisable to establish in advance the resources that will be used to create the LPAR (CPU, memory, adapters, etc.).
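If you prefer the HMC command line, the partition can also be created with mksyscfg. A minimal, hedged sketch (the managed-system name POWER750 and all resource values are placeholders, not taken from this example):

hscroot@hmc:~> mksyscfg -r lpar -m POWER750 -i "name=nimsuma, profile_name=default, \
lpar_env=aixlinux, min_mem=1024, desired_mem=4096, max_mem=8192, \
proc_mode=shared, min_proc_units=0.1, desired_proc_units=0.5, max_proc_units=2.0, \
min_procs=1, desired_procs=2, max_procs=4, sharing_mode=uncap, uncap_weight=128"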

2. On the HMC, go to vionodo1h, then to Dynamic Logical Partitioning, and create a server Virtual Fibre Channel Adapter with an unused adapter ID.

 

HMC

> vionodo1h

> Dynamic Logical Partitioning

> Virtual Adapter

> Actions

> Create Virtual Adapter

> Fibre Channel Adapter

vionodo1h: adapter ID 4 for nimsuma, with client adapter ID 3

vionodo2h: adapter ID 4 for nimsuma, with client adapter ID 4
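The same DLPAR operation can be done from the HMC command line with chhwres. A hedged sketch (POWER750 is a placeholder managed-system name; all other names come from this example):

hscroot@hmc:~> chhwres -r virtualio -m POWER750 -o a -p vionodo1h --rsubtype fc \
-s 4 -a "adapter_type=server,remote_lpar_name=nimsuma,remote_slot_num=3"
hscroot@hmc:~> chhwres -r virtualio -m POWER750 -o a -p vionodo2h --rsubtype fc \
-s 4 -a "adapter_type=server,remote_lpar_name=nimsuma,remote_slot_num=4"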

3. On each VIOS run cfgdev. If you run lsdev before and after, a new vfchost device should appear.

 

vionodo1h:

$ lsdev | grep ^vfc

vfchost0         Available   Virtual FC Server Adapter

vfchost1         Available   Virtual FC Server Adapter

vfchost2         Available   Virtual FC Server Adapter

vfchost3         Available   Virtual FC Server Adapter

vfchost4         Available   Virtual FC Server Adapter

vfchost6         Available   Virtual FC Server Adapter

vfchost7         Available   Virtual FC Server Adapter

vfchost8         Available   Virtual FC Server Adapter

$ cfgdev

$ lsdev | grep ^vfc

vfchost0         Available   Virtual FC Server Adapter

vfchost1         Available   Virtual FC Server Adapter

vfchost2         Available   Virtual FC Server Adapter

vfchost3         Available   Virtual FC Server Adapter

vfchost4         Available   Virtual FC Server Adapter

vfchost5         Available   Virtual FC Server Adapter <--

vfchost6         Available   Virtual FC Server Adapter

vfchost7         Available   Virtual FC Server Adapter

vfchost8         Available   Virtual FC Server Adapter

vionodo2h:

$ lsdev | grep ^vfc

vfchost0         Available   Virtual FC Server Adapter

vfchost1         Available   Virtual FC Server Adapter

vfchost3         Available   Virtual FC Server Adapter

vfchost4         Available   Virtual FC Server Adapter

vfchost6         Available   Virtual FC Server Adapter

vfchost7         Available   Virtual FC Server Adapter

vfchost9         Available   Virtual FC Server Adapter

$ cfgdev

$ lsdev | grep ^vfc

vfchost0         Available   Virtual FC Server Adapter

vfchost1         Available   Virtual FC Server Adapter

vfchost2         Available   Virtual FC Server Adapter <--

vfchost3         Available   Virtual FC Server Adapter

vfchost4         Available   Virtual FC Server Adapter

vfchost6         Available   Virtual FC Server Adapter

vfchost7         Available   Virtual FC Server Adapter

vfchost9         Available   Virtual FC Server Adapter
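To confirm which slot the new vfchost corresponds to, check its location code: the Cn suffix matches the adapter ID defined in step 2. Illustrative output for this example:

$ lsdev -dev vfchost5 -vpd | grep vfchost
  vfchost5         U8233.E8B.065864P-V1-C4  Virtual FC Server Adapter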

4. In the profile of the LPAR created in step 1, create a new client Virtual Fibre Channel adapter for each ID/VIOS pair from step 2. Mark each adapter as required for the LPAR to start.

 

HMC

> nimsuma

> Configuration

> Manage Profiles

> Default

> Virtual Adapters

> Actions

> Create Virtual Adapter

> Fibre Channel Adapter

 

Adapter #3, VIOS vionodo1h, server adapter ID 4.

Adapter #4, VIOS vionodo2h, server adapter ID 4.

> OK

> Close

5. Save the current configuration of each VIO server profile as Default so that it is preserved if a VIOS is rebooted.

HMC

> vionodo1h

> Configuration

> Save Current Configuration

> OK

HMC

> vionodo2h

> Configuration

> Save Current Configuration

> OK
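From the HMC command line, the equivalent is mksyscfg with the save operation. A hedged sketch (POWER750 is a placeholder managed-system name):

hscroot@hmc:~> mksyscfg -r prof -m POWER750 -o save -p vionodo1h -n default --force
hscroot@hmc:~> mksyscfg -r prof -m POWER750 -o save -p vionodo2h -n default --force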

6. On each VIOS, map the new vfchost to a physical port with the vfcmap command, alternating fcs0 and fcs1.

 

vionodo1h:

$ vfcmap -vadapter vfchost5 -fcp fcs1

$ lsmap -npiv -vadapter vfchost5

Name          Physloc                            ClntID ClntName       ClntOS

------------- ---------------------------------- ------ -------------- -------

vfchost5      U8233.E8B.065864P-V1-C4                 3                

Status:NOT_LOGGED_IN

FC name:fcs1                    FC loc code:U78A0.001.DNWK388-P1-C1-T2

Ports logged in:0

Flags:4<NOT_LOGGED>

VFC client name:                VFC client DRC:

vionodo2h:

$ vfcmap -vadapter vfchost2 -fcp fcs0  

$ lsmap -npiv -vadapter vfchost2

Name          Physloc                            ClntID ClntName       ClntOS

------------- ---------------------------------- ------ -------------- -------

vfchost2      U8233.E8B.065864P-V2-C4                 3                

Status:NOT_LOGGED_IN

FC name:fcs0                    FC loc code:U78A0.001.DNWK388-P1-C3-T1

Ports logged in:0

Flags:4<NOT_LOGGED>

VFC client name:                VFC client DRC:
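NOT_LOGGED_IN is expected at this point, since the client LPAR has not booted yet. If a mapping still shows NOT_LOGGED_IN after step 9, verify the NPIV readiness of the physical port and the switch with lsnports; the fabric column must be 1. Illustrative output only (your location codes and counts will differ):

$ lsnports
name     physloc                        fabric tports aports swwpns awwpns
fcs0     U78A0.001.DNWK388-P1-C3-T1          1     64     62   2048   2044
fcs1     U78A0.001.DNWK388-P1-C1-T2          1     64     63   2048   2046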

7. Hand the WWPNs of the new LPAR over to the Storage team:

HMC

> Select the LPAR from step 1

> Configuration

> Manage Profiles

> Select Default

> Virtual Adapters

> Client Fibre Channel adapter for vionodo1h

> Actions

> Properties

> WWPNs: c050760376c40064

 

7.1 Repeat the same procedure for the adapter that connects through vionodo2h.
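Both WWPN pairs can also be read in one pass from the HMC command line. A hedged sketch (POWER750 is a placeholder managed-system name; output abridged and illustrative — each client adapter carries two WWPNs, and the second of each pair, shown here as ...65 and ...67 for illustration, is reserved for Live Partition Mobility):

hscroot@hmc:~> lssyscfg -r prof -m POWER750 --filter lpar_names=nimsuma -F virtual_fc_adapters
"3/client/1/vionodo1h/4/c050760376c40064,c050760376c40065/1","4/client/2/vionodo2h/4/c050760376c40066,c050760376c40067/1"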

 

8. Configure NIM for an AIX installation.

# vi /etc/hosts --> add the name of the LPAR from step 1 and an available IP.

10.1.4.253       nimsuma

 

# smitty nim

> Perform NIM Administration Tasks

> Manage Machines

> Define a Machine

 

# smitty nim

> Perform NIM Administration Tasks

> Define a Resource

> spot

 

# smitty nim_bosinst

> Select the client we previously defined.

> Select Installation Type: “spot - Install a SPOT copy”

> Select the SPOT to use for the installation    

> Set “ACCEPT new license agreements?” to yes

> Set “Initiate reboot and installation now?” to no
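The same setup can be scripted with the nim command instead of smitty. A minimal sketch, assuming a NIM network object named net_10_1_4 and a SPOT resource named spot_61 (both hypothetical names; adjust the attributes to your own resources):

# nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
      -a if1="net_10_1_4 nimsuma 0" nimsuma
# nim -o bos_inst -a source=spot -a spot=spot_61 \
      -a accept_licenses=yes -a boot_client=no nimsuma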

 

9. Boot the LPAR in SMS mode and configure the network with the same IP it has in the NIM server's /etc/hosts, so that it can boot from the NIM server.

A. Select 2 for setup remote IPL.

B. Select 1 for first ethernet.

C. Select 1 for IPV4.

D. Select 1 for bootp.

E. Select 1 for IP parameters.

1.  client: 10.1.4.253

2.  server: 10.1.4.254

3.  Gateway: 10.1.4.11

4.  Subnet: 255.255.255.0

F. Hit ESC.

10. On each VIOS run lsmap -npiv -vadapter vfchostX and verify that the status is LOGGED_IN.
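Illustrative output once the client adapter has logged in to the fabric (location codes, flags, and client details will vary):

$ lsmap -npiv -vadapter vfchost5
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost5      U8233.E8B.065864P-V1-C4                 3 nimsuma        AIX

Status:LOGGED_IN
FC name:fcs1                    FC loc code:U78A0.001.DNWK388-P1-C1-T2
Ports logged in:1
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs0            VFC client DRC:U8233.E8B.065864P-V3-C3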

11. The Storage team must now modify the zoning to integrate the new LPAR.

On the fabric

Create the aliases:

nimsuma_hba1

c0:50:76:03:76:c4:00:66

nimsuma_hba2

c0:50:76:03:76:c4:00:64

 

Create the zones:

v7000_nimsuma_hba1

nimsuma_hba1; stg_v70001; stg_v70002; stg_v70003; stg_v70004

 

v7000_nimsuma_hba2

nimsuma_hba2; stg_v70001; stg_v70002; stg_v70003; stg_v70004
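On a Brocade-based IBM SAN switch, the aliases and zones above could be created as follows (a hedged sketch: the zoning configuration name npiv_cfg is hypothetical, and the stg_v7000x aliases for the V7000 ports are assumed to exist already):

switch:admin> alicreate "nimsuma_hba1", "c0:50:76:03:76:c4:00:66"
switch:admin> alicreate "nimsuma_hba2", "c0:50:76:03:76:c4:00:64"
switch:admin> zonecreate "v7000_nimsuma_hba1", "nimsuma_hba1; stg_v70001; stg_v70002; stg_v70003; stg_v70004"
switch:admin> zonecreate "v7000_nimsuma_hba2", "nimsuma_hba2; stg_v70001; stg_v70002; stg_v70003; stg_v70004"
switch:admin> cfgadd "npiv_cfg", "v7000_nimsuma_hba1; v7000_nimsuma_hba2"
switch:admin> cfgsave
switch:admin> cfgenable npiv_cfg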

12. Reboot the LPAR.

13. At this point the LPAR becomes visible to the storage unit. The Storage technician proceeds to make the necessary definitions on it.

 

On the V7000

Create the host:

nimsuma 

c0:50:76:03:76:c4:00:66

c0:50:76:03:76:c4:00:64

Create the volumes:

nimsuma_rootvg

20 GB - NL_SAS

nimsuma_datos

50 GB - NL_SAS
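These definitions can also be made from the V7000 SSH CLI. A hedged sketch (the storage pool name NLSAS_pool is hypothetical):

> mkhost -name nimsuma -fcwwpn C050760376C40066:C050760376C40064
> mkvdisk -name nimsuma_rootvg -mdiskgrp NLSAS_pool -size 20 -unit gb
> mkvdisk -name nimsuma_datos -mdiskgrp NLSAS_pool -size 50 -unit gb
> mkvdiskhostmap -host nimsuma nimsuma_rootvg
> mkvdiskhostmap -host nimsuma nimsuma_datos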

14. The installation begins.

 

A. Select 1 for boot device.

B. Select 6 for network.

C. Select 1 for bootp.

D. Select 1 for first ethernet.

E. Select 2 for normal mode boot.

F. Select 1 for yes I want to exit tftp should now start up.

G. After around 30,000 packets have been transferred, the installation console prompt should appear:

 

Select 1 for English during install.

 

Notes:

The LPAR created in this example is named nimsuma. Implementing NPIV in a POWER environment requires VIO Servers; in this case our (redundant) VIOS are named vionodo1h and vionodo2h. The storage used was an IBM Storwize V7000, on which two LUNs were created: one for the installation, named nimsuma_rootvg, and one for data, named nimsuma_datos.

 

PowerVM NPIV / IBM Switch Configuration

Environment

Minimum NPIV Requirements

 

Diagnosing the problem

You must meet the following requirements to set up and use NPIV.

1. Hardware

Any POWER6-based system or higher.

Note: IBM intends to support N_Port ID Virtualization (NPIV) on the POWER6 processor-based Power 595, BladeCenter JS12, and BladeCenter JS22 in 2009.

Install a minimum System Firmware level of EL340_039 for the IBM Power 520 and Power 550, and EM340_036 for the IBM Power 560 and IBM Power 570.

Minimum of one 8 Gigabit PCI Express Dual Port Fibre Channel Adapter (Feature Code 5735). Check the latest available firmware for the adapter at http://www.ibm.com/support/us/en: select Power as the support type, then go to Firmware updates.

NPIV-enabled SAN switch. Only the first SAN switch attached to the Fibre Channel adapter in the Virtual I/O Server needs to be NPIV-capable; other switches in your SAN environment do not need to be NPIV-capable.

2. Software

HMC V7.3.4, or later

Virtual I/O Server Version 2.1 with Fix Pack 20.1, or later

AIX 5.3 TL9, or later

AIX 6.1 TL2, or later

SDD 1.7.2.0 + PTF 1.7.2.2

SDDPCM 2.2.0.0 + PTF v2.2.0.6

SDDPCM 2.4.0.0 + PTF v2.4.0.1

Note: At the time of writing, only the 8 Gigabit PCI Express Dual Port Fibre Channel Adapter (Feature Code 5735) was announced.

Note: Check with the storage vendor whether your SAN switch is NPIV-enabled. For information about IBM SAN switches, refer to Implementing an IBM/Brocade SAN with 8 Gbps Directors and Switches, SG24-6116, and search for NPIV. Use the latest available firmware level for your SAN switch.

 

Resolving the problem

Configuring IBM NPIV and Switch for Virtualization

1. On the SAN switch, you must perform two tasks before it can be used for NPIV.

a. Update the firmware to a minimum level of Fabric OS (FOS) 5.3.0. To check the level of Fabric OS on the switch, log on to the switch and run the version command, as shown in Example 2-20:

Example 2-20 version command shows Fabric OS level

itsosan02:admin> version

Kernel: 2.6.14

Fabric OS: v5.3.0

Made on: Thu Jun 14 19:04:02 2007

Flash: Mon Oct 20 12:14:10 2008

BootProm: 4.5.3

Note: You can find the firmware for IBM SAN switches at http://www-03.ibm.com/systems/storage/san/index.html. Click Support and select Storage area network (SAN) in the Product family, then select your SAN product.

b. After a successful firmware update, you must enable the NPIV capability on each port of the SAN switch. Run the portCfgNPIVPort command to enable NPIV on port 16:

itsosan02:admin> portCfgNPIVPort 16, 1

The portcfgshow command lists information for all ports, as shown in Example 2-21.

Example 2-21 List port configuration

itsosan02:admin> portcfgshow

Ports of Slot 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

-----------------+--+--+--+--+----+--+--+--+----+--+--+--+----+--+--+--

Speed AN AN AN AN AN AN AN AN AN AN AN AN AN AN AN AN

Trunk Port ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON

Long Distance .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

VC Link Init .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

Locked L_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

Locked G_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

Disabled E_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

ISL R_RDY Mode .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

RSCN Suppressed .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

Persistent Disable.. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

NPIV capability .. ON ON ON ON ON ON ON ON ON .. .. .. ON ON ON

Ports of Slot 0 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31

-----------------+--+--+--+--+----+--+--+--+----+--+--+--+----+--+--+--

Speed AN AN AN AN AN AN AN AN AN AN AN AN AN AN AN AN

Trunk Port ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON ON

Long Distance .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

VC Link Init .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

Locked L_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

Locked G_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

Disabled E_Port .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

ISL R_RDY Mode .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

RSCN Suppressed .. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

Persistent Disable.. .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

NPIV capability ON .. .. .. .. .. .. .. .. .. .. .. .. .. .. ..

where AN:AutoNegotiate, ..:OFF, ??:INVALID, SN:Software controlled AutoNegotiation.

Note: Refer to your SAN switch user's guide for the command to enable NPIV on your SAN switch.
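To check a single port rather than the full table, portcfgshow also accepts a port number. Abridged, illustrative output:

itsosan02:admin> portcfgshow 16
...
NPIV capability:  ON
...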

2. Follow these steps to create the virtual Fibre Channel server adapter in the Virtual I/O Server partition.

a. On the HMC, select the managed server to be configured: Systems Management → Servers → <servername>.

b. Select the Virtual I/O Server partition on which the virtual Fibre Channel server adapter is to be configured. Then select Tasks → Dynamic Logical Partitioning → Virtual Adapters, as shown in Figure 2-18.

c. To create a virtual Fibre Channel server adapter, select Actions → Create → Fibre Channel Adapter..., as shown in Figure 2-19.

d. Enter the virtual slot number for the virtual Fibre Channel server adapter. Then select the client partition to which the adapter may be assigned, and enter the client adapter ID, as shown in Figure 2-20. Click OK.

e. Click OK.

f. Remember to update the profile of the Virtual I/O Server partition so that the change will be reflected across restarts of the partitions. As an alternative, you may use the Configuration → Save Current Configuration option to save the changes to a new profile.

3. Follow these steps to create the virtual Fibre Channel client adapter in the virtual I/O client partition.

a. Select the virtual I/O client partition on which the virtual Fibre Channel client adapter is to be configured. Then select Tasks → Configuration → Manage Profiles, as shown in Figure 2-22.

b. To create a virtual Fibre Channel client adapter, select the profile and then Actions → Edit. Then expand the Virtual Adapters tab and select Actions → Create → Fibre Channel Adapter, as shown in Figure 2-23.

c. Enter the virtual slot number for the virtual Fibre Channel client adapter. Then select the Virtual I/O Server partition to which the adapter may be assigned and enter the server adapter ID, as shown in Figure 2-24. Click OK.

d. Click OK → OK → Close.

4. Log on to the Virtual I/O Server partition as user padmin.

5. Run the cfgdev command to get the virtual Fibre Channel server adapter(s) configured.

6. The lsdev -dev vfchost* command lists all available virtual Fibre Channel server adapters in the Virtual I/O Server partition before mapping to a physical adapter, as shown in Example 2-22.

Example 2-22 lsdev -dev vfchost* command on the Virtual I/O Server

$ lsdev -dev vfchost*
name             status      description
vfchost0         Available   Virtual FC Server Adapter

7. The lsdev -dev fcs* command lists all available physical Fibre Channel server adapters in the Virtual I/O Server partition, as shown in Example 2-23.

Example 2-23 lsdev -dev fcs* command on the Virtual I/O Server

$ lsdev -dev fcs*
name             status      description
fcs0             Available   4Gb FC PCI Express Adapter (df1000fe)
fcs1             Available   4Gb FC PCI Express Adapter (df1000fe)
fcs2             Available   8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs3             Available   8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)

8. Run the lsnports command to check the NPIV readiness of the Fibre Channel adapter and the SAN switch. Example 2-24 shows that the fabric attribute for the physical Fibre Channel adapter in slot C6 is set to 1. This means the adapter and the SAN switch are NPIV ready. If the value is equal to 0, then the adapter or the SAN switch is not NPIV ready and you should check the SAN switch configuration.

Example 2-24 lsnports command on the Virtual I/O Server

$ lsnports
name   physloc                      fabric tports aports swwpns awwpns
fcs3   U789D.001.DQDYKYW-P1-C6-T2        1     64     63   2048   2046

9. Before mapping the virtual FC adapter to a physical adapter, get the vfchost name of the virtual adapter you created and the fcs name of the FC adapter from the previous lsdev command outputs.

10. To map the virtual adapter vfchost0 to the physical Fibre Channel adapter fcs3, use the vfcmap command as shown in Example 2-25.

Example 2-25 vfcmap command with vfchost0 and fcs3

$ vfcmap -vadapter vfchost0 -fcp fcs3
vfchost0 changed

11. To list the mappings, use the lsmap -npiv -vadapter vfchost0 command, as shown in Example 2-26.

Example 2-26 lsmap -npiv -vadapter vfchost0 command

$ lsmap -npiv -vadapter vfchost0
Name          Physloc                            ClntID ClntName       ClntOS
============= ================================== ====== ============== =======
vfchost0      U9117.MMA.101F170-V1-C31                3

Status:NOT_LOGGED_IN
FC name:                        FC loc code:
Ports logged in:0
Flags:1<NOT_MAPPED,NOT_CONNECTED>
VFC client name:                VFC client DRC:

12. After you have created the virtual Fibre Channel server adapter in the Virtual I/O Server partition and the client adapter in the virtual I/O client partition, you need to do the correct zoning in the SAN switch. Follow these steps:

a. Get the WWPN of the virtual Fibre Channel client adapter created in the virtual I/O client partition:

i. Select the appropriate virtual I/O client partition, then click Task → Properties. Expand the Virtual Adapters tab, select the client Fibre Channel adapter, and then select Actions → Properties to list the properties of the virtual Fibre Channel client adapter, as shown in Figure 2-25.

ii. Figure 2-26 shows the properties of the virtual Fibre Channel client adapter. Here you can get the WWPN that is required for the zoning.

b. Log on to your SAN switch and create a new zoning, or customize an existing one. The zoneshow command, which is available on the IBM 2109-F32 switch, lists the existing zones as shown in Example 2-27.

Example 2-27 The zoneshow command before adding a new WWPN

itsosan02:admin> zoneshow

Defined configuration:

cfg: npiv vios1; vios2

zone: vios1 20:32:00:a0:b8:11:a6:62; c0:50:76:00:0a:fe:00:18

zone: vios2 C0:50:76:00:0A:FE:00:12; 20:43:00:a0:b8:11:a6:62

Effective configuration:

cfg: npiv

zone: vios1 20:32:00:a0:b8:11:a6:62

c0:50:76:00:0a:fe:00:18

zone: vios2 c0:50:76:00:0a:fe:00:12

20:43:00:a0:b8:11:a6:62

To add the WWPN c0:50:76:00:0a:fe:00:14 to the zone named vios1, execute the following command:

itsosan02:admin> zoneadd "vios1", "c0:50:76:00:0a:fe:00:14"

To save and enable the new zoning, execute the cfgsave and cfgenable npiv commands, as shown in Example 2-28.

Example 2-28 The cfgsave and cfgenable commands

itsosan02:admin> cfgsave

You are about to save the Defined zoning configuration. This

action will only save the changes on Defined configuration.

Any changes made on the Effective configuration will not

take effect until it is re-enabled.

Do you want to save Defined zoning configuration only? (yes, y, no, n): [no]

y

Updating flash ...

itsosan02:admin> cfgenable npiv

You are about to enable a new zoning configuration.

This action will replace the old zoning configuration with the

current configuration selected.

Do you want to enable 'npiv' configuration (yes, y, no, n): [no] y

zone config "npiv" is in effect

Updating flash ...

With the zoneshow command you can check whether the added WWPN is active, as shown in Example 2-29.

Example 2-29 The zoneshow command after adding a new WWPN

itsosan02:admin> zoneshow

Defined configuration:

cfg: npiv vios1; vios2

zone: vios1 20:32:00:a0:b8:11:a6:62; c0:50:76:00:0a:fe:00:18;

c0:50:76:00:0a:fe:00:14

zone: vios2 C0:50:76:00:0A:FE:00:12; 20:43:00:a0:b8:11:a6:62

Effective configuration:

cfg: npiv

zone: vios1 20:32:00:a0:b8:11:a6:62

c0:50:76:00:0a:fe:00:18

c0:50:76:00:0a:fe:00:14

zone: vios2 c0:50:76:00:0a:fe:00:12

20:43:00:a0:b8:11:a6:62

c. After you have finished with the zoning, you need to map the LUN device(s) to the WWPN. In our example the LUN named NPIV_AIX61 is mapped to the Host Group named VIOS1_NPIV, as shown in Figure 2-27.

13. Activate your AIX client partition and boot it into SMS.

14. Select the correct boot device within SMS, such as a DVD or a NIM server.

15. Continue to boot the LPAR into the AIX Installation Main menu.

16. Select the disk where you want to install the operating system and continue to install AIX.