gpfs.base - GPFS File Manager
gpfs.msg.en_US - GPFS Server Messages - U.S. English
gpfs.docs.data - GPFS Server Manpages and Documentation
GPFS commands are installed in a separate directory (/usr/lpp/mmfs/bin), which is not in the default PATH.
export PATH=$PATH:/usr/lpp/mmfs/bin

# ps -ef | grep mmfs
# mmlscluster
# mmlsconfig

Ports 22 (ssh) and 1191 (GPFS) must be open between cluster nodes. SSH can be configured instead of the default rsh via mmchcluster.
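For example, switching the cluster from rsh to ssh could look like this; a hedged sketch assuming the usual OpenSSH binary paths:

# mmchcluster -r /usr/bin/ssh -R /usr/bin/scp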
# tail -f /var/adm/ras/mmfs.log.latest

You can consider using the mmrpldisk command to replace a LUN with another of identical size. See "Replacing disks in a GPFS file system" in the GPFS documentation.
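A hedged sketch of such a replacement, assuming the file system is foobar, the disk being replaced is foobar00nsd, and new.stanza describes the replacement LUN (all names hypothetical):

# mmrpldisk foobar foobar00nsd -F new.stanza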
Dump/Save current configuration
# gpfs.snap
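gpfs.snap gathers cluster-wide diagnostic data into an archive, mainly for IBM support. To save just the configuration of a single file system (restorable later with mmrestoreconfig), mmbackupconfig may also be of interest; a hedged sketch with a hypothetical output path:

# mmbackupconfig foobar -o /tmp/foobar.config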
List active mounts

# mmlsmount all
File system foobar is mounted on 2 nodes.

List storage pools
# mmlsfs all -P

File system attributes for /dev/foobar:
=====================================
flag value            description
---- ---------------- -----------------------------------------------------
 -P  system           Disk storage pools in file system

List disks in each filesystem
# mmlsfs all -d

File system attributes for /dev/foobar:
=====================================
flag value            description
---- ---------------- -----------------------------------------------------
 -d  mycluster00nsd   Disks in file system

List current NSDs (network shared disks)
# mmlsnsd -M

 Disk name       NSD volume ID    Device       Node name             Remarks
---------------------------------------------------------------------------------------
 mycluster00nsd  0AEC13994BFCEEF7 /dev/hdisk7  host1.mydomain.com
 mycluster00nsd  0AEC13994BFCEEF7 /dev/hdisk7  host1.mydomain.com
 mycluster00nsd  0AEC13994BFCEEF7 -            host3.mydomain.com    (not found) directly attached

mmlsnsd: 6027-1370 The following nodes could not be reached:
host3.mydomain.com

List filesystem manager node(s)
# mmlsmgr
file system      manager node
---------------- ------------------
foobar           10.111.11.111 (host2)

Cluster manager node: 10.111.11.111 (host2)

Show the state of GPFS daemons
- on the local node:
# mmgetstate

 Node number  Node name  GPFS state
------------------------------------------
       2      myhost2    active

- on all cluster members:
# mmgetstate -a

 Node number  Node name  GPFS state
------------------------------------------
       1      myhost1    active
       2      myhost2    active
       3      somexx1    unknown

Configure new device
# lspv
# cfgmgr

Verify new disk
# lspv
# lspath -l hdiskX
# errpt | head

Edit desc file
Before GPFS v3.5, the contents of the disk descriptor file look similar to this:
#DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
hdisk7:::dataAndMetadata:-1:foobar00nsd:system

v3.5 utilities will accept the above format, but converting to the new stanza format is highly recommended.
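For reference, the equivalent NSD stanza introduced in GPFS 3.5 would look roughly like this (a sketch based on the descriptor above; a directly attached disk, so no servers= line):

%nsd: device=/dev/hdisk7
  nsd=foobar00nsd
  usage=dataAndMetadata
  failureGroup=-1
  pool=system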
Configure new disk(s) as NSD
# mmcrnsd -F /path/to/hdisk7.desc

Check NSDs
# mmlsnsd; mmlsnsd -F

Add the disk to the FS using the transformed desc file (mmcrnsd comments out the previous line and inserts a new one)
# mmadddisk foobar -F hdisk7.desc

Consider running mmrestripefs afterwards to rebalance data onto the new disk, as sketched below.
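A hedged example of rebalancing existing data across all disks of the example file system foobar:

# mmrestripefs foobar -b

Rebalancing reads and rewrites a large amount of data, so schedule it for a quiet period.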
Task: Migrate data by deleting disks from GPFS
Example: 4x 100 GB DS8100 => 1x 400 GB XIV, 2x 4 Gb adapters, 250 GB of net data = 27 minutes on an idle system
WARNING: high I/O!
# mmdeldisk foobar "gpfsXnsd;gpfsYnsd..." -r
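Before and after the deletion, mmdf shows per-disk capacity and free space, which helps to verify that data has actually been drained off the disks being removed (foobar is the example file system):

# mmdf foobar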
Reconfigure tiebreakers

WARNING: GPFS must be shut down on all nodes!
# mmshutdown -a
# mmchconfig tiebreakerDisks="foobar00nsd"
# mmstartup -a
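Once the daemons are back up, the new setting can be checked in the cluster configuration; a simple grep over the mmlsconfig output:

# mmlsconfig | grep -i tiebreaker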
Remove NSD and physical LUN

# mmdelnsd -p [NSD volume ID]
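mmdelnsd also accepts NSD names directly, which avoids looking up the volume ID; a hedged example using the NSD name from earlier:

# mmdelnsd "foobar00nsd"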
Record the old LUNs for the storage administrator (in case they need to be removed from the zoning):

# pcmpath query essmap ...

Delete AIX devices
# rmdev -dl hdiskX

In the following steps, the filesystem device is the name shown by mmlsfs, for example fs0:
Check for processes using the filesystem
# fuser -cux /mount/point

Unmount filesystem on all cluster nodes
# mmumount fs0 -a
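To confirm that no node still has the filesystem mounted before deleting it, mmlsmount can list the mounting nodes:

# mmlsmount fs0 -L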
Delete filesystem

WARNING: Data is destroyed permanently!
# mmdelfs fs0

(Optional) Remove NSD and disk device - see above