How to install IBM GPFS / Spectrum Scale + HSM Space Management
November 7, 2018
4182 words, approx. 23 minutes to read
[How to guides, tips and tricks / útmutatók, tippek és trükkök]
#ibm #tivoli storage manager #tsm #spectrum protect #sp #spectrum scale #gpfs #install #how to
Install IBM GPFS / Spectrum Scale
(on SUSE Linux 12 SP3)
Linux dependencies
sles12sp3:~ # zypper install tar unzip mc gcc cpp make automake glibc glibc-devel gcc-c++ mksh kernel-default-devel
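GPFS builds its kernel portability layer against the running kernel (the mmbuildgpl step later in this guide), so the installed kernel-default-devel package should match the running kernel exactly. A quick sanity check before continuing:

# the two versions reported below should agree (e.g. 4.4.73-5)
uname -r
rpm -q kernel-default-devel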
Linux host (1 node)
sles12sp3:~ # cat /etc/hosts
10.1.110.94     sles12sp3
sles12sp3:~ # ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: SHA256:ICa3Zd1ckmjjjJbpNIM4Cngd6DX+bFaswjPxA2O8x3s root@sles12sp3 The key's randomart image is: +---[RSA 2048]----+ | . .... | | . + .+o.o | |...*+++O..o | |o =+X=O.= | |.o +.& =S | |. * % | | B o | | . E | | . | +----[SHA256]-----+
sles12sp3:~ # ssh-copy-id sles12sp3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'sles12sp3 (10.1.110.94)' can't be established.
ECDSA key fingerprint is SHA256:pM+gbUEMKfoZSQKAMe7vIzidltsb5TqdJYrYwHB6nKg.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'sles12sp3'"
and check to make sure that only the key(s) you wanted were added.
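The GPFS administration commands execute remote shell commands even on a single-node cluster, so passwordless root ssh to the node's own hostname must work before the cluster is created. A minimal check, assuming the hostname from /etc/hosts above:

# should print the date without asking for a password
ssh sles12sp3 date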
GPFS / Spectrum Scale
sles12sp3:~ # ./Spectrum_Scale_Protocols_Data_Management-5.0.1.0-x86_64-Linux-install Extracting License Acceptance Process Tool to /usr/lpp/mmfs/5.0.1.0 ... tail -n +618 ./Spectrum_Scale_Protocols_Data_Management-5.0.1.0-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.0.1.0 -xvz --exclude=installer --exclude=*_rpms --exclude=*rpm --exclude=*tgz --exclude=*deb 1> /dev/null Installing JRE ... If directory /usr/lpp/mmfs/5.0.1.0 has been created or was previously created during another extraction, .rpm, .deb, and repository related files in it (if there were) will be removed to avoid conflicts with the ones being extracted. tail -n +618 ./Spectrum_Scale_Protocols_Data_Management-5.0.1.0-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.0.1.0 --wildcards -xvz ibm-java*tgz 1> /dev/null tar -C /usr/lpp/mmfs/5.0.1.0/ -xzf /usr/lpp/mmfs/5.0.1.0/ibm-java*tgz Defaulting to --text-only mode. Invoking License Acceptance Process Tool ... /usr/lpp/mmfs/5.0.1.0/ibm-java-x86_64-71/jre/bin/java -cp /usr/lpp/mmfs/5.0.1.0/LAP_HOME/LAPApp.jar com.ibm.lex.lapapp.LAP -l /usr/lpp/mmfs/5.0.1.0/LA_HOME -m /usr/lpp/mmfs/5.0.1.0 -s /usr/lpp/mmfs/5.0.1.0 -text_only LICENSE INFORMATION The Programs listed below are licensed under the following License Information terms and conditions in addition to the Program license terms previously agreed to by Client and IBM. If Client does not have previously agreed to license terms in effect for the Program, the International Program License Agreement (Z125-3301-14) applies. Program Name (Program Number): IBM Spectrum Scale Data Management Edition V5.0.1 (5737-F34) IBM Spectrum Scale Data Management Edition V5.0.1 (5641-DM1) IBM Spectrum Scale Data Management Edition V5.0.1 (5641-DM3) IBM Spectrum Scale Data Management Edition V5.0.1 (5641-DM5) Press Enter to continue viewing the license agreement, or enter "1" to accept the agreement, "2" to decline it, "3" to print it, "4" to read non-IBM terms, or "99" to go back to the previous screen. 1 License Agreement Terms accepted. Extracting Product RPMs to /usr/lpp/mmfs/5.0.1.0 ... tail -n +618 ./Spectrum_Scale_Protocols_Data_Management-5.0.1.0-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.0.1.0 --wildcards -xvz installer gui ganesha_debs/ubuntu16 ganesha_rpms/rhel7 ganesha_rpms/sles12 gpfs_debs/ubuntu16 gpfs_rpms/rhel7 gpfs_rpms/sles12 object_debs/ubuntu16 object_rpms/rhel7 smb_debs/ubuntu16 smb_rpms/rhel7 smb_rpms/sles12 tools/repo zimon_debs/ubuntu16 zimon_rpms/rhel7 zimon_rpms/sles12 gpfs_debs gpfs_rpms manifest 1> /dev/null - installer - gui - ganesha_debs/ubuntu16 - ganesha_rpms/rhel7 - ganesha_rpms/sles12 - gpfs_debs/ubuntu16 - gpfs_rpms/rhel7 - gpfs_rpms/sles12 - object_debs/ubuntu16 - object_rpms/rhel7 - smb_debs/ubuntu16 - smb_rpms/rhel7 - smb_rpms/sles12 - tools/repo - zimon_debs/ubuntu16 - zimon_rpms/rhel7 - zimon_rpms/sles12 - gpfs_debs - gpfs_rpms - manifest Removing License Acceptance Process Tool from /usr/lpp/mmfs/5.0.1.0 ... rm -rf /usr/lpp/mmfs/5.0.1.0/LAP_HOME /usr/lpp/mmfs/5.0.1.0/LA_HOME Removing JRE from /usr/lpp/mmfs/5.0.1.0 ... 
rm -rf /usr/lpp/mmfs/5.0.1.0/ibm-java*tgz ================================================================== Product rpms successfully extracted to /usr/lpp/mmfs/5.0.1.0 Cluster installation and protocol deployment To install a cluster or deploy protocols with the Spectrum Scale Install Toolkit: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale -h To install a cluster manually: Use the gpfs rpms located within /usr/lpp/mmfs/5.0.1.0/gpfs_rpms To upgrade an existing cluster using the Spectrum Scale Install Toolkit: 1) Copy your old clusterdefinition.txt file to the new /usr/lpp/mmfs/5.0.1.0/installer/configuration/ location 2) Review and update the config: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale config update 3) (Optional) Update the toolkit to reflect the current cluster config: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale config populate -N node> 4) Run the upgrade: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale upgrade -h To add nodes to an existing cluster using the Spectrum Scale Install Toolkit: 1) Add nodes to the clusterdefinition.txt file: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale node add -h 2) Install GPFS on the new nodes: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale install -h 3) Deploy protocols on the new nodes: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale deploy -h To add NSDs or file systems to an existing cluster using the Spectrum Scale Install Toolkit: 1) Add nsds and/or filesystems with: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale nsd add -h 2) Install the NSDs: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale install -h 3) Deploy the new file system: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale deploy -h To update the toolkit to reflect the current cluster config examples: /usr/lpp/mmfs/5.0.1.0/installer/spectrumscale config populate -N node> 1) Manual updates outside of the install toolkit 2) Sync the current cluster state to the install toolkit prior to upgrade 3) Switching from a manually managed cluster to the install toolkit ================================================================================== To get up and running quickly, visit our wiki for an IBM Spectrum Scale Protocols Quick Overview: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Protocols%20Quick%20Overview%20for%20IBM%20Spectrum%20Scale ===================================================================================
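The manual installation below uses the base packages extracted into the gpfs_rpms directory; listing it first shows what is available (the exact package set depends on the edition and version):

ls /usr/lpp/mmfs/5.0.1.0/gpfs_rpms/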
Manual GPFS / Spectrum Scale configuration
sles12sp3:~ # cd /usr/lpp/mmfs/5.0.1.0/gpfs_rpms/
sles12sp3:/usr/lpp/mmfs/5.0.1.0/gpfs_rpms # rpm -ivh gpfs.base-5.0.1-0.x86_64.rpm gpfs.docs-5.0.1-0.noarch.rpm gpfs.gskit-8.0.50-86.x86_64.rpm gpfs.ext-5.0.1-0.x86_64.rpm gpfs.gpl-5.0.1-0.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:gpfs.base-5.0.1-0                ################################# [ 25%]
Created symlink from /etc/systemd/system/multi-user.target.wants/mmautoload.service to /usr/lib/systemd/system/mmautoload.service.
   2:gpfs.ext-5.0.1-0                 ################################# [ 50%]
   3:gpfs.gskit-8.0.50-86             ################################# [ 75%]
   4:gpfs.docs-5.0.1-0                ################################# [100%]
   5:gpfs.gpl-5.0.1-0                 ################################# [100%]
sles12sp3:~ # vim /root/.bash_profile
PATH=$PATH:$HOME/bin:/usr/lpp/mmfs/bin
export LC_ALL=C
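The profile change only affects new login shells; to pick it up in the current session and confirm that the GPFS commands resolve, something along these lines works:

source /root/.bash_profile
# should resolve to /usr/lpp/mmfs/bin/mmbuildgpl
which mmbuildgpl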
sles12sp3:~ # mmbuildgpl -------------------------------------------------------- mmbuildgpl: Building GPL module begins at Tue Nov 6 11:47:24 CET 2018. -------------------------------------------------------- Verifying Kernel Header... kernel version = 40473005 (40473005000000, 4.4.73-5-default, 4.4.73-5) module include dir = /lib/modules/4.4.73-5-default/build/include module build dir = /lib/modules/4.4.73-5-default/build kernel source dir = /usr/src/linux-4.4.73-5/include Found valid kernel header file under /lib/modules/4.4.73-5-default/build/include Verifying Compiler... make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... make InstallImages ... -------------------------------------------------------- mmbuildgpl: Building GPL module completed successfully at Tue Nov 6 11:47:42 CET 2018. -------------------------------------------------------- sles12sp3:~ # mmcrcluster -N sles12sp3:quorum -p sles12sp3 -r /usr/bin/ssh -R /usr/bin/scp -C gpfstest mmcrcluster: Performing preliminary node verification ... mmcrcluster: Processing quorum and other critical nodes ... mmcrcluster: Finalizing the cluster data structures ... mmcrcluster: Command successfully completed mmcrcluster: Warning: Not all nodes have proper GPFS license designations. Use the mmchlicense command to designate licenses as needed. mmcrcluster: mmsdrfs propagation completed. sles12sp3:~ # mmchlicense server --accept -N sles12sp3 The following nodes will be designated as possessing server licenses: sles12sp3 mmchlicense: Command successfully completed mmchlicense: mmsdrfs propagation completed. sles12sp3:~ # mmstartup -a Tue Nov 6 13:18:33 CET 2018: mmstartup: Starting GPFS ... sles12sp3:~ # mmgetstate -L Node number Node name Quorum Nodes up Total nodes GPFS state Remarks ------------------------------------------------------------------------------------- 1 sles12sp3 1 0 1 arbitrating quorum node . . . sles12sp3:~ # mmgetstate -L Node number Node name Quorum Nodes up Total nodes GPFS state Remarks ------------------------------------------------------------------------------------- 1 sles12sp3 1 1 1 active quorum node sles12sp3:~ # lsblk sles12sp3:~ # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 16G 0 disk |-sda1 8:1 0 2G 0 part [SWAP] `-sda2 8:2 0 14G 0 part /var/tmp sdb 8:16 0 16G 0 disk sr0 11:0 1 3.6G 0 rom sles12sp3:~ # cat /root/disk.stanza
%nsd:
device=/dev/sdb
nsd=nsd1
servers=sles12sp3
usage=dataAndMetadata
failureGroup=1
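One NSD is enough for this single-node test. On a larger cluster there is one %nsd stanza per disk, and metadata and data are commonly placed on separate disks and failure groups. A hedged sketch only, with placeholder device names (sdc/sdd):

%nsd:
device=/dev/sdc
nsd=nsd2
servers=sles12sp3
usage=metadataOnly
failureGroup=1
%nsd:
device=/dev/sdd
nsd=nsd3
servers=sles12sp3
usage=dataOnly
failureGroup=2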
sles12sp3:~ # mmcrnsd -F /root/disk.stanza mmcrnsd: Processing disk sdb mmcrnsd: mmsdrfs propagation completed. sles12sp3:~ # mmlsnsd -X Disk name NSD volume ID Device Devtype Node name Remarks --------------------------------------------------------------------------------------------------- nsd1 0A016E5E5BE18819 /dev/sdb generic sles12sp3 server node sles12sp3:~ # mkdir -p /gpfs sles12sp3:~ # mmcrfs gpfsfs -F /root/disk.stanza -T /gpfs -v yes -A yes The following disks of gpfsfs will be formatted on node sles12sp3: nsd1: size 16384 MB Formatting file system ... Disks up to size 189 GB can be added to storage pool system. Creating Inode File 83 % complete on Tue Nov 6 13:26:15 2018 100 % complete on Tue Nov 6 13:26:16 2018 Creating Allocation Maps Creating Log Files 3 % complete on Tue Nov 6 13:26:23 2018 28 % complete on Tue Nov 6 13:26:28 2018 56 % complete on Tue Nov 6 13:26:33 2018 93 % complete on Tue Nov 6 13:26:38 2018 100 % complete on Tue Nov 6 13:26:38 2018 Clearing Inode Allocation Map Clearing Block Allocation Map Formatting Allocation Map for storage pool system Completed creation of file system /dev/gpfsfs. mmcrfs: mmsdrfs propagation completed. sles12sp3:~ # mmmount all -a Tue Nov 6 13:27:04 CET 2018: mmmount: Mounting file systems ... sles12sp3:~ # mmlsfs all File system attributes for /dev/gpfsfs: ======================================= flag value description ------------------- ------------------------ ----------------------------------- -f 8192 Minimum fragment (subblock) size in bytes -i 4096 Inode size in bytes -I 32768 Indirect block size in bytes -m 1 Default number of metadata replicas -M 2 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j cluster Block allocation type -D nfs4 File locking semantics in effect -k all ACL semantics in effect -n 32 Estimated number of nodes that will mount file system -B 4194304 Block size -Q none Quotas accounting enabled none Quotas enforced none Default quotas enabled --perfileset-quota no Per-fileset quota enforcement --filesetdf no Fileset df enabled? -V 19.01 (5.0.1.0) File system version --create-time Tue Nov 6 13:26:12 2018 File system creation time -z no Is DMAPI enabled? -L 33554432 Logfile size -E yes Exact mtime mount option -S relatime Suppress atime mount option -K whenpossible Strict replica allocation option --fastea yes Fast external attributes enabled? --encryption no Encryption enabled? --inode-limit 66560 Maximum number of inodes --log-replicas 0 Number of log replicas --is4KAligned yes is4KAligned? --rapid-repair yes rapidRepair enabled? --write-cache-threshold 0 HAWC Threshold (max 65536) --subblocks-per-full-block 512 Number of subblocks per full block -P system Disk storage pools in file system --file-audit-log no File Audit Logging enabled? --maintenance-mode no Maintenance Mode enabled? 
-d nsd1 Disks in file system -A yes Automatic mount option -o none Additional mount options -T /gpfs Default mount point --mount-priority 0 Mount priority sles12sp3:~ # mmlsmount all -L File system gpfsfs is mounted on 1 nodes: 10.1.110.94 sles12sp3 sles12sp3:~ # mmdf gpfsfs disk disk size failure holds holds free in KB free in KB name in KB group metadata data in full blocks in fragments --------------- ------------- -------- -------- ----- -------------------- ------------------- Disks in storage pool: system (Maximum disk size allowed is 189 GB) nsd1 16777216 1 yes yes 15364096 ( 92%) 11896 ( 0%) ------------- -------------------- ------------------- (pool total) 16777216 15364096 ( 92%) 11896 ( 0%) ============= ==================== =================== (total) 16777216 15364096 ( 92%) 11896 ( 0%) Inode Information ----------------- Number of used inodes: 4038 Number of free inodes: 62522 Number of allocated inodes: 66560 Maximum number of inodes: 66560
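Before moving on to HSM it is worth a quick smoke test that the new file system is mounted and writable, for example:

df -h /gpfs
touch /gpfs/smoketest && rm /gpfs/smoketest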
Install IBM Tivoli Storage Manager / Spectrum Protect
IBM Tivoli Storage Manager / Spectrum Protect server side
Protect: EGTSMSRV> Protect: EGTSMSRV>def dom gpfstest ANR1500I Policy domain GPFSTEST defined. Protect: EGTSMSRV>def pol gpfstest standard ANR1510I Policy set STANDARD defined in policy domain GPFSTEST. Protect: EGTSMSRV>def mgmtc gpfstest standard gpfstest spacemgtechnique=automatic migdestination=EGGPFSSTGP MIGREQUIRESBkup=no Protect: EGTSMSRV>assign defmgmtc gpfstest standard gpfstest Protect: EGTSMSRV>act pol gpfstest standard ANR1553W DEFAULT Management class GPFSTEST in policy set GPFSTEST STANDARD does not have a BACKUP copygroup: files will not be backed up by default if this set is activated. ANR1554W DEFAULT Management class GPFSTEST in policy set GPFSTEST STANDARD does not have an ARCHIVE copygroup: files will not be archived by default if this set is activated. Do you wish to proceed? (Yes (Y)/No (N)) y ANR1553W DEFAULT Management class GPFSTEST in policy set GPFSTEST STANDARD does not have a BACKUP copygroup: files will not be backed up by default if this set is activated. ANR1554W DEFAULT Management class GPFSTEST in policy set GPFSTEST STANDARD does not have an ARCHIVE copygroup: files will not be archived by default if this set is activated. ANR1514I Policy set STANDARD activated in policy domain GPFSTEST. Protect: EGTSMSRV>q mgmtc gpfstest f=d Policy Domain Name: GPFSTEST Policy Set Name: ACTIVE Mgmt Class Name: GPFSTEST Default Mgmt Class ?: Yes Description: Space Management Technique: Automatic Auto-Migrate on Non-Use: 0 Migration Requires Backup?: No Migration Destination: EGGPFSSTGP Last Update by (administrator): SUPPORT Last Update Date/Time: 11/06/18 13:53:38 Managing profile: Changes Pending: No Policy Domain Name: GPFSTEST Policy Set Name: STANDARD Mgmt Class Name: GPFSTEST Default Mgmt Class ?: Yes Description: Space Management Technique: Automatic Auto-Migrate on Non-Use: 0 Migration Requires Backup?: No Migration Destination: EGGPFSSTGP Last Update by (administrator): SUPPORT Last Update Date/Time: 11/06/18 13:53:38 Managing profile: Changes Pending: No
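The HSM client below authenticates as node GPFSTEST, so the node must also be registered on the server in the new domain (the EGGPFSSTGP storage pool is assumed to exist already). A hedged sketch of the registration, issued in the same dsmadmc administrative console, with a placeholder password:

register node gpfstest <password> domain=gpfstest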
IBM Tivoli Storage Manager / Spectrum Protect client side
sles12sp3:~ # export HSMINSTALLMODE=SCOUTFREE sles12sp3:~ # tar -xvf 8.1.6.0-TIV-TSMHSM-LinuxX86GPFS.tar README.htm README_api.htm README_hsm.htm TIVsm-API64.x86_64.rpm TIVsm-APIcit.x86_64.rpm TIVsm-BA.x86_64.rpm TIVsm-BAcit.x86_64.rpm TIVsm-HSM.x86_64.rpm gskcrypt64-8.0.50.86.linux.x86_64.rpm gskssl64-8.0.50.86.linux.x86_64.rpm sles12sp3:~ # rpm -ivh gskcrypt64-8.0.50.86.linux.x86_64.rpm gskssl64-8.0.50.86.linux.x86_64.rpm TIVsm-API64.x86_64.rpm TIVsm-BA.x86_64.rpm TIVsm-HSM.x86_64.rpm Preparing... ################################# [100%] Updating / installing... 1:gskcrypt64-8.0-50.86 ################################# [ 20%] 2:gskssl64-8.0-50.86 ################################# [ 40%] 3:TIVsm-API64-8.1.6-0 ################################# [ 60%] 4:TIVsm-BA-8.1.6-0 ################################# [ 80%] 5:TIVsm-HSM-8.1.6-0 ################################# [100%] cp: cannot stat '/etc/inittab': No such file or directory cp: cannot stat '/etc/inittab': No such file or directory grep: /etc/inittab.hsm: No such file or directory Creating systemd HSM service Created symlink from /etc/systemd/system/multi-user.target.wants/hsm.service to /usr/lib/systemd/system/hsm.service. INSTALLMODE = SCOUTFREE sles12sp3:~ # systemctl status hsm * hsm.service - HSM Service Loaded: loaded (/usr/lib/systemd/system/hsm.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2018-11-06 13:35:14 CET; 3min 45s ago Main PID: 11378 (dsmwatchd) CGroup: /system.slice/hsm.service `-11378 /opt/tivoli/tsm/client/hsm/bin/dsmwatchd nodetach Nov 06 13:35:14 sles12sp3 systemd[1]: Started HSM Service. Nov 06 13:35:22 sles12sp3 dsmwatchd[11378]: HSM(pid:11378): start sles12sp3:~ # vi /opt/tivoli/tsm/client/ba/bin/dsm.opt sles12sp3:~ # cat /opt/tivoli/tsm/client/ba/bin/dsm.opt ************************************************************************ * * * Sample Client User Options file for UNIX (dsm.opt.smp) * ************************************************************************ * This file contains an option you can use to specify the * server to contact if more than one is defined in your client * system options file (dsm.sys). Copy dsm.opt.smp to dsm.opt. * If you enter a server name for the option below, remove the * leading asterisk (*). ************************************************************************ * SErvername A server name defined in the dsm.sys file SErvername hsm HSMGROUPEDMIGRATE YES HSMEXTOBJIDATTR YES LANGUAGE ENU HSMDISABLEAUTOMIGDAEMONS YES HSMENABLEIMMediatemigrate YES sles12sp3:~ # vi /opt/tivoli/tsm/client/ba/bin/dsm.sys sles12sp3:~ # cat /opt/tivoli/tsm/client/ba/bin/dsm.sys ************************************************************************ * * * Sample Client System Options file for UNIX (dsm.sys.smp) * ************************************************************************ * This file contains the minimum options required to get started * using the Backup-Archive Client. Copy dsm.sys.smp to dsm.sys. * In the dsm.sys file, enter the appropriate values for each option * listed below and remove the leading asterisk (*) for each one. * If your client node communicates with multiple servers, be * sure to add a stanza, beginning with the SERVERNAME option, for * each additional server. 
************************************************************************ migrateserver hsm reconcileinterval 0 migfileexpiration 1 SErvername tsm COMMMethod TCPip passwordaccess generate errorlogname /opt/tivoli/tsm/client/ba/bin/dsmerror.log TCPServeraddress egtsmsrv.eg.local TCPPort 1500 * SSL Yes SErvername hsm COMMMethod TCPip TCPPort 1500 TCPServeraddress egtsmsrv.eg.local passwordaccess generate nodename gpfstest errorlogname /opt/tivoli/tsm/client/ba/bin/dsmerror_hsm.log sles12sp3:~ # ln -s /opt/tivoli/tsm/client/ba/bin/dsm.sys /opt/tivoli/tsm/client/api/bin64/dsm.sys sles12sp3:~ # ln -s /opt/tivoli/tsm/client/ba/bin/dsm.opt /opt/tivoli/tsm/client/api/bin64/dsm.opt sles12sp3:~ # dsmc IBM Spectrum Protect Command Line Backup-Archive Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 13:59:28 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved. Node Name: GPFSTEST Please enter your user id GPFSTEST>: Please enter password for user id "GPFSTEST": Session established with server EGTSMSRV: Linux/x86_64 Server Version 8, Release 1, Level 3.000 Server date/time: 11/06/18 13:59:33 Last access: 11/06/18 13:59:13 Protect> quit sles12sp3:~ # mmumount gpfsfs Tue Nov 6 14:01:32 CET 2018: mmumount: Unmounting file systems ... sles12sp3:~ # mmchfs gpfsfs -z yes mmchfs: mmsdrfs propagation completed. sles12sp3:~ # mmmount all -a Tue Nov 6 14:02:00 CET 2018: mmmount: Mounting file systems ... sles12sp3:~ # mmlsfs all File system attributes for /dev/gpfsfs: ======================================= flag value description ------------------- ------------------------ ----------------------------------- -f 8192 Minimum fragment (subblock) size in bytes -i 4096 Inode size in bytes -I 32768 Indirect block size in bytes -m 1 Default number of metadata replicas -M 2 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j cluster Block allocation type -D nfs4 File locking semantics in effect -k all ACL semantics in effect -n 32 Estimated number of nodes that will mount file system -B 4194304 Block size -Q none Quotas accounting enabled none Quotas enforced none Default quotas enabled --perfileset-quota no Per-fileset quota enforcement --filesetdf no Fileset df enabled? -V 19.01 (5.0.1.0) File system version --create-time Tue Nov 6 13:26:12 2018 File system creation time -z yes Is DMAPI enabled? -L 33554432 Logfile size -E yes Exact mtime mount option -S relatime Suppress atime mount option -K whenpossible Strict replica allocation option --fastea yes Fast external attributes enabled? --encryption no Encryption enabled? --inode-limit 66560 Maximum number of inodes --log-replicas 0 Number of log replicas --is4KAligned yes is4KAligned? --rapid-repair yes rapidRepair enabled? --write-cache-threshold 0 HAWC Threshold (max 65536) --subblocks-per-full-block 512 Number of subblocks per full block -P system Disk storage pools in file system --file-audit-log no File Audit Logging enabled? --maintenance-mode no Maintenance Mode enabled? -d nsd1 Disks in file system -A yes Automatic mount option -o none Additional mount options -T /gpfs Default mount point --mount-priority 0 Mount priority sles12sp3:~ # dsmmigfs add /gpfs/ IBM Spectrum Protect Command Line Space Management Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 14:02:57 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved. Adding HSM support for /gpfs ... 
ANS9087I Space management is successfully added to file system /gpfs. sles12sp3:~ # dsmmigfs query -detail IBM Spectrum Protect Command Line Space Management Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 14:03:13 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved. The local node has Node ID: 1 The failover environment is deactivated on the local node. The recall distribution is enabled. The monitoring of local space management daemons is active. File System Name: /gpfs High Threshold: 90 Low Threshold: 80 Premig Percentage: 10 Quota: 16384 Stub Size: 0 Read Starts Recall: no Preview Size: 0 Server Name: HSM Max Candidates: 100 Max Files: 0 Read Event Timeout: 600 Stream Seq: 0 Min Partial Rec Size: 0 Min Stream File Size: 0 Min Mig File Size: 0 Inline Copy Mode: MIG Preferred Node: sles12sp3 Node ID: 1 Owner Node: sles12sp3 Node ID: 1 sles12sp3:~ # dsmmigfs start IBM Spectrum Protect Command Line Space Management Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 14:03:20 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved.
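The dsmmigfs query output above shows the default thresholds (high 90, low 80). They can be tuned per file system with dsmmigfs update; a hedged example with illustrative values (verify the exact option names against the Space Management client documentation):

dsmmigfs update -hthreshold=80 -lthreshold=60 /gpfs
dsmmigfs query -detail /gpfs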
Manual HSM migration and recall test
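The test below assumes a roughly 100 KB file already exists under /gpfs/teszt; it could have been created like this (path and size are illustrative):

mkdir -p /gpfs/teszt
dd if=/dev/zero of=/gpfs/teszt/testfile bs=1K count=100

In the dsmls listings that follow, FSt is the HSM file state: r = resident, p = premigrated (data kept both locally and on the server), m = migrated (only a stub remains on disk).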
sles12sp3:~ # dsmls /gpfs/teszt/ IBM Spectrum Protect Command Line Space Management Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 14:16:38 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved. ActS ResS ResB FSt FName dir> 4096 1 - teszt/ /gpfs/teszt: 102400 102400 104 p testfile sles12sp3:~ # dsmmigrate /gpfs/teszt/testfile IBM Spectrum Protect Command Line Space Management Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 14:16:50 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved. sles12sp3:~ # dsmls /gpfs/teszt/ IBM Spectrum Protect Command Line Space Management Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 14:16:55 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved. ActS ResS ResB FSt FName dir> 4096 1 - teszt/ /gpfs/teszt: 102400 0 0 m testfile sles12sp3:~ # dsmrecall /gpfs/teszt/testfile IBM Spectrum Protect Command Line Space Management Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 14:17:00 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved. sles12sp3:~ # dsmls /gpfs/teszt/ IBM Spectrum Protect Command Line Space Management Client Interface Client Version 8, Release 1, Level 6.0 Client date/time: 11/06/18 14:17:03 (c) Copyright by IBM Corporation and other(s) 1990, 2018. All Rights Reserved. ActS ResS ResB FSt FName dir> 4096 1 - teszt/ /gpfs/teszt: 102400 102400 4096 p testfile
Automatic migration with GPFS policy rules
sles12sp3:~ # vim /root/gpfs.rules
sles12sp3:~ # cat /root/gpfs.rules
RULE EXTERNAL POOL 'hsm' EXEC '/root/bin/hsmScript' OPTS '-server hsm'
RULE 'Exclude HSM System Files' EXCLUDE WHERE PATH_NAME LIKE '%/.%'
RULE 'Exclude GPFS System Files' EXCLUDE WHERE PATH_NAME LIKE '%/automountdir%'
RULE 'Mig2HSM' MIGRATE FROM POOL 'system' THRESHOLD(40,20) WEIGHT(KB_ALLOCATED) TO POOL 'hsm'
sles12sp3:~ # vim /root/bin/hsmScript
sles12sp3:~ # chmod +x /root/bin/hsmScript
sles12sp3:~ # cat /root/bin/hsmScript
#!/bin/bash
# Callout script invoked by the GPFS policy engine for the EXTERNAL POOL 'hsm'.
# $1 = operation (TEST, LIST, MIGRATE, ...), $2 = file list passed by mmapplypolicy, $3 = OPTS string.
echo start > /tmp/hsmScript.log
echo "[$1 $2 $3]" >> /tmp/hsmScript.log
case "$1" in
    LIST)
        ls "$2" ;;
    MIGRATE)
        # keep a copy of the policy file list for debugging
        cp "$2" /tmp/tmpFile
        cat /tmp/tmpFile >> /tmp/hsmScript.log
        # strip the policy metadata columns so only the path names remain,
        # then hand the list to the HSM client for migration
        cut -d" " -f7 /tmp/tmpFile > "$2"
        echo "$2" >> /tmp/hsmScript.log
        dsmmigrate -Detail -filelist="$2" >> /tmp/hsmScript.log ;;
    TEST)
        echo "==== HSMtest $2 $3" >> /tmp/hsmScript.log ;;
    *) ;;
esac
echo end >> /tmp/hsmScript.log
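mmapplypolicy invokes this script on its own, but the TEST branch can also be exercised by hand to confirm that the logging works before installing the policy; a minimal sketch:

/root/bin/hsmScript TEST /gpfs
cat /tmp/hsmScript.log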
sles12sp3:~ # mmapplypolicy /gpfs -P /root/gpfs.rules [I] GPFS Current Data Pool Utilization in KB and % Pool_Name KB_Occupied KB_Total Percent_Occupied system 1740800 16777216 10.375976562% [I] 4071 of 66560 inodes used: 6.116286%. [I] Loaded policy rules from /root/gpfs.rules. Evaluating policy rules with CURRENT_TIMESTAMP = 2018-11-06@13:22:43 UTC Parsed 4 policy rules. RULE EXTERNAL POOL 'hsm' EXEC '/root/bin/hsmScript' OPTS '-server hsm' RULE 'Exclude HSM System Files' EXCLUDE WHERE PATH_NAME LIKE '%/.%' RULE 'Exclude GPFS System Files' EXCLUDE WHERE PATH_NAME LIKE '%/automountdir%' RULE 'Mig2HSM' MIGRATE FROM POOL 'system' THRESHOLD(40,20) WEIGHT(KB_ALLOCATED) TO POOL 'hsm' [I] 2018-11-06@13:22:43.670 Directory entries scanned: 37. [I] Directories scan: 26 files, 11 directories, 0 other objects, 0 'skipped' files and/or errors. [I] 2018-11-06@13:22:43.673 Sorting 37 file list records. [I] Inodes scan: 21 files, 8 directories, 8 other objects, 8 'skipped' files and/or errors. [I] 2018-11-06@13:22:44.551 Policy evaluation. 37 files scanned. [I] 2018-11-06@13:22:44.554 Sorting 0 candidate file list records. [I] 2018-11-06@13:22:44.554 Choosing candidate files. 0 records scanned. [I] Summary of Rule Applicability and File Choices: Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule 0 20 327696 0 0 0 RULE 'Exclude HSM System Files' EXCLUDE WHERE(.) 1 0 0 0 0 0 RULE 'Exclude GPFS System Files' EXCLUDE WHERE(.) 2 0 0 0 0 0 RULE 'Mig2HSM' MIGRATE FROM POOL 'system' THRESHOLD(40.000000,20.000000) WEIGHT(.) TO POOL 'hsm' [I] Filesystem objects with no applicable rules: 9. Predicted Data Pool Utilization in KB and %: Pool_Name KB_Occupied KB_Total Percent_Occupied system 1740800 16777216 10.375976562% [I] 2018-11-06@13:22:44.556 Policy execution. 0 files dispatched. [I] A total of 0 files have been migrated, deleted or processed by an EXTERNAL EXEC/script; 0 'skipped' files and/or errors. sles12sp3:~ # mmlspolicy gpfsfs No policy file was installed for file system 'gpfsfs'. Data will be stored in pool 'system'. sles12sp3:~ # mmchpolicy gpfsfs /root/gpfs.rules -t hsmtest -I test Validated policy 'hsmtest': Parsed 4 policy rules. Policy `hsmtest' installed and broadcast to all nodes. sles12sp3:~ # mmchpolicy gpfsfs /root/gpfs.rules -t hsmtest -I yes Validated policy 'hsmtest': Parsed 4 policy rules. Policy `hsmtest' installed and broadcast to all nodes. sles12sp3:~ # mmlspolicy gpfsfs Policy for file system '/dev/gpfsfs': Installed by root@sles12sp3 on Tue Nov 6 14:25:41 2018. 
First line of policy 'hsmtest' is: RULE EXTERNAL POOL 'hsm' EXEC '/root/bin/hsmScript' OPTS '-server hsm' sles12sp3:~ # mmlspolicy gpfsfs -L RULE EXTERNAL POOL 'hsm' EXEC '/root/bin/hsmScript' OPTS '-server hsm' RULE 'Exclude HSM System Files' EXCLUDE WHERE PATH_NAME LIKE '%/.%' RULE 'Exclude GPFS System Files' EXCLUDE WHERE PATH_NAME LIKE '%/automountdir%' RULE 'Mig2HSM' MIGRATE FROM POOL 'system' THRESHOLD(40,20) WEIGHT(KB_ALLOCATED) TO POOL 'hsm' sles12sp3:~ # mmlsconfig Configuration data for cluster gpfstest.sles12sp3: -------------------------------------------------- clusterName gpfstest.sles12sp3 clusterId 13069010717861054032 dmapiFileHandleSize 32 minReleaseLevel 5.0.1.0 ccrEnabled yes cipherList AUTHONLY adminMode central File systems in cluster gpfstest.sles12sp3: ------------------------------------------- /dev/gpfsfs sles12sp3:~ # mmchconfig enableLowspaceEvents=yes sles12sp3:~ # mmchconfig autoload=yes sles12sp3:~ # mmlsconfig Configuration data for cluster gpfstest.sles12sp3: -------------------------------------------------- clusterName gpfstest.sles12sp3 clusterId 13069010717861054032 dmapiFileHandleSize 32 minReleaseLevel 5.0.1.0 ccrEnabled yes cipherList AUTHONLY enableLowspaceEvents yes autoload yes adminMode central File systems in cluster gpfstest.sles12sp3: ------------------------------------------- /dev/gpfsfs sles12sp3:~ # mmlscallback TSMHSMinitfailover.1.0 command = /usr/bin/dsmmigfs event = nodeLeave node = sles12sp3 parms = recover %eventNode sles12sp3:~ # mmaddcallback MIGRATION --command /usr/lpp/mmfs/bin/mmstartpolicy --event lowDiskSpace --parms "%eventName %fsName" sles12sp3:~ # mmlscallback TSMHSMinitfailover.1.0 command = /usr/bin/dsmmigfs event = nodeLeave node = sles12sp3 parms = recover %eventNode MIGRATION command = /usr/lpp/mmfs/bin/mmstartpolicy event = lowDiskSpace parms = %eventName %fsName
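For completeness: the policy run that the lowDiskSpace callback triggers can also be started by hand (the argument order mirrors the %eventName %fsName parms above), and a callback can be removed again by its identifier. A hedged sketch:

# manually trigger what the callback would run
/usr/lpp/mmfs/bin/mmstartpolicy lowDiskSpace gpfsfs
# remove the callback if it is no longer wanted
mmdelcallback MIGRATION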