Thursday, May 29, 2014

AIX HISTORY



IBM had two discrete Power Architecture hardware lines, each based on a different operating system:
    - OS/400, later i5/OS, and later still IBM i
    - AIX (the same hardware can also run Linux)


I. 1986-1990 (AS/400 - IBM RT):
In 1986, AIX Version 1, a UNIX-based operating system, was introduced for the IBM 6150 RT workstation.
In 1988, the AS/400 platform (hardware) was released for the other product line: OS/400 (later i5/OS and IBM i).


II. 1990-1999 (RS/6000):
Among other variants, IBM later produced AIX Version 3 (also known as AIX/6000), for their IBM POWER-based RS/6000 platform. The RS/6000 family replaced the IBM RT computer platform in February 1990, and was the first computer line to see the use of IBM's POWER and PowerPC based microprocessors. Since 1990, AIX has served as the primary operating system for the RS/6000 series.

AIX Version 4, introduced in 1994, added symmetric multiprocessing with the introduction of the first RS/6000 SMP servers and continued to evolve through the 1990s, culminating with AIX 4.3.3 in 1999. RS/6000 was renamed eServer pSeries in October 2000.


III. 2000-2004 (eServer pSeries):
IBM eServer was a family of computer servers from IBM Corporation. Announced in the year 2000, it combined the various IBM server brands (AS/400, RS/6000...) under one brand.

The various sub-brands were at the same time rebranded from:
    - IBM AS/400 to IBM eServer iSeries, i for Integrated.
    - IBM RS/6000 to IBM eServer pSeries, p for POWER
    ...

They merged to use essentially the same hardware platform in 2001/2002 with the introduction of the POWER4 processor. After that, there was little difference between the "p" and the "i" hardware; the only differences were in the software and services offerings. AIX 5.2 was introduced in October 2002.


IV. 2004-2008 (IBM system i and p):
In 2005 IBM announced a new brand, 'IBM System', as an umbrella for all IBM server and storage brands:
    - IBM eServer iSeries became IBM System i
    - IBM eServer pSeries became IBM System p
    ...

With the introduction of the POWER5 processor in 2004, even the product numbering was synchronized: the System i5 570 was virtually identical to the System p5 570. AIX 5.3 was introduced in August 2004. IBM launched POWER6 in May 2007 and AIX 6.1 in November 2007.


V. 2008-2010 (Power Systems):
In April of 2008, IBM officially merged the two lines of servers and workstations under the same name, Power Systems, with identical hardware and a choice of operating systems, software and service contracts.

Power Systems is the name of IBM's unified Power Architecture-based server line, merging both System i and System p server platforms, and running either IBM i (formerly i5/OS and OS/400), AIX or Linux operating systems. Power Systems was announced April 2, 2008.

In February 2010, IBM announced new models with the POWER7 microprocessor, and AIX 7.1 followed in September 2010.

POWER8 is under development as of this writing.

AIX VG


VOLUME GROUP
When you install a system, the first volume group (VG) is created. It is called the rootvg. Your rootvg volume group is a base set of logical volumes required to start the system. It includes paging space, the journal log, boot data, and dump storage, each on its own separate logical volume.

A normal VG is limited to 32512 physical partitions (32 physical volumes, each with 1016 partitions).
You can change this with: chvg -t4 bbvg (the factor is 4, which means: maximum partitions per PV: 4064 (instead of 1016), maximum disks: 8 (instead of 32))


How do I know if my volume group is normal, big, or scalable?
Run the lsvg command on the volume group and look at the value for MAX PVs. The value is 32 for normal, 128 for big, and 1024 for scalable volume group.

VG type     Maximum PVs    Maximum LVs    Maximum PPs per VG    Maximum PP size
Normal VG     32              256            32,512 (1016 * 32)      1 GB
Big VG        128             512            130,048 (1016 * 128)    1 GB
Scalable VG   1024            4096           2,097,152               128 GB


If a physical volume is part of a volume group, it contains 2 additional reserved areas. One area contains both the VGSA and the VGDA, and it starts within the first 128 reserved sectors (blocks) of the disk. The other area is at the end of the disk and is reserved as a relocation pool for bad blocks.

VGDA (Volume Group Descriptor Area)
It is an area on the hard disk (PV) that contains information about the entire volume group. There is at least one VGDA per physical volume, with one or two copies per disk. It contains the physical volume list (PVIDs), the logical volume list (LVIDs), and the physical partition map (which maps LPs to PPs).

# lqueryvg -tAp hdisk0                                <--look into the VGDA (-A:all info, -t: tagged, without it only numbers)
Max LVs:        256
PP Size:        27                                    <--exponent of 2: 2^27 bytes = 128 MB
Free PPs:       698
LV count:       11
PV count:       2
Total VGDAs:    3
Conc Allowed:   0
MAX PPs per PV  2032
MAX PVs:        16
Quorum (disk):  0
Quorum (dd):    0
Auto Varyon ?:  1
Conc Autovaryo  0
Varied on Conc  0
Logical:        00cebffe00004c000000010363f50ac5.1   hd5 1       <--1: count of mirror copies (00cebff...c5 is the VGID)
                00cebffe00004c000000010363f50ac5.2   hd6 1
                00cebffe00004c000000010363f50ac5.3   hd8 1
                ...
Physical:       00cebffe63f500ee                2   0            <--2:VGDA count 0:code for its state (active, missing, removed)
                00cebffe63f50314                1   0            (The sum of VGDA count should be the same as the Total VGDAs)
Total PPs:      1092
LTG size:       128
...
Max PPs:        32512
-----------------------
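The PP Size value above is stored as an exponent of 2, in bytes. A quick sketch of the conversion (pp_size_mb is an illustrative helper, not an AIX command):

```shell
# Decode the lqueryvg "PP Size" field: 2^exponent bytes -> megabytes.
pp_size_mb() {
    echo $(( (2 ** $1) / 1024 / 1024 ))
}

pp_size_mb 27    # PP Size 27 -> 2^27 bytes = 128 (MB)
```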

VGSA (Volume Group Status Area)
The VGSAs are always present, but they are used with mirroring only. They are needed to track the state of mirror copies, i.e. whether they are synchronized or stale. It is a per-disk structure, but it is stored twice on each disk.


Quorum
Non-rootvg volume groups can be taken offline and brought online by a process called varying on and varying off a volume group. The system checks the availability of all VGDAs for a particular volume group to determine if a volume group is going to be varied on or off.
When attempting to vary on a volume group, the system checks for a quorum of the VGDA to be available. A quorum is equal to 51 percent or more of the VGDAs available. If it can find a quorum, the VG will be varied on; if not, it will not make the volume group available.
Turning off the quorum does not allow a varyonvg without a quorum; it just prevents the closing of an active vg when it loses its quorum. (So a forced varyon may be needed: varyonvg -f VGname)

After turning quorum checking off (chvg -Qn VGname), the change takes effect immediately.
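The 51 percent rule can be sketched as a small check (has_quorum is an illustrative helper; the VGDA counts follow the lqueryvg output shown earlier, where a two-disk VG holds 2 + 1 VGDAs):

```shell
# Return success (0) if 51% or more of the VGDAs are available.
has_quorum() {   # usage: has_quorum <available_vgdas> <total_vgdas>
    [ $(( $1 * 100 )) -ge $(( $2 * 51 )) ]
}

# Two-disk VG: 2 VGDAs on one disk, 1 VGDA on the other (3 total).
has_quorum 2 3 && echo "quorum held (1-VGDA disk lost)"
has_quorum 1 3 || echo "quorum lost (2-VGDA disk lost)"
```

This is why a two-disk VG goes down when the disk carrying two VGDAs fails, unless quorum checking has been disabled.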


LTG
LTG (Logical Track Group) size is the maximum transfer size for disk I/O operations in a volume group.
On AIX 5.3 and 6.1 the LTG size is set dynamically (calculated at each volume group activation).
LTG size can be changed with: varyonvg -M<LTG size>
(chvg -L has no effect on volume groups created on 5.3 or later; it was used on 5.2.)
To display the LTG size of a disk: /usr/sbin/lquerypv -M <hdisk#>


lsvg                      lists the volume groups that are on the system
lsvg -o                   lists all volume groups that are varied on
lsvg -o | lsvg -ip        lists pvs of online vgs
lsvg rootvg               gives details about the vg (lsvg -L <vgname> does not wait for the lock release (useful during mirroring))
lsvg -l rootvg            info about all logical volumes that are part of a vg
lsvg -M rootvg            lists all PV, LV, PP details of a vg (PVname:PPnum LVname:LPnum:Copynum)
lsvg -p rootvg            display all physical volumes that are part of the volume group
lsvg -n <hdisk>           shows vg info read from the VGDA on the specified disk (useful for comparing it across different disks)

mkvg -s 2 -y testvg hdisk13    creates a volume group
    -s                    specify the physical partition size
    -y                    indicate the name of the new vg

chvg                      changes the characteristics of a volume group
chvg -u <vgname>          unlocks the volume group (needed if a command core dumped or the system crashed and the vg is left in a locked state)
                          (Many LVM commands place a lock into the ODM to prevent other commands from working on the vg at the same time.)
extendvg rootvg hdisk7    adds hdisk7 to rootvg (-f forces it: extendvg -f ...)
reducevg rootvg hdisk7    tries to delete hdisk7 (the vg must be varied on) (reducevg -f ... :force it)
                          (it will fail if the vg contains open logical volumes)
reducevg datavg <pvid>    reducevg can use a pvid as well (useful if the disk has already been removed from the ODM, but the VGDA still exists)


syncvg                    synchronizes stale physical partitions (varyonvg is better, because it first re-establishes the reservation and then syncs in the background)
varyonvg rootvg           makes the vg available (-f force a vg to be online if it does not have the quorum of available disks)
                          (varyonvg acts as a self-repair program for VGDAs, it does a syncvg as well)
varyoffvg rootvg          deactivate a volume group
mirrorvg -S P01vg hdisk1  mirroring rootvg to hdisk1 (checking: lsvg P01vg | grep STALE) (-S: background sync)
                          (mirrorvg -m rootvg hdisk1 <-- -m makes an exact copy, the pp mapping will be identical; this way is advised)
unmirrorvg testvg1 hdisk0 hdisk1 remove mirrors on the vg from the specified disks

exportvg avg              removes the VG's definition from the ODM and /etc/filesystems (for ODM problems an exportvg followed by importvg will fix it)
importvg -y avg hdisk8    makes the previously exported vg known to the system (hdisk8 is any disk belonging to the vg)

reorgvg                   rearranges physical partitions within the vg to conform with the placement policy (outer edge...) for the lv.
                          (For this, 1 free pp is needed, and the relocatable flag of the lvs must be set to 'y': chlv -r...)

getlvodm -j <hdisk>       get the vgid for the hdisk from the odm
getlvodm -t <vgid>        get the vg name for the vgid from the odm
getlvodm -v <vgname>      get the vgid for the vg name from the odm

getlvodm -p <hdisk>       get the pvid for the hdisk from the odm
getlvodm -g <pvid>        get the hdisk for the pvid from the odm
lqueryvg -tcAp <hdisk>    get all the vgid and pvid information for the vg from the vgda (directly from the disk)
                          (you can compare the disk with odm: getlvodm <-> lqueryvg)


synclvodm <vgname>        synchronizes or rebuilds the lvcb, the device configuration database, and the vgdas on the physical volumes
redefinevg                helps regain the basic ODM information if it is corrupted (redefinevg -d hdisk0 rootvg)
readvgda hdisk40          shows details from the disk

Physical Volume states (and quorum):
lsvg -p VGName            <--shows pv states (not devices states!)
    active                <--during varyonvg disk can be accessed
    missing               <--during varyonvg disk can not be accessed + quorum is available
                          (after disk repair varyonvg VGName will put in active state)
    removed               <--no disk access during varyonvg + quorum is not available --> you issue varyonvg -f VGName
                          (after force varyonvg in the above case, PV state will be removed, and it won't be used for quorum)
                          (to put back in active state, first we have to tell the system the failure is over:)
                          (chpv -va hdiskX, this defines the disk as active, and after that varyonvg will synchronize)


---------------------------------------

Mirroring rootvg (i.e after disk replacement):
1. disk replaced -> cfgmgr           <--it will find the new disk (i.e. hdisk1)
2. extendvg rootvg hdisk1            <--sometimes extendvg -f rootvg...
(3. chvg -Qn rootvg)                 <--only if quorum setting has not yet been disabled, because this needs a restart
4. mirrorvg -s rootvg                <--add mirror for rootvg (-s: synchronization will not be done)
5. syncvg -v rootvg                  <--synchronize the new copy (lsvg rootvg | grep STALE)
6. bosboot -a                        <--we changed the system, so create a boot image (-a: create complete boot image and device)
                                     (hd5 is mirrored, so no need to do it for each disk, i.e. bosboot -ad hdisk0)
7. bootlist -m normal hdisk0 hdisk1  <--set normal bootlist
8. bootlist -m service hdisk0 hdisk1 <--set bootlist when we want to boot into service mode
(9. shutdown -Fr)                    <--this is needed if quorum has been disabled
10.bootinfo -b                       <--shows the disk which was used for boot

---------------------------------------

Export/Import:
1. node A: umount all fs -> varyoffvg myvg
2. node A: exportvg myvg            <--ODM cleared
3. node B: importvg -y myvg hdisk3  <-- -y: vg name; if omitted, a new vg name will be created (if needed varyonvg -> mount fs)
if fs already exists:
    1. umount the old one and mount the imported one with: mount -o log=/dev/loglv01 -V jfs2 /dev/lv24 /home/michael
    (these details have to be added to the mount command, and they can be retrieved from the LVCB: getlvcb lv24 -At)

    2. vi /etc/filesystems, create a second stanza for the imported filesystems with a new mountpoint.

---------------------------------------

VG problems with ODM:

if varyoff is possible:
1. write down the VG's name, major number, and one of its disks
2. exportvg VGname
3. importvg -V MajorNum -y VGname hdiskX

if varyoff is not possible:
1. write down the VG's name, major number, and one of its disks
2. export the vg through the backdoor, using odmdelete
3. re-import the vg (may produce warnings, but works)
(it is not necessary to umount the fs or stop processes)

---------------------------------------

Changing factor value (chvg -t) of a VG:

A normal or a big vg has the following limitations after creation:
MAX PPs per VG = MAX PVs * MAX PPs per PV

                   Normal       Big
MAX PPs per VG:    32512        130048
MAX PPs per PV:    1016         1016
MAX PVs:           32           128

If we want to extend the vg with a disk which is so large that we would have more than 1016 PPs on that disk, we will receive:
root@bb_lpar: / # extendvg bbvg hdisk4
0516-1162 extendvg: Warning, The Physical Partition Size of 4 requires the
        creation of 1024 partitions for hdisk4.  The limitation for volume group
        bbvg is 1016 physical partitions per physical volume.  Use chvg command
        with -t option to attempt to change the maximum Physical Partitions per
        Physical volume for this volume group.

If we change the factor value of the VG, then extendvg will be possible:
root@bb_lpar: / # chvg -t 2 bbvg
0516-1164 chvg: Volume group bbvg changed.  With given characteristics bbvg
        can include up to 16 physical volumes with 2032 physical partitions each.

Calculation:
Normal VG: 32/factor = new value of MAX PVs
Big VG: 128/factor= new value of MAX PVs

-t   PPs per PV        MAX PV (Normal)    MAX PV (Big)
1    1016              32                 128
2    2032              16                 64
3    3048              10                 42
4    4064              8                  32
5    5080              6                  25
...

"chvg -t" can be used online either increasing or decreasing the value of the factor.

---------------------------------------

Changing Normal VG to Big VG:

If you have reached the MAX PV limit of a Normal VG and playing with the factor (chvg -t) is no longer possible, you can convert it to a Big VG.
It is an online activity, but there must be free PPs on each physical volume, because the VGDA will be expanded on all disks:

root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         23          00..00..00..00..23
hdisk4            active            1023        0           00..00..00..00..00

root@bb_lpar: / # chvg -B bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk4 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        2 partitions and run chvg again.

In this case we have to migrate 2 PPs from hdisk4 to hdisk3 (so 2 PPs will be freed up on hdisk4):

root@bb_lpar: / # lspv -M hdisk4
hdisk4:1        bblv:920
hdisk4:2        bblv:921
hdisk4:3        bblv:922
hdisk4:4        bblv:923
hdisk4:5        bblv:924
...

root@bb_lpar: / # lspv -M hdisk3
hdisk3:484      bblv:3040
hdisk3:485      bblv:3041
hdisk3:486      bblv:3042
hdisk3:487      bblv:1
hdisk3:488      bblv:2
hdisk3:489-511

root@bb_lpar: / # migratelp bblv/920 hdisk3/489
root@bb_lpar: / # migratelp bblv/921 hdisk3/490

root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         21          00..00..00..00..21
hdisk4            active            1023        2           02..00..00..00..00

If we try changing to Big VG again, it now succeeds:
root@bb_lpar: / # chvg -B bbvg
0516-1216 chvg: Physical partitions are being migrated for volume group
        descriptor area expansion.  Please wait.
0516-1164 chvg: Volume group bbvg changed.  With given characteristics bbvg
        can include up to 128 physical volumes with 1016 physical partitions each.

If you check again, the freed-up PPs have been used:
root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            509         0           00..00..00..00..00
hdisk3            active            509         17          00..00..00..00..17
hdisk4            active            1021        0           00..00..00..00..00

---------------------------------------

Changing Normal (or Big) VG to Scalable VG:

If you have reached the MAX PV limit of a Normal or a Big VG and playing with the factor (chvg -t) is no longer possible, you can convert that VG to a Scalable VG. A Scalable VG allows a maximum of 1024 PVs and 4096 LVs, and a very big advantage is that the maximum number of PPs applies to the entire VG and is no longer defined on a per-disk basis.

!!!Converting to a Scalable VG is an offline activity (varyoffvg), and there must be free PPs on each physical volume, because the VGDA will be expanded on all disks.

root@bb_lpar: / # chvg -G bbvg
0516-1707 chvg: The volume group must be varied off during conversion to
        scalable volume group format.

root@bb_lpar: / # varyoffvg bbvg
root@bb_lpar: / # chvg -G bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk2 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        18 partitions and run chvg again.


After migrating some LPs to free up the required PPs (in this case 18), changing to Scalable VG succeeds:
root@bb_lpar: / # chvg -G bbvg
0516-1224 chvg: WARNING, once this operation is completed, volume group bbvg
        cannot be imported into AIX 5.2 or lower versions. Continue (y/n) ?
...
0516-1712 chvg: Volume group bbvg changed.  bbvg can include up to 1024 physical volumes with 2097152 total physical partitions in the volume group.

---------------------------------------

0516-008 varyonvg: LVM system call returned an unknown error code (2).
solution: export LDR_CNTRL=MAXDATA=0x80000000@DSA (check /etc/environment if LDR_CNTRL has a value, which is causing the trouble)

---------------------------------------

If VG cannot be created:
root@aix21c: / # mkvg -y tvg hdisk29
0516-1376 mkvg: Physical volume contains a VERITAS volume group.
0516-1397 mkvg: The physical volume hdisk29, will not be added to
the volume group.
0516-862 mkvg: Unable to create volume group.
root@aixc: / # chpv -C hdisk29        <--clears owning volume manager from a disk, after this mkvg was successful

 ---------------------------------------

root@aix1: /root # importvg -L testvg -n hdiskpower12
0516-022 : Illegal parameter or structure value.
0516-780 importvg: Unable to import volume group from hdiskpower12.


For me the solution was: there was no PVID on the disk; after adding it (chdev -l hdiskpower12 -a pv=yes) the import was OK.
---------------------------------------


Reorgvg log files, and how it works:

reorgvg activity is logged in lvmcfg:
root@bb_lpar: / # alog -ot lvmcfg | tail -3
[S 17039512 6750244 10/23/11-12:39:05:781 reorgvg.sh 580] reorgvg bbvg bb1lv
[S 7405650 17039512 10/23/11-12:39:06:689 migfix.c 168] migfix /tmp/.workdir.9699494.17039512_1/freemap17039512 /tmp/.workdir.9699494.17039512_1/migrate17039512 /tmp/.workdir.9699494.17039512_1/lvm_moves17039512
[E 17039512 47:320 reorgvg.sh 23] reorgvg: exited with rc=0

Fields of these lines:
S - Start, E - End; PID, PPID; TIMESTAMP

The E (end) line shows how long reorgvg was running (in seconds:milliseconds):
47:320 = 47s 320ms
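Parsing that runtime stamp can be sketched like this (reorgvg_duration_ms is an illustrative helper, not part of alog):

```shell
# Convert the "sec:ms" runtime from an alog E line (e.g. "47:320") to milliseconds.
reorgvg_duration_ms() {
    sec=${1%%:*}; ms=${1##*:}
    echo $(( sec * 1000 + ms ))
}

reorgvg_duration_ms "47:320"   # 47320 ms
```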


For a long-running reorgvg, you can check its status:

1. check the working dir of reorgvg
root@aixdb2: /root # alog -ot lvmcfg | tail -3 | grep workdir
[S 5226546 5288122 10/22/11-13:55:11:001 migfix.c 165] migfix /tmp/.workdir.4935912.5288122_1/freemap5288122 /tmp/.workdir.4935912.5288122_1/migrate5288122 /tmp/.workdir.4935912.5288122_1/lvm_moves5288122


2. check lvm_moves file in that dir (we will need the path of this file):
root@aixdb2: /root # ls -l /tmp/.workdir.4935912.5288122_1 | grep lvm_moves
-rw-------    1 root     system      1341300 Oct 22 13:55 lvm_moves5288122

(it contains all the lp migrations, and reorgvg goes through this file line by line)


3. check the process of reorgvg:
root@aixdb2: /root # ps -ef | grep reorgvg
    root 5288122 5013742   0 13:52:16  pts/2  0:12 /bin/ksh /usr/sbin/reorgvg P_NAVISvg p_datlv

root@aixdb2: /root # ps -fT 5288122
 CMD
 /bin/ksh /usr/sbin/reorgvg P_NAVISvg p_datlv
 |\--lmigratepp -g 00c0ad0200004c000000012ce4ad7285 -p 00c80ef201f81fa6 -n 1183 -P 00c0ad021d62f017 -N 1565
  \--awk -F: {print "-p "$1" -n "$2" -P "$3" -N "$4 }

(lmigratepp shows: -g VGid -p SourcePVid -n SourcePPnumber -P DestinationPVid -N DestinationPPnumber)

lmigratepp shows the actual PP which is migrated at this moment
(if you ask few seconds later it will show the next PP which is migrated, and it uses the lvm_moves file)

4. check the line number of the PP which is being migrated at this moment:
(now the ps command in step 3 is extended with the content of the lvm_moves file)

root@aixdb2: /root # grep -n `ps -fT 5288122|grep migr|awk '{print $12":"$14}'` /tmp/.workdir.4935912.5288122_1/lvm_moves5288122
17943:00c0ad021d66f58b:205:00c0ad021d612cda:1259
17944:00c80ef24b619875:486:00c0ad021d66f58b:205

You can compare the above line numbers (17943, 17944) with the total number of lines in the lvm_moves file:
root@aixdb2: /root # cat /tmp/.workdir.4935912.5288122_1/lvm_moves5288122 | wc -l
   31536

This shows that, of 31536 LP migrations, we are currently at number 17943.
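The same two numbers give a rough progress percentage (an illustrative calculation using integer math only):

```shell
# Percent complete, from the current lvm_moves line number and the file's total.
reorgvg_progress() {   # usage: reorgvg_progress <current_line> <total_lines>
    echo $(( $1 * 100 / $2 ))
}

reorgvg_progress 17943 31536   # roughly 56 (percent)
```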

---------------------------------------

0516-304 : Unable to find device id 00080e82dfb5a427 in the Device
        Configuration Database.


If a disk has been deleted (rmdev) somehow, but it was not removed from the vg:
root@bb_lpar: / # lsvg -p newvg
newvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            31          20          06..00..02..06..06
0516-304 : Unable to find device id 00080e82dfb5a427 in the Device
        Configuration Database.
00080e82dfb5a427  removed           31          31          07..06..06..06..06


The VGDA still shows the missing disk as part of the vg:
root@bb_lpar: / # lqueryvg -tAp hdisk2
...
Physical:       00080e82dfab25bc                2   0
                00080e82dfb5a427                0   4

The VGDA should be updated (on hdisk2), but this is only possible if the PVID is used with reducevg:
root@bb_lpar: / # reducevg newvg 00080e82dfb5a427

---------------------------------------

If you cannot access an hdiskpowerX disk, you may need to reset the reservation bit on it:

root@aix21: / # lqueryvg -tAp hdiskpower13
0516-024 lqueryvg: Unable to open physical volume.
        Either PV was not configured or could not be opened. Run diagnostics.

root@aix21: / # lsvg -p sapvg
PTAsapvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdiskpower12      active            811         0           00..00..00..00..00
hdiskpower13      missing           811         0           00..00..00..00..00
hdiskpower14      active            811         0           00..00..00..00..00

root@aix21: / # bootinfo -s hdiskpower13
0


Possible solution could be emcpowerreset:
root@aix21: / # /usr/lpp/EMC/Symmetrix/bin/emcpowerreset fscsi0 hdiskpower13

(after this varyonvg will bring back the disk into active state)

AIX PV


PV (Physical Volume)

When a disk drive is initially added to the system, it is not yet accessible for operations. To be made accessible, it has to be assigned to a volume group, which means changing it from a disk to a physical volume. The disk drive is assigned an identifier called the physical volume identifier (PVID).

The PVID is a combination of the machine's serial number and the date the PVID was generated and it is written on the first block of the device. The AIX LVM uses this number to identify specific disks. When a volume group is created, the member devices are simply a list of PVIDs.

The PVID for each device is stored in the ODM when the device is configured. The configuration program tries to read the first block of the device. If it succeeds and the first block contains a valid PVID, the PVID value is saved as an attribute in the ODM for that device. Once the PVID is set in the ODM, it can be seen in the output of the lspv command. (The LVM expects the PVIDs to be saved in the ODM, and it uses the ODM attribute when determining which device to open.)

In a configuration with multiple paths to the same logical devices, multiple hdisks show the same PVID in the output of lspv. When the LVM needs to open a device, it selects the first hdisk in the list with the matching PVID.
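The selection rule can be sketched as follows (pick_hdisk is an illustrative helper, and the lspv-style rows below are made-up sample data with hypothetical PVIDs):

```shell
# Sketch of the LVM selection rule: open the first hdisk whose PVID matches.
pick_hdisk() {   # usage: pick_hdisk <pvid>  (reads "hdisk pvid" pairs on stdin)
    awk -v pvid="$1" '$2 == pvid { print $1; exit }'
}

# Two paths (hdisk4, hdisk5) to the same logical device share one PVID.
lspv_sample="hdisk2 00c0ad0200000001
hdisk4 00c0ad0200000002
hdisk5 00c0ad0200000002"

echo "$lspv_sample" | pick_hdisk 00c0ad0200000002   # prints hdisk4 (first match)
```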

-----------------------------

Physical Volume states (not device states!):

active  - If a disk can be accessed during a varyonvg, it gets a PV state of active.

missing - If a disk cannot be accessed during a varyonvg, but quorum is available, the failing disk gets a PV state of missing.
          (after repairing it, varyonvg will bring it back to the active state)

removed - If a disk cannot be accessed during a varyonvg and the quorum of disks is not available, you can issue varyonvg -f VGname.
          (Before varyonvg -f, always check the reason for the failure. If the pv appears to be permanently damaged, use a forced varyonvg.)
          All physical volumes that are missing during this forced vary on will be changed to the physical volume state removed.
          This means that all the VGDA and VGSA copies will be removed from these physical volumes.
          (After repairing the disk, first run chpv -va diskname; this brings it back to the active state, and then varyonvg is needed for sync.)

The opposite of chpv -va is chpv -vr, which brings the disk into the removed state. This works only when all logical volumes on the disk have been closed. Additionally, chpv -vr does not work when the quorum would be lost in the volume group after removing the disk.

------------------------------

lsdev -Pc disk            displays supported storage
chdev -l hdisk7 -a pv=yes changes the disk device to a physical volume by assigning a PVID
                          (the command has no effect if the disk is already a physical volume)
chdev -l hdisk7 -a pv=clear    clears the PVID from the physical volume
lspv                      displays all physical volumes (vpath, hdisk), their PVIDs, their volume groups...
lspv hdisk0               detailed information about a physical volume (vg, pp size, free pp, number of logical volumes)
lspv -l hdisk1            lists all the logical volumes on the physical volume
lspv -p hdisk2            displays a map of all physical partitions located on hdisk2
lspv -M hdisk1            shows which physical partitions are being used for specific logical volumes
bootinfo -s hdisk0        shows the size of a pv in MB
chpv -vr hdisk3           makes hdisk3 unavailable (pv state will be removed)
chpv -va hdisk3           makes hdisk3 available
chpv -c hdisk1            clears the boot record on hdisk1

The allocation permission for a physical volume determines whether physical partitions on that physical volume, which have not yet been allocated to a logical volume, can be allocated to logical volumes:
chpv -ay hdisk2           turns on the allocation permission
chpv -an hdisk2           turns off the allocation permission

migratepv hdisk1 hdisk5   migrates the data from hdisk1 to hdisk5 (moves all lvs; it can be done during normal system activity)
migratepv -l testlv hdisk1 hdisk5 migrates only testlv from hdisk1 to hdisk5
migratelp testlv/1/2 hdisk5/123 migrates the data from the second copy of lp number 1 of the lv to hdisk5 on pp 123

!!!Check if lvmstat is enabled before running migratepv, migratelp, or reorgvg; the command to check: lvmstat -v <vgname>!!!
!!!If it is enabled, the system may crash, so before running those commands disable it: lvmstat -v <vgname> -d !!!

lquerypv -h /dev/hdiskX   shows the disk header
lquerypv -M hdisk0        shows the LTG size for a physical disk
LTG size: Logical track group size is the maximum allowed transfer size for a disk I/O operation.

replacepv hdisk1 hdisk6   replaces physical volume hdisk1 with hdisk6
mkdev -c disk -t 1200mb -s scsi -p scsi0 -w 6,0 -d creates a dummy hdisk (if it is needed to correct the sequence numbers)
mkdev -l hdiskX -p dummy -c disk -t hdisk -w 0000  the same as above (will give an error, but creates it)

------------------------------

SAVING DISK HEADER:

The PVID starts at offset 128.

To prevent the loss of the first 512 bytes of the raw disk (if something goes wrong with chdev), save the current state of the sector with "dd if=/dev/hdiskX of=hdiskX.header bs=512 count=1". If a command overwrites this sector, you can restore it with "dd if=hdiskX.header of=/dev/hdiskX bs=512 count=1". But when you make the copy, be sure that the ASM is stopped, because ASM could make updates to this sector during shutdown.
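The save/restore sequence above can be rehearsed safely against an ordinary file instead of a raw hdisk (a sketch; disk.img and disk.header are scratch file names, not real devices):

```shell
# Create a fake 2 KB "disk" so we never touch a real device.
dd if=/dev/urandom of=disk.img bs=512 count=4 2>/dev/null

# Save the first 512-byte sector (the header, where the PVID would live).
dd if=disk.img of=disk.header bs=512 count=1 2>/dev/null
cp disk.img disk.bak                      # reference copy for comparison

# Simulate an accidental overwrite of sector 0...
dd if=/dev/zero of=disk.img bs=512 count=1 conv=notrunc 2>/dev/null

# ...and restore it from the saved header (conv=notrunc keeps the rest of the
# file intact; writing to a real /dev/hdiskX would not truncate anyway).
dd if=disk.header of=disk.img bs=512 count=1 conv=notrunc 2>/dev/null

cmp disk.img disk.bak && echo "header restored"
```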

------------------------------

Migrating a partition to another disk:

1. root@aix1: /root # lspv -M hdisk2
    hdisk2:1-14
    hdisk2:15       bblv1:1:1
    hdisk2:16       bblv1:2:1
    hdisk2:17       bblv1:3:1    <--we want to move bblv1, lp number 3, first copy (the lv is mirrored, the second copy is on another disk)
    hdisk2:18-67

2. root@aix1: /root # lspv -M hdisk4
    hdisk4:1-14
    hdisk4:15       loglv00:1
    hdisk4:16-67                 <--we want to move it to hdisk4 on physical partition 16

3. root@aix1: /root # migratelp bblv1/3/1 hdisk4/16
    migratelp: Mirror copy 1 of logical partition 3 of logical volume
      bblv1 migrated to physical partition 16 of hdisk4.

4. root@aix1: /root # lspv -M hdisk2
    hdisk2:1-14
    hdisk2:15       bblv1:1:1
    hdisk2:16       bblv1:2:1
    hdisk2:17-67                  <--physical partition 17 is free now

5. root@aix1: /root # lspv -M hdisk4
    hdisk4:1-14
    hdisk4:15       loglv00:1
    hdisk4:16       bblv1:3:1     <--it is here now
    hdisk4:17-67

AIX Mirror Pool


Mirror Pool

Starting with AIX 6.1 TL2, so-called mirror pools were introduced, which make it possible to divide the physical volumes of a scalable volume group into separate pools. Mirror pools allow grouping physical volumes in a scalable volume group so that a mirror copy of a logical volume can be restricted to allocate partitions only from physical volumes in a specified group.

A mirror pool is made up of one or more physical volumes. Each physical volume can only belong to one mirror pool at a time. When creating a logical volume, each copy of the lv being created can be assigned to a mirror pool.

A mirror pool name can be up to 15 characters long and is unique within the volume group it belongs to. Therefore, two separate volume groups can use the same name for their mirror pools.

Any changes to mirror pool characteristics will not affect partitions allocated before the changes were made. The reorgvg command should be used after mirror pool changes are made to move the allocated partitions to conform to the mirror pool restrictions.

------------------------------------------------------

Strict Mirror Pool:
When strict mirror pools are enabled, any logical volume created in the volume group must have mirror pools enabled for each copy of the logical volume. (If this is enabled, all of the logical volumes in the volume group must use mirror pools.)

mkvg -M y -S <hdisk list>                    creating a vg with strict mirror pools
chvg -M y <vg name>                          turns on/off the strict mirror pool setting for a vg (chvg -M n ... will turn it off)
lsvg <vg name>                               shows mirror pool strictness (at the end of the output: MIRROR POOL STRICT: on)

------------------------------------------------------

Super strict Mirror Pool:
A super strict allocation policy can be set so that the partitions allocated for one mirror cannot share a physical volume with the partitions from another mirror. With this setting each mirror pool must contain at least one copy of each logical volume.

mkvg -M s -S <hdisk list>                    creating a vg with the super strict setting
chvg -M s <vg name>                          turns on/off the super strict setting for a vg (chvg -M n ... will turn it off)
lsvg <vg name>                               shows mirror pool strictness (at the end of the output: MIRROR POOL STRICT: super)

------------------------------------------------------

Creating/Removing/Renaming a Mirror Pool (adding disk to a Mirror Pool):

mkvg -S -p PoolA hdisk2 hdisk4 bbvg                     <--creating a new VG with mirror pool
extendvg -p PoolA bbvg hdisk6                           <--extending a VG with a disk (while adding disks to mirror pools)

If we already have a vg:
root@bb_lpar: / # lsvg -P bbvg                          <--lists the mirror pool that each physical volume in the volume group belongs to
Physical Volume   Mirror Pool
hdisk6            None
hdisk7            None

root@bb_lpar: / # chpv -p PoolA hdisk6                  <--creating mirror pool with the given disks (disks should be part of a vg)
root@bb_lpar: / # chpv -p PoolB hdisk7                  (or if the mirror pool already exists, it will add the specified disk to the pool)


root@bb_lpar: / # lsvg -P bbvg
Physical Volume   Mirror Pool
hdisk6            PoolA
hdisk7            PoolB

root@bb_lpar: / # chpv -P hdisk7                        <--removes the physical volume from the mirror pool

root@bb_lpar: / # lsvg -P bbvg
Physical Volume   Mirror Pool
hdisk6            PoolA
hdisk7            None

root@bb_lpar: / # chpv -m PoolC hdisk6                  <--changes the name of the mirror pool

root@bb_lpar: / # lsvg -P bbvg
Physical Volume   Mirror Pool
hdisk6            PoolC
hdisk7            None

------------------------------------------------------

Creating/Mirroring lv to a Mirror Pool:

mklv -c 2 -p copy1=PoolA -p copy2=PoolB bbvg 10       <--creates an lv (with default name:lv00) in the given mirror pools with the given size
mklvcopy -p copy2=PoolB bblv 2                        <--creates a 2nd copy of an lv to the given mirror pool
mirrorvg -p copy2=PoolB -c 2 bbvg                     <--mirrors the whole vg to the given mirror pool

------------------------------------------------------

Adding/Removing an lv to/from a Mirror Pool:

root@bb_lpar: / # lsvg -m bbvg                                <--shows lvs of a vg with mirror pools
Logical Volume    Copy 1            Copy 2            Copy 3
bblv              None              None              None

root@bb_lpar: / # chlv -m copy1=PoolA bblv                    <--enables mirror pools to the given copy of an lv
root@bb_lpar: / # chlv -m copy2=PoolB bblv

root@bb_lpar: / # lsvg -m bbvg                                <--checking again the layout
Logical Volume    Copy 1            Copy 2            Copy 3
bblv              PoolA             PoolB             None

root@bb_lpar: / # chlv -M 1 bb1lv                             <--disables mirror pools for the given copy of the lv

root@bb_lpar: / # lsvg -m bbvg                                <--checking again the layout
Logical Volume    Copy 1            Copy 2            Copy 3
bb1lv             None              PoolB             None

------------------------------------------------------

Viewing Mirror Pools:

lsmp bbvg                                  <--lists the mirror pools of the given vg

lspv hdisk6                                <--shows PV characteristics (the last line shows the mirror pool the pv belongs to)
lspv -P                                    <--shows all PVs in the system (with mirror pools)

lsvg -P bbvg                               <--shows the PVs of a VG (with mirror pools)
lsvg -m bbvg                               <--shows the LVs of a VG (with mirror pools)

lslv bblv                                  <--shows LV characteristics (at the end it shows the mirror pool of each lv copy)

------------------------------------------------------

Correct steps of creating and removing a mirror pool (totally):

Mirror pool is a separate entity from LVM. (I imagine it as a small database which keeps rules and strictness, so the underlying LVM commands succeed or fail based on those rules.) It can happen that you remove the 2nd copy of an LV with rmlvcopy (so it is not in LVM anymore), but mirror pool commands will still show it as an existing copy. So make sure LVM commands (mirrorvg, mklvcopy...) and mirror pool commands (chpv -p, chlv -m copy1=.., chvg -M...) are kept in sync all the time!


Mirror pool information is stored in 3 places: PV, LV and VG.
If you need to create or remove a mirror pool, make sure the mirror pool entry is taken care of in all 3 places.

Creating mirror pool on a VG which is already mirrored at LVM level:

0. check if mirrors are OK (each copy in separate disk)

1. chpv -p <poolname> <diskname>                <--add disks to the mirror pool
   # lspv hdisk0 | grep MIRROR
   MIRROR POOL:        PoolA


2. chlv -m copy1=PoolA fslv00                   <--add lv to the given pool (add all lvs to both pools: copy1 and copy2)
   # lslv fslv00 | grep MIRROR
   COPY 1 MIRROR POOL: PoolA
   COPY 2 MIRROR POOL: PoolB
   COPY 3 MIRROR POOL: None

3. chvg -M <strictness> <vgname>                 <--set strictness for the VG (usually chvg -M s ...)
   # lsvg testvg | grep MIRROR   
   MIRROR POOL STRICT: super

------------------------------------------------------

Removing mirror pool from a system:

1. chvg -M n <vgname>                            <--turn off strictness
   # lsvg testvg | grep MIRROR
   MIRROR POOL STRICT: off

2. chlv -M 2 <lvname>                            <--remove 2nd copy of the LV from mirror pool (remove 1st copy as well: chlv -M 1...)
   # lslv fslv00 | grep MIRROR
   COPY 1 MIRROR POOL: PoolA
   COPY 2 MIRROR POOL: None
   COPY 3 MIRROR POOL: None

Only after every mirror pool reference is removed at LV level:
3. chpv -P <diskname>                            <--remove disk from mirror pool (do it with all disks)
   # lspv hdiskpower0| grep MIRROR
   MIRROR POOL:        None

4. check with lsvg -m <vgname>

------------------------------------------------------

If you remove a mirror pool from a disk, but it still exists at LV level (steps 2 and 3 are not in the correct order), you will get this:

# chpv -P hdiskpower0
0516-1010 chpv: Warning, the physical volume hdiskpower0 has open 
      logical volumes.  Continuing with change.
0516-1812 lchangepv: Warning, existing allocation violates mirror pools. 

     Consider reorganizing the logical volume to bring it into compliance.

# lsvg -m testvg
Logical Volume    Copy 1            Copy 2            Copy 3
loglv00           None              None              None
fslv00                              None              None        <--it will show incorrect data (Pool was not deleted at LV level)
fslv01            None              None              None

# chlv -M 1 fslv00                                                <--remove pool from LV level (copy 1)

# lsvg -m testvg                                                  <--it will show correct info
Logical Volume    Copy 1            Copy 2            Copy 3
loglv00           None              None              None
fslv00            None              None              None
fslv01            None              None              None
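The broken state above can be spotted mechanically: every data row of lsvg -m should have four whitespace-separated fields (lv name + 3 copies), so any row with fewer has a blank copy column. A portable sketch, using the captured lsvg -m output above as sample data in a hypothetical file lsvg_m.out:

```shell
# sample data: the captured "lsvg -m testvg" output from above,
# with the blank Copy 1 column for fslv00
cat > lsvg_m.out <<'EOF'
Logical Volume    Copy 1            Copy 2            Copy 3
loglv00           None              None              None
fslv00                              None              None
fslv01            None              None              None
EOF

# skip the header (NR > 1); flag every row that does not have 4 fields
awk 'NR > 1 && NF != 4 {print $1 " has a missing copy column"}' lsvg_m.out
```

This prints only fslv00, the lv whose mirror pool entry was deleted in the wrong order.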

------------------------------------------------------

Changing from one Mirror Pool to another:

If you have a good working system with mirror pools (A and B) and you are requested to remove the disks of pool A and assign new disks to a pool C:

My suggestion:
1. remove mirror pools totally from the system: from VG, LV and PV level
2. remove unnecessary mirror at LVM level (unmirrorvg from the disks of Pool A)
3. delete disks on the system (from Pool A) and assign new disks to the system (Pool C)
4. create LVM mirror to the new disks on Pool C (mirrorvg)
5. create new mirror pools, Pool A and C (PV, LV and VG level)

 ------------------------------------------------------
 
0516-622 extendlv: Warning, cannot write lv control block data.
0516-1812 lchangelv: Warning, existing allocation violates mirror pools.

Consider reorganizing the logical volume to bring it into compliance.


This can come up when you want to increase an fs (or lv), but the lv layout on the disks does not fully follow the mirror pool restrictions. (For example, there is an lp which exists on a disk in one pool, but it should reside in the other pool.)

The reorgvg command can solve this (it can run for a long time):
reorgvg <vg name> <lv name>

Sometimes reorgvg can't solve it and you have to find manually where the problem is:

1. check lv - mirror pool distribution:

root@aixdb2: /root # lsvg -m P_NAVISvg
Logical Volume    Copy 1            Copy 2            Copy 3
p_admlv           VMAX_02           VMAX_03           None
p_datlv           VMAX_02           VMAX_03           None
p_archlv          VMAX_02           VMAX_03           None
...

As you can see, all of the 1st copies belong to VMAX_02 and all of the 2nd copies to VMAX_03.


2. check disk - mirror pool distribution

root@aixdb2: /root # lspv -P
Physical Volume   Volume Group      Mirror Pool
hdiskpower1       P_NAVISvg         VMAX_03    <--it should contain only 2nd copy of lvs
hdiskpower2       P_NAVISvg         VMAX_03    <--it should contain only 2nd copy of lvs
...
hdiskpower18      P_NAVISvg         VMAX_02    <--it should contain only 1st copy of lvs
hdiskpower19      P_NAVISvg         VMAX_02    <--it should contain only 1st copy of lvs
hdiskpower20      P_NAVISvg         VMAX_02    <--it should contain only 1st copy of lvs


3. check lv - disk distribution


From the output of lsvg -M <vg name>, you can see which disk the 1st and 2nd copy of an lv resides on.
After that you can check whether that disk belongs to the correct mirror pool or not.

This will sort the disks with lvs on them and show which copy (1st or 2nd) is there:
root@aixdbp2: /root # lsvg -M P_NAVISvg | awk -F: '{print $1,$2,$4}'| awk '{print $1,$3,$4}'| sort -u | sort -tr +1 -n

P_NAVISvg:
hdiskpower18 t_datlv 1
hdiskpower18 t_oralv 1
hdiskpower19 p_datlv 2    <--2nd copy of p_datlv resides on hdiskpower19, but hdiskpower19 should contain only 1st copy
hdiskpower19 p_oralv 1
hdiskpower19 t_archlv 1

(the above command, lsvg -M ... sort -tr +1 -n, was written for hdiskpower disks (-tr: the delimiter is 'r'))
(if you have only hdisk, you can change it to lsvg -M ... sort -tk +1 -n (delimiter 'k'); if you omit this sort, the command should work as well)
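To see what the pipeline does without a live system, it can be replayed on a captured sample of lsvg -M output (hypothetical file lsvg_M.out). The old-style `sort -tr +1 -n` is written here in its modern equivalent `sort -t r -k 2n`, since current GNU sort rejects the obsolete `+1` syntax. Note how `sort -u` collapses the two p_datlv lps into a single disk/lv/copy line:

```shell
# sample lsvg -M output: header, allocated pps, and one free range
cat > lsvg_M.out <<'EOF'
P_NAVISvg:
hdiskpower19:890        p_datlv:9969:2
hdiskpower18:100        t_datlv:55:1
hdiskpower19:891        p_datlv:9970:2
hdiskpower18:101        t_oralv:7:1
hdiskpower2:329-400
EOF

# keep disk, lv name and copy number; dedupe; sort numerically by the
# digits after the 'r' of "hdiskpower"
awk -F: '{print $1,$2,$4}' lsvg_M.out \
    | awk '{print $1,$3,$4}' \
    | sort -u | sort -t r -k 2n
```

The two p_datlv lines (lps 9969 and 9970, both copy 2 on hdiskpower19) come out as one "hdiskpower19 p_datlv 2" row, which is exactly the per-disk/per-copy view used in the check above.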


4. migrating the wrong lps to a correct disk

checking lps of an lv:
root@aixdb2: /root # lspv -M hdiskpower19 | grep p_datlv
hdiskpower19:889        p_datlv:9968:2
hdiskpower19:890        p_datlv:9969:2
hdiskpower19:891        p_datlv:9970:2

After finding the correct disk with free pps (e.g. lspv -M <disk> will show the free pps):
root@aixdb2: /root # migratelp p_datlv/9968/2 hdiskpower2/329

(Sometimes it is not enough to give migratelp the disk name only (e.g. hdiskpower2); the pp number is needed as well (e.g. hdiskpower2/329))

AIX LVM


LVM (Logical Volume Manager):
LVM sits between the physical disks and the filesystem layer and presents the storage in a structured, logical view (volume groups, logical volumes).

/var/adm/ras/lvmcfg.log        lvm log file shows what lvm commands were used (alog -ot lvmcfg)
alog -ot lvmt                  shows lvm commands and libs

The LVM consists of:
    -high level commands: can be used by users, e.g.: mklv (this can call an intermediate level command)
    -intermediate level commands: these are used by high-level commands, e.g. lcreatelv (users should not use these)
    -LVM subroutine interface library: it contains routines used by commands, e.g. lvm_createlv
    -Logical Volume Device Driver (LVDD): manages and processes all I/O; it is called by jfs or lvm library routines
    -Disk Device Driver: It is called by LVDD
    -Adapter Device Driver: it provides an interface to the physical disk

This shows how the execution of a high level command goes through the different layers of LVM:


LOGICAL VOLUME

After you create a volume group, you can create logical volumes within that volume group. Logical partitions and logical volumes make up the logical view. Logical partitions map to and are identical in size to the physical partitions. A physical partition is the smallest unit of allocation of disk where the data is actually stored. A logical volume is a group of one or more logical partitions that can span multiple physical volumes. All the physical volumes it spans must be in the same volume group.

A logical volume consists of a sequence of one or more logical partitions. Each logical partition has at least one and at most three corresponding physical partitions, which can be located on different physical volumes.

When you first define a logical volume, the characteristics of its state (LV STATE) will be closed. It will become open when, for example, a file system has been created in the logical volume and mounted.
It is also possible that you might want to create a logical volume and put nothing on it. This is known as a raw logical volume. Databases frequently use raw devices.

Logical Volume types:
    - log logical volume: used by jfs/jfs2
    - dump logical volume: used by the system dump to copy selected areas of kernel data when an unexpected system halt occurs
    - boot logical volume: contains the initial information required to start the system
    - paging logical volume: used by the virtual memory manager to swap out pages of memory

Users and applications will use these lvs:
    - raw logical volumes: these are controlled by the application (it will not use jfs/jfs2)
    - journaled filesystems (jfs/jfs2)


Striped logical volumes:
Striping is a technique for spreading the data in a logical volume across several physical volumes in such a way that the I/O capacity of the physical volumes can be used in parallel to access the data.


LVCB (Logical Volume Control Block)
The first 512 bytes of each logical volume in normal VGs (in big VGs it moved partially into the VGDA, and for scalable VGs completely). (Traditionally it was the fs boot block.) The LVCB stores the attributes of the LV. Jfs does not access this area.
# getlvcb -AT <lvname>                                <--shows the LVCB of the lv

--------------------

LOGICAL VOLUME:     hd2                    VOLUME GROUP:   rootvg
LV IDENTIFIER:      0051f2ba00004c00000000f91d51e08b.5 PERMISSION:     read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs                    WRITE VERIFY:   off
MAX LPs:            512                    PP SIZE:        32 megabyte(s)
COPIES:             2                      SCHED POLICY:   parallel
LPs:                73                     PPs:            146
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       center                 UPPER BOUND:    32
MOUNT POINT:        /usr                   LABEL:          /usr
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO


inter-policy    inter-physical volume allocation policy, can be minimum or maximum
                minimum: to allocate pps, the minimum number of pvs will be used (not spreading the data across all pvs if possible)
                maximum: spreads the physical partitions of this logical volume over as many physical volumes as possible

This illustration shows 2 physical volumes. One contains partition 1 and a copy of partition 2. The other contains partition 2 with a copy of partition 1. The formula for allocation is Maximum Inter-Disk Policy (Range=maximum) with a Single Logical Volume Copy per Disk (Strict=y).


each lp copy on separate pv    The strictness value. Current state of allocation: strict, nonstrict, or superstrict. A strict allocation states that no copies for a logical partition are allocated on the same physical volume. If the allocation does not follow the strict criteria, it is called nonstrict. A nonstrict allocation states that copies of a logical partition can share the same physical volume. A superstrict allocation states that no partition from one mirror copy may reside on the same disk as another mirror copy (mirror 2 and mirror 3 cannot be on the same disk).

(So inter-policy and strictness together determine how many disks are used: spreading to the maximum number of disks (for the 1st copies) and then mirroring them requires another bunch of disks; spreading to the minimum number of disks and mirroring requires fewer disks.)


intra-policy    intra-physical volume allocation policy, it specifies which strategy should be used for choosing pps on a pv.
                it can be: edge (outer edge), middle (outer middle), center, inner middle, inner edge


If you specify a region but it gets full, further partitions are allocated as near to that region as possible.
The more I/O an lv receives, the closer to the outer edge its pps should be allocated.

mirror write consistency If turned on, LVM keeps additional information to allow recovery of inconsistent mirrors.
                  Mirror write consistency recovery should be performed for most mirrored logical volumes.
                  MWC is necessary to mirror lvs with the parallel scheduling policy.

sched policy      how reads and writes are handled to mirrorred logical volumes
                  parallel (default): read from least busy disk, write to all copies concurrently (at the same time)
                  sequential: read from the primary copy only (if not available, then the next copy); writes are sequential (one after another)
                  (1 book suggests sequential because it works with MWC)

Write verify      If turned on, all writes will be verified with a follow-up read. This negatively impacts performance but can be useful.

BB policy         Bad block relocation policy. (bad blocks are relocatable or not)

Relocatable       Indicates whether the partitions can be relocated if a reorganization of partition allocation takes place.

Upper Bound       the maximum number of physical volumes a logical volume can use for allocation

------------------

# lslv -l pdwhdatlv

PV                COPIES        IN BAND       DISTRIBUTION
hdiskpower5       125:000:000   3%            000:004:000:076:045

Copies            shows information about each copy (separated by :) on the disk (125 pps of the first copy, and no other mirrors are on the disk)

In Band           the percentage of pps on the disk which were allocated within the region specified by Intra-physical allocation policy

Distribution      how many pps are allocated in: outer edge, outer middle, center, inner middle, and inner edge (125=4+76+45)
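As a sanity check of such a row, the COPIES total and the DISTRIBUTION total must add up to the same number (here 125 = 0+4+0+76+45). A small awk sketch replaying that arithmetic on the sample row above:

```shell
# the lslv -l row from above: PV, COPIES, IN BAND, DISTRIBUTION
row="hdiskpower5       125:000:000   3%            000:004:000:076:045"

# sum the colon-separated COPIES ($2) and DISTRIBUTION ($4) fields
# and compare the totals
echo "$row" | awk '{
    n1 = split($2, c, ":"); ct = 0; for (i = 1; i <= n1; i++) ct += c[i]
    n2 = split($4, d, ":"); dt = 0; for (i = 1; i <= n2; i++) dt += d[i]
    if (ct == dt) print "consistent"; else print "mismatch"
}'
```
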

------------------


lslv lvname       displays information about the logical volume
lslv -m lvname    displays the logical partitions (LP) and their corresponding physical partitions (PP)
lslv -l lvname    displays which physical volumes the lv resides on
lslv -p <hdisk>   displays the logical volume allocation map for the disk (shows used, free, stale for each physical partition)
lslv -p <hdisk> <lv> displays the same as above, but the given lv's partitions will be shown by numbers

    Open          Indicates active if LV contains a file system   
    Closed        Indicates inactive if LV contains a file system   
    Syncd         Indicates that all copies are identical   
    Stale         Indicates that copies are not identical   


mklv -y newlv1 datavg 1    creates a logical volume (mklv -y'testlv' -t'jfs' rootvg 100 <--creates a jfs lv with 100 lps)
    -y newlv1     name of the lv
    datavg        in which vg the lv will reside
    1             how many logical partitions add to the lv

mklv -t jfs2log -y <lvname> <vgname> 1 <pvname> creates a jfs2log lv (after creation format it: logform -V jfs2 <loglvname>)

rmlv              removes a logical volume
rmlv -f loglv     removes without confirmation

mklvcopy bblv 2 hdisk2    make a 2nd copy (1LP=2PP) of bblv to hdisk2 (synchronization will be needed: syncvg -p hdisk2 hdisk3)
rmlvcopy bblv 1 hdisk3    leave 1 copy (1LP=1PP) only and remove those from hdisk3

getlvcb           display the LVCB (Logical Volume Control Block) of a logical volume
extendlv          increasing the size of a logical volume
cplv              copying a logical volume
chlv              changes the characteristic of a logical volume

migratelp testlv/1/2 hdisk5/123 migrates testlv's data from the 1st lp's second copy to hdisk5 on pp 123
                 (the output of lspv -M hdiskX can be used: lvname:lpnumber:copy, this sequence is needed)
                 (if it is not mirrored, it is easier this way: migratelp testlv/1 hdisk3)
                 (if it is mirrored and we use the above command, then the 1st copy will be used: testlv/1/1...)

migratelp in a for loop:
for i in $(lslv -m p1db2lv | grep hdiskpower11 | tail -50 | cut -c 2-4); do migratelp p1db2lv/$i hdiskpower3; done

lresynclv        resyncs a logical volume (low-level command, presumably for mirrored lvs)

------------------

Creating a new log logical volume:


1. mklv -t jfs2log -y lvname vgname 1 pvname        <-- creates the log lv
2. logform -V jfs2 /dev/lvname
3. chfs -a log=/dev/lvname /fsname                  <--switches the fs to the new log lv (it can be checked in /etc/filesystems)

------------------

Resynchronizing a logical volume:

1. root@aix16: / # lslv hd6 | grep IDENTIFIER
LV IDENTIFIER:      00c2a5b400004c0000000128f907d534.2

2. lresynclv -l 00c2a5b400004c0000000128f907d534.2

------------------

Striped lv extending problems:

extending is only possible in multiples of the stripe width (if it is 2, lps can be added as 2, 4, 6...)
if the lv can't be extended, the upper bound can cause this:

lslv P02ctmbackuplv | grep UPPER
UPPER BOUND:    2

It means that the lv can only reside on 2 disks, but if those 2 disks have no more free space, it can't be extended to other disks.
The upper bound should be changed: chlv -u 4 P02ctmbackuplv


After this, the extension should be possible.
------------------

Unable to find lv in the Device Configuration Database

1. synclvodm <vgname>         <--synchronizes/rebuilds the ODM entries from the volume group descriptors on the physical volumes
2. rmlv <lvname>              <--removes the unwanted logical volume

------------------

Migrating PPs between disks:


checking the PPs of test1lv:
lslv -m test1lv
test1lv:/home/test1fs
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0001 hdisk6
0002  0002 hdisk6
0003  0003 hdisk6
...
0057  0057 hdisk6
0058  0058 hdisk6
0059  0059 hdisk6

the command: migratelp test1lv/59 hdisk7
(it will migrate LP #59 to hdisk7)

in a for loop:
for i in $(lslv -m shadowlv | grep hdisk1 | tail -10 | cut -c 2-4); do
migratelp shadowlv/${i} hdisk0
done
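The same loop can be tested as a dry run first: replay it on captured lslv -m output (hypothetical file lslv_m.sample, built from the test1lv example above) and only print the migratelp commands. One caveat worth noting: an unanchored `grep hdisk1` would also match hdisk10, hdisk11 etc., so the pattern here is anchored.

```shell
# sample data: a few rows of the "lslv -m test1lv" output from above
cat > lslv_m.sample <<'EOF'
test1lv:/home/test1fs
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0001 hdisk6
0057  0057 hdisk6
0058  0058 hdisk6
0059  0059 hdisk6
EOF

# anchored pattern (' hdisk6$') so e.g. hdisk60 would not match too;
# echo prints the commands -- drop it to actually run migratelp
for i in $(grep ' hdisk6$' lslv_m.sample | tail -2 | cut -c 2-4); do
    echo migratelp test1lv/$i hdisk7
done
```
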


------------------

Once had a problem with an lv and its mirror copies:

root@bb_lpar: / # lsvg -l bbvg
bbvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
0516-1147 : Warning - logical volume bblv may be partially mirrored.
bblv                jfs2       16      20      3    closed/syncd  /bb


root@bb_lpar: / # mirrorvg bbvg
0516-1509 mklvcopy: VGDA corruption: physical partition info for this LV is invalid.
0516-842 mklvcopy: Unable to make logical partition copies for

        logical volume.
0516-1199 mirrorvg: Failed to create logical partition copies
        for logical volume bblv.
0516-1200 mirrorvg: Failed to mirror the volume group.


root@bb_lpar: / # lslv -l bblv
0516-1939 : PV identifier not found in VGDA.


root@bb_lpar: / # rmlvcopy bblv 1 hdisk2
0516-1939 lquerypv: PV identifier not found in VGDA.
0516-304 getlvodm: Unable to find device id 0000000000000000 in the Device

        Configuration Database.
0516-848 rmlvcopy: Failure on physical volume 0000000000000000, it may be missing
        or removed.



The partially mirrored lps caused a big mess in the VGDA and LVM, so the solution was the removal of these lps with a low-level command: lreducelv

1. checking the problematic lps:
root@bb_lpar: / # lslv -m bblv
bblv:/bb
LP    PP1  PV1               PP2  PV2               PP3  PV3
0001  0008 hdisk2
0002  0009 hdisk2
0003  0010 hdisk2
0004  0011 hdisk2
0005  0012 hdisk2
0006  0013 hdisk2
0007  0014 hdisk2
0008  0015 hdisk2
0009  0008 hdisk3            0016 hdisk2
0010  0009 hdisk3            0017 hdisk2
0011  0010 hdisk3            0018 hdisk2
0012  0012 hdisk3            0019 hdisk2
0013  0001 hdisk2
0014  0002 hdisk2
0015  0003 hdisk2
0016  0004 hdisk2


2. creating a text file with these wrong lps which will be used by lreducelv:
1st column: PVID of the disk with wrong lps (lspv hdisk2: 00080e82dfab25bc)
2nd column: PP# of the wrong lps (lslv -m bblv: PP2 column)
3rd column: LP# of the wrong lps (lslv -m bblv: LP column)

root@bb_lpar: / # vi partial_mir.txt
00080e82dfab25bc 0016 0009
00080e82dfab25bc 0017 0010
00080e82dfab25bc 0018 0011
00080e82dfab25bc 0019 0012
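Instead of typing the file in vi, it can also be generated from the lslv -m output. A sketch on captured sample data (hypothetical file lslv_m_bblv.out); the PVID 00080e82dfab25bc is the one lspv shows for hdisk2, and the partially mirrored rows are exactly the ones with 5 fields (LP in $1, PP2 in $4, PV2 in $5):

```shell
# sample data: the partially mirrored rows of "lslv -m bblv" from above
cat > lslv_m_bblv.out <<'EOF'
bblv:/bb
LP    PP1  PV1               PP2  PV2               PP3  PV3
0008  0015 hdisk2
0009  0008 hdisk3            0016 hdisk2
0010  0009 hdisk3            0017 hdisk2
0011  0010 hdisk3            0018 hdisk2
0012  0012 hdisk3            0019 hdisk2
EOF

# rows with a 2nd copy on hdisk2 have 5 fields; emit "PVID PP# LP#"
awk '$5 == "hdisk2" && NF == 5 {print "00080e82dfab25bc", $4, $1}' \
    lslv_m_bblv.out > partial_mir.txt
cat partial_mir.txt
```

This produces the same four lines as the hand-written partial_mir.txt above.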


3. removing the partial mirror copies:
lreducelv -l <LV ID> -s <NUMBER of LPs> <TEXT FILE>

LV ID: 00080e820000d900000001334c11e0de.1 (lslv bblv)
NUMBER of LPs: 4 (wc -l partial_mir.txt)
TEXT FILE: partial_mir.txt

root@bb_lpar: / # lreducelv -l 00080e820000d900000001334c11e0de.1 -s 4 partial_mir.txt

Now LVM has deallocated all pps of the partial mirror.


4. After this, lslv -m will show correct output, but the LVCB or VGDA could still show that we have 2 copies:
root@bb_lpar: /tmp/bb # odmget -q name=bblv CuAt | grep -p copies

CuAt:
        name = "bblv"
        attribute = "copies"
        value = "2"
        type = "R"
        generic = "DU"

(We can see this stanza only if there is mirroring; otherwise the odmget command produces no output.)


root@bb_lpar: /tmp/bb # getlvcb -AT bblv
         AIX LVCB
         intrapolicy = m
         copies = 1

(odmget shows we have 2 copies, while getlvcb shows we have only 1 copy.)

It is probably safer to update both with the correct value:
putlvodm -c <COPYNUM> <LV ID>
putlvcb -c <COPYNUM> <LV NAME>

COPYNUM: 1
LV ID: 00080e820000d900000001334c11e0de.1 (lslv bblv)

root@bb_lpar: /tmp/bb # putlvodm -c 1 00080e820000d900000001334c11e0de.1
root@bb_lpar: /tmp/bb # putlvcb -c 1 bblv

source of this solution: http://archive.rootvg.net/cgi-bin/anyboard.cgi/aix?cmd=get&cG=73337333&zu=37333733&v=2&gV=0&p=

------------------