2011-04-29

how to ADD/REMOVE/REPLACE/MOVE ocr and voting disks

OCR / Vote disk Maintenance Operations (ADD/REMOVE/REPLACE/MOVE) (Doc ID 428681.1)

Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.2.0.1.0 - Release: 10.2 to 11.2
Information in this document applies to any platform.

Goal

The goal of this note is to provide the steps to add, remove, replace, or move an Oracle Cluster Registry (OCR) or voting disk in Oracle Clusterware 10g Release 2 (10.2.0.1 and later) environments. It also provides steps to move OCR, voting, and ASM devices from raw devices to block devices.

This article is intended for DBAs and Support Engineers who need to modify or move OCR and voting disk files, and for customers with an existing clustered environment deployed on a storage array who want to migrate to a new storage array with minimal downtime.

Typically, one would simply cp or dd the files once the new storage has been presented to the hosts. Here it is a little more involved, because:

1. Oracle Clusterware has the OCR and voting disks open and is actively using them (both primaries and mirrors).
2. There is a cluster API provided for this purpose (ocrconfig and crsctl), which is the appropriate interface rather than plain cp and/or dd commands.

It is highly recommended to take a backup of the voting disks and the OCR device before making any changes.

Oracle Cluster Registry (OCR) and Voting Disk Additional clarifications

The following steps assume the cluster is set up using Oracle redundancy with 3 voting disks and 2 OCR devices.


Solution

ADD/REMOVE/REPLACE/MOVE OCR Device
Note: You must be logged in as the root user, because root owns the OCR files. An ocrmirror must also be in place before trying to replace the OCR device; ocrconfig -replace fails with PROT-16 or PROT-1 if there is no ocrmirror. If an OCR device is replaced with a device of a different size, the new size is not reflected until the Clusterware is restarted.


Make sure there is a recent copy of the OCR file before making any changes:

ocrconfig -showbackup
If there is no recent backup copy of the OCR file, an export of the current OCR can be taken. Use the following command to generate an export of the online OCR file:

In 10.2

# ocrconfig -export <OCR export_filename> -s online
In 11.1

# ocrconfig -manualbackup
node1 2008/08/06 06:11:58 /crs/cdata/crs/backup_20080807_003158.ocr

If you need to recover using this file, the following command can be used:

ocrconfig -import <OCR export_filename>
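The version split above (export in 10.2, manualbackup in 11.1) can be wrapped in a small helper. This is only a sketch: the release string is assumed to be available to the script, and the /tmp export path is a hypothetical placeholder, not a recommendation from the note.

```shell
# Sketch: pick the OCR backup command this note describes for each release.
# The /tmp export file name is a placeholder; choose a persistent location.
ocr_backup_cmd() {
    case "$1" in
        10.*) echo "ocrconfig -export /tmp/ocr_export.dmp -s online" ;;
        11.*) echo "ocrconfig -manualbackup" ;;
        *)    echo "unsupported release: $1" >&2; return 1 ;;
    esac
}
```

The helper only prints the command, so the output can be reviewed before it is run as root.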


To see whether your OCRs are in sync and healthy, run ocrcheck; every check should return "succeeded", as below.

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 497928
Used space (kbytes) : 312
Available space (kbytes) : 497616
ID : 576761409
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded

Cluster registry integrity check succeeded
1. To add an OCR device:

To add an OCR device, provide the full path including file name.

ocrconfig -replace ocr <filename>
To add an OCR mirror device, provide the full path including file name.

ocrconfig -replace ocrmirror <filename>
2. To remove an OCR device:

To remove an OCR device:

ocrconfig -replace ocr
To remove an OCR mirror device

ocrconfig -replace ocrmirror
3. To replace or move the location of an OCR device:

To replace the OCR device with <filename>, provide the full path including file name.

ocrconfig -replace ocr <filename>
To replace the OCR mirror device with <filename>, provide the full path including file name.

ocrconfig -replace ocrmirror <filename>
Example Moving OCR from Raw Device to Block Device

The OCR disk must be owned by root, must be in the oinstall group, and must have permissions set to 640. Provide at least 100 MB disk space for the OCR.
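The ownership and mode requirements above can be checked from a script before handing a device to ocrconfig. A sketch, assuming Linux with GNU stat (the expected values root:oinstall / 640 come from this note):

```shell
# Sketch: print "<octal mode> <owner> <group>" for a device, e.g.
# "640 root oinstall", and compare against the OCR requirement.
device_perms() {
    stat -c '%a %U %G' "$1"
}

ocr_device_ok() {
    [ "$(device_perms "$1")" = "640 root oinstall" ]
}
```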

In this example the OCR files will be on the following devices:

/dev/raw/raw1
/dev/raw/raw2
There are two ways to move the OCR (Oracle Cluster Registry) from raw devices to block devices: one requires a full cluster outage, the other none. The offline method is recommended for 10.2 and earlier, since a cluster outage is required anyway due to an Oracle bug that prevents online addition and deletion of voting files. This bug is fixed in 11.1, so from 11.1 onwards either the online or the offline method can be used.

Method 1 (Online)

If there are additional block devices of same or larger size available, one can perform 'ocrconfig -replace'.

PROS: No cluster outage required. Run two commands and the changes are reflected across the entire cluster.

CONS: Temporary additional block devices of at least 256 MB are needed. The storage behind the old raw devices can be reclaimed once the operation completes.

On one node, as root, run:

# ocrconfig -replace ocr /dev/sdb1
# ocrconfig -replace ocrmirror /dev/sdc1
Every ocrconfig or ocrcheck command writes a trace file to the $CRS_Home/log/<hostname>/client directory. Below is an example from a successful "ocrconfig -replace ocr" command.


Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.
2008-08-06 07:07:10.424: [ OCRCONF][3086866112]ocrconfig starts...
2008-08-06 07:07:11.328: [ OCRCONF][3086866112]Successfully replaced OCR and set block 0
2008-08-06 07:07:11.328: [ OCRCONF][3086866112]Exiting [status=success]...


Now run ocrcheck to verify that the OCR points to the block devices and that no error is returned.
Status of Oracle Cluster Registry is as follows :

Version : 2
Total space (kbytes) : 497776
Used space (kbytes) : 3844
Available space (kbytes) : 493932
ID : 576761409
Device/File Name : /dev/sdb1
Device/File integrity check succeeded
Device/File Name : /dev/sdc1
Device/File integrity check succeeded

Cluster registry integrity check succeeded
Method 2 (Offline)

This is an in-place method for when additional storage is not available, but it requires cluster downtime.

Below, the existing mapping from the raw bindings to the block devices is defined in /etc/sysconfig/rawdevices:

/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1

# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33

# ls -ltra /dev/raw/raw*
crw-r----- 1 root oinstall 162, 1 Jul 24 10:39 /dev/raw/raw1
crw-r----- 1 root oinstall 162, 2 Jul 24 10:39 /dev/raw/raw2

# ls -ltra /dev/*
brw-r----- 1 root oinstall 8, 17 Jul 24 10:39 /dev/sdb1
brw-r----- 1 root oinstall 8, 33 Jul 24 10:39 /dev/sdc1

1. Shut down Oracle Clusterware on all nodes using "crsctl stop crs" as root.

2. On all nodes run the following commands as root:

# ocrconfig -repair ocr /dev/sdb1
# ocrconfig -repair ocrmirror /dev/sdc1
3. On one node as root run:

# ocrconfig -overwrite
In the $CRS_Home/log/<hostname>/client directory there is a trace file from "ocrconfig -overwrite", named ocrconfig_<pid>.log, which should end with status=success as below:

# cat /crs/log/node1/client/ocrconfig_20022.log


Oracle Database 10g CRS Release 10.2.0.4.0 Production Copyright 1996, 2008 Oracle. All rights reserved.
2008-08-06 06:41:29.736: [ OCRCONF][3086866112]ocrconfig starts...
2008-08-06 06:41:31.535: [ OCRCONF][3086866112]Successfully overwrote OCR configuration on disk
2008-08-06 06:41:31.535: [ OCRCONF][3086866112]Exiting [status=success]...
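Checking the client trace file for the success marker can be automated. A sketch, assuming the trace file path is passed in (the $CRS_Home log layout is as described above):

```shell
# Sketch: return success only if an ocrconfig client trace ended with
# "Exiting [status=success]", as in the excerpt above.
ocrconfig_succeeded() {
    grep -q 'Exiting \[status=success\]' "$1"
}
```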
As a verification step, run ocrcheck on all nodes; the Device/File Name fields should show the block devices that replaced the raw devices:

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 497776
Used space (kbytes) : 3844
Available space (kbytes) : 493932
ID : 576761409
Device/File Name : /dev/sdb1
Device/File integrity check succeeded
Device/File Name : /dev/sdc1
Device/File integrity check succeeded

Cluster registry integrity check succeeded
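For scripted verification across nodes, the device paths can be pulled out of the ocrcheck output. A sketch, assuming the "Device/File Name : <path>" output format shown above:

```shell
# Sketch: print only the configured OCR device paths from `ocrcheck` output.
ocr_devices() {
    awk -F: '/Device\/File Name/ { gsub(/^[ \t]+/, "", $2); print $2 }'
}
```

Typical use would be "ocrcheck | ocr_devices" on each node, comparing the result against the expected block devices.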
Example of adding an OCR device file

If you have upgraded your environment from a previous version, where you only had one OCR device file, you can use the following step to add an additional OCR file.

In this example a second OCR device file is added:
Add /dev/raw/raw2 as OCR mirror device

ocrconfig -replace ocrmirror /dev/raw/raw2
ADD/DELETE/MOVE Voting Disk

10.2 (all versions)


Note: crsctl votedisk commands must be run as root

Shut down the Oracle Clusterware ("crsctl stop crs" as root) on all nodes before making any modification to the voting disks. Determine the current voting disk locations using:
crsctl query css votedisk

1. To add a Voting Disk, provide the full path including file name:

crsctl add css votedisk <RAW_LOCATION> -force
2. To delete a Voting Disk, provide the full path including file name:

crsctl delete css votedisk <RAW_LOCATION> -force
3. To move a Voting Disk, provide the full path including file name:

crsctl delete css votedisk <OLD_LOCATION> -force
crsctl add css votedisk <NEW_LOCATION> -force
After modifying the voting disk, start the Oracle Clusterware stack on all nodes

# crsctl start crs
Verify the voting disk location using

crsctl query css votedisk
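The delete/add pair in step 3 can be generated for review before anything is run. A sketch that only prints the commands (the device names are just examples; the cluster must be down and the commands run as root, as described above):

```shell
# Sketch: print the 10.2 move sequence for one voting disk so it can be
# reviewed before execution. Does not run anything itself.
votedisk_move_cmds() {
    echo "crsctl delete css votedisk $1 -force"
    echo "crsctl add css votedisk $2 -force"
}
```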

11.1.0.6 and onwards

Note: crsctl votedisk commands must be run as root

Starting with 11.1, the commands below can be performed online.

1. To add a Voting Disk, provide the full path including file name:

crsctl add css votedisk <RAW_LOCATION>
2. To delete a Voting Disk, provide the full path including file name:

crsctl delete css votedisk <RAW_LOCATION>
3. To move a Voting Disk, provide the full path including file name:

crsctl delete css votedisk <OLD_LOCATION>
crsctl add css votedisk <NEW_LOCATION>
Verify the voting disk location using

crsctl query css votedisk


Example Moving Voting Disk from Raw Device to Block Device

The voting disk is a partition that Oracle Clusterware uses to verify cluster node membership and status.

The voting disk must be owned by the oracle user, must be in the dba group, and must have permissions set to 644. In 10g provide at least 20 MB of disk space for the voting disk; in 11g provide at least 280 MB.

In this example the voting disks will be on the following devices:


/dev/raw/raw4
/dev/raw/raw5
/dev/raw/raw6
Back up the voting disks before starting any modification.

To determine the configured voting devices, run "crsctl query css votedisk":

# crsctl query css votedisk
0. 0 /dev/raw/raw4
1. 0 /dev/raw/raw5
2. 0 /dev/raw/raw6
located 3 votedisk(s).
Backup Voting

Take a backup of all voting disks:

dd if=voting_disk_name of=backup_file_name
For Windows:

ocopy \\.\votedsk1 o:\backup\votedsk1.bak
Note: Use UNIX man pages for additional information on the dd command. The following can be used to restore the voting disk from the backup file created.

# dd if=backup_file_name of=voting_disk_name
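The per-disk dd backup above can be scripted by parsing the "crsctl query css votedisk" output. A sketch, assuming the output format shown earlier; the backup directory is a hypothetical example:

```shell
# Sketch: extract votedisk paths from `crsctl query css votedisk` output
# (lines like " 0.     0    /dev/raw/raw4") and dd each to a backup file.
parse_votedisks() {
    awk '/^[ \t]*[0-9]+\./ { print $NF }'
}

backup_votedisks() {
    dir=${1:-/u01/backup/votedisks}   # assumed location, adjust as needed
    mkdir -p "$dir"
    crsctl query css votedisk | parse_votedisks | while read -r disk; do
        dd if="$disk" of="$dir/$(basename "$disk").bak"
    done
}
```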



Moving Voting Device from RAW Device to Block Device

Moving voting disks from raw to block devices in all 10.2 versions requires a full cluster downtime.

10.2 (all versions)

1) First run crsctl query css votedisk to determine the currently configured voting disks.

# crsctl query css votedisk
0. 0 /dev/raw/raw4
1. 0 /dev/raw/raw5
2. 0 /dev/raw/raw6
located 3 votedisk(s).
2) Shut down Oracle Clusterware on all nodes using "crsctl stop crs" as root.


Note: For 10g the cluster must be down; for 11.1 this is an online operation and no cluster outage is required.


3) Removing all voting disks is not allowed; at least one must remain at all times. One spare raw or block device is therefore needed if the existing raw devices are to be reused.

Perform the below commands on one node only.

# crsctl delete css votedisk /dev/raw/raw4 -force
# crsctl add css votedisk /dev/vote1 -force
# crsctl delete css votedisk /dev/raw/raw5 -force
# crsctl delete css votedisk /dev/raw/raw6 -force
# crsctl add css votedisk /dev/vote2 -force
# crsctl add css votedisk /dev/vote3 -force

4) Verify the configuration with crsctl query css votedisk after the adds and deletes.

# crsctl query css votedisk
0. 0 /dev/vote1
1. 0 /dev/vote2
2. 0 /dev/vote3
located 3 votedisk(s).

5) After this the Oracle Clusterware stack can be restarted with "crsctl start crs" as root.
Monitor the cluster alert log, $CRS_Home/log/<hostname>/alertnode1.log; the newly configured voting disks should come online:


2008-08-06 07:41:55.029
[cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote1. Details in /crs/log/node1/cssd/ocssd.log.
2008-08-06 07:41:55.038
[cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote2. Details in /crs/log/node1/cssd/ocssd.log.
2008-08-06 07:41:55.058
[cssd(31750)]CRS-1605:CSSD voting file is online: /dev/vote3. Details in /crs/log/node1/cssd/ocssd.log.
[cssd(31750)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 .
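The alert-log check can be scripted. A sketch that fails unless every listed device has a CRS-1605 "voting file is online" message in the given log (log path and device names are this note's examples):

```shell
# Sketch: verify the cluster alert log reports each new voting disk online.
votedisks_online() {
    log=$1; shift
    for dev in "$@"; do
        grep -q "CRS-1605:CSSD voting file is online: $dev" "$log" || return 1
    done
}
```

Typical use: votedisks_online /crs/log/node1/alertnode1.log /dev/vote1 /dev/vote2 /dev/vote3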




11.1.0.6 and onwards


Starting with 11.1, the commands below can be performed online.


1) Change voting configuration

# crsctl delete css votedisk /dev/raw/raw4
# crsctl add css votedisk /dev/vote1
# crsctl delete css votedisk /dev/raw/raw5
# crsctl delete css votedisk /dev/raw/raw6
# crsctl add css votedisk /dev/vote2
# crsctl add css votedisk /dev/vote3
2) During the add and delete operations, monitor the following files to verify that each add/delete was successful:

$CRS_Home/log/<hostname>/alertnode1.log
$CRS_Home/log/<hostname>/cssd/ocssd.log
The cluster alert log, $CRS_Home/log/<hostname>/alertnode1.log, prints a "Reconfiguration complete" message when you delete a voting disk and a "CSSD voting file is online" message when you add one:


[cssd(6047)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 .
2008-08-06 05:31:28.937
[cssd(6047)]CRS-1605:CSSD voting file is online: /dev/vote1. Details in /crs/log/node1/cssd/ocssd.log.
[cssd(6047)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 .
2008-08-06 05:34:46.777
[cssd(6047)]CRS-1605:CSSD voting file is online: /dev/vote2. Details in /crs/log/node1/cssd/ocssd.log.
[cssd(6047)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 .
2008-08-06 05:34:52.443
[cssd(6047)]CRS-1605:CSSD voting file is online: /dev/vote3. Details in /crs/log/node1/cssd/ocssd.log.
[cssd(6047)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 node2 .


3) Verify the configuration with crsctl query css votedisk after the add and delete commands.

# crsctl query css votedisk
0. 0 /dev/vote1
1. 0 /dev/vote2
2. 0 /dev/vote3
located 3 votedisk(s).
How to move ASM devices from raw device to block device

The following is a best-practice way to move ASM devices from raw to block devices.

The only change ASM requires is modifying asm_diskstring: point it from the raw devices, e.g. /dev/raw/raw*, to the block devices, e.g. /dev/asm*.

This can be done in the init+ASM.ora initialization file by adding "asm_diskstring='/dev/asm*'", or online via "alter system set asm_diskstring='/dev/asm*' scope=spfile;" if the ASM instance uses an spfile. Because many file pointers are open to the raw devices, an ASM shutdown/startup must be performed.

srvctl stop asm -n node1
srvctl stop asm -n node2
Modify the init+ASMx.ora on all nodes and add a line like the following:

asm_diskstring='/dev/asm*'

srvctl start asm -n node1
srvctl start asm -n node2
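The pfile edit in the sequence above can be sketched as a small helper. This assumes a plain-text init+ASMx.ora pfile (take a copy of the pfile first); if the instance uses an spfile, use the alter system command instead:

```shell
# Sketch: set asm_diskstring in a text pfile, replacing any existing setting.
set_asm_diskstring() {
    pfile=$1 pattern=$2
    grep -v '^asm_diskstring=' "$pfile" > "$pfile.tmp"
    echo "asm_diskstring='$pattern'" >> "$pfile.tmp"
    mv "$pfile.tmp" "$pfile"
}
```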
After the modification and the restart, connect to the ASM instance and query v$asm_disk to see the new asm_diskstring reflected:

SQL> select MOUNT_STATUS, NAME, PATH from v$asm_disk;

MOUNT_S NAME PATH
------- ------------------------------ ----------
CACHED RODATA_0002 /dev/asm23
CACHED RODATA_0003 /dev/asm24
CACHED RODATA_0004 /dev/asm16
CACHED RODATA_0001 /dev/asm15
CACHED RODATA_0005 /dev/asm13
CACHED RODATA_0000 /dev/asm14
If ASMLIB is used, then no modification is needed, because ASMLIB already uses block devices.



