Series: Oracle RAC 12.2 — Complete Installation on VMware Workstation
📌 About this series: This guide covers Oracle RAC 12.2, which is out of support. The goal is to document learning and revisit concepts – not to recommend this version for production. Read the full context in the series overview →
⚙️ Prerequisite: Post 2 completed – Oracle Linux configured on both nodes, iSCSI connected, and SSH equivalence validated.
In Post 2 we configured Oracle Linux on both nodes and validated the environment. Now we’ll install Grid Infrastructure 12.2 – the component that turns two independent servers into an Oracle cluster.
The entire installation is performed from orclrac1.
💡 In practice: Grid is the most sensitive part of the entire RAC installation. Most failures don’t happen during the installation itself – they happen because of misconfigured prerequisites: wrong UDEV rules, SCAN not resolving, GIMR ignored. This post documents every error hit in the lab.
Preparation
Configure UDEV for ASM Disks
Run as root on both nodes:
cat > /etc/udev/rules.d/99-oracle-asmdevices.rules << 'EOF'
KERNEL=="sdc", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sde", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdf", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdg", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdh", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660"
EOF
udevadm control --reload-rules && udevadm trigger
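The six rules are identical except for the device name, so they can also be generated with a short loop and redirected into the rules file. A sketch, assuming the same sdc through sdh disk layout as this lab:

```shell
# Sketch only: emit the six UDEV rules with a loop instead of
# typing them by hand (sdc..sdh is this lab's disk layout).
for d in sdc sdd sde sdf sdg sdh; do
  printf 'KERNEL=="%s", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660"\n' "$d"
done
```

Redirecting the loop's output into /etc/udev/rules.d/99-oracle-asmdevices.rules produces the same file as above.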
ls -la /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
Prepare the Grid Installer
cd /u01/app/12.2.0/grid
unzip -q /u01/stage/linuxx64_12201_grid_home.zip
orclrac2 does not need to unzip – the OUI copies the binaries automatically over SSH.
Install cvuqdisk on Both Nodes
rpm -ivh /u01/app/12.2.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm
scp /u01/app/12.2.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm root@orclrac2:/tmp/
rpm -ivh /tmp/cvuqdisk-1.0.10-1.rpm
Configure Local DNS (dnsmasq)
The OUI validates SCAN via nslookup, which doesn’t read /etc/hosts. Without DNS, the DNS/NIS check fails.
cat > /etc/resolv.conf << 'EOF'
search oracle.local
nameserver 8.8.8.8
EOF
yum install -y dnsmasq
cat > /etc/dnsmasq.conf << 'EOF'
no-resolv
no-poll
listen-address=127.0.0.1,192.168.15.170
bind-interfaces
domain=oracle.local
local=/oracle.local/
EOF
systemctl enable dnsmasq && systemctl start dnsmasq
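By default dnsmasq answers from /etc/hosts, so the SCAN name must be listed there on orclrac1. A sketch of what the entries look like – the three IPs below are illustrative placeholders, not necessarily this lab's actual addresses:

```
# /etc/hosts on orclrac1 (IPs are placeholders for illustration)
192.168.15.171   orclrac-scan.oracle.local   orclrac-scan
192.168.15.172   orclrac-scan.oracle.local   orclrac-scan
192.168.15.173   orclrac-scan.oracle.local   orclrac-scan
```

With three entries for the same name, dnsmasq returns all three addresses for orclrac-scan, which is what the OUI expects from a SCAN lookup.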
cat > /etc/resolv.conf << 'EOF'
search oracle.local
nameserver 192.168.15.170
EOF
chattr +i /etc/resolv.conf
On orclrac2:
cat > /etc/resolv.conf << 'EOF'
search oracle.local
nameserver 192.168.15.170
EOF
chattr +i /etc/resolv.conf
Validate:
nslookup orclrac-scan
Grid Infrastructure Installation
ssh -X oracle@192.168.15.170
yum install -y xorg-x11-utils xorg-x11-xauth xterm
cd /u01/app/12.2.0/grid
./gridSetup.sh
All 19 OUI Screens
Step 1 — Configuration Option: Configure Oracle Grid Infrastructure for a New Cluster
Step 2 — Cluster Configuration: Configure an Oracle Standalone Cluster
Step 3 — Grid Plug and Play:
| Field | Value |
|---|---|
| Cluster Name | orclrac-cluster |
| SCAN Name | orclrac-scan |
| SCAN Port | 1521 |
| GNS | unchecked |
Step 4 — Cluster Node Information — Click “Add”:
| Field | Value |
|---|---|
| Public Hostname | orclrac2 |
| Node Role | HUB |
| Virtual Hostname | orclrac2-vip |
Click “SSH Connectivity”, enter the oracle password → “Setup”. Both nodes should return “Succeeded”.
Step 5 — Network Interface Usage:
| Interface | Type |
|---|---|
| eth0 | Public |
| eth1 | ASM & Private |
| eth2 | Do Not Use |
Step 6 — Storage Option: Configure ASM using block devices
Step 7 — Grid Infrastructure Management Repository: Yes
Selecting Yes is mandatory. With No, the OUI returns [INS-30515] even with External redundancy, because Oracle reserves additional space in the OCR disk group for internal CRS data.
Step 8 — Create ASM Disk Group (OCR):
| Field | Value |
|---|---|
| Disk group name | OCR |
| Redundancy | External |
| Allocation Unit Size | 4 MB |
| Disks | /dev/sdc, /dev/sdd, /dev/sde |
| ASM Filter Driver | unchecked |
Step 9 — Create ASM Disk Group (MGMT — GIMR):
| Field | Value |
|---|---|
| Disk group name | MGMT |
| Redundancy | External |
| Disks | /dev/sdf |
Step 10 — ASM Password: Welcome1
Step 11 — Failure Isolation: Do not use IPMI (virtual environment, no IPMI hardware)
Step 12 — Management Options: EM Cloud Control unchecked
Step 13 — Privileged OS Groups:
| Group | Value |
|---|---|
| OSASM | dba |
| OSDBA for ASM | asmdba |
| OSOPER for ASM | asmoper |
Step 14 — Installation Location:
| Field | Value |
|---|---|
| Oracle Base | /u01/app/oracle |
| Software Location | /u01/app/12.2.0/grid |
Warning [INS-40109] is expected — click Yes.
Step 15 — Create Inventory: /u01/app/oraInventory / group oinstall
Step 16 — Root Script Execution: Automatically run configuration scripts + root password
Step 17 — Prerequisite Checks – see Known Errors section
Step 18 — Summary → Install
Step 19 — Install Product / Finish – wait for completion
Create DATA and FRA Disk Groups
Via sqlplus
su - oracle
grid_env
sqlplus / as sysasm
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
DISK '/dev/sdg' NAME DATA1
ATTRIBUTE 'compatible.asm'='12.2','compatible.rdbms'='12.2';
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
DISK '/dev/sdh' NAME FRA1
ATTRIBUTE 'compatible.asm'='12.2','compatible.rdbms'='12.2';
SELECT NAME, STATE, TOTAL_MB, FREE_MB FROM V$ASM_DISKGROUP;
Via asmca (GUI)
su - oracle
grid_env
asmca
Right-click Disk Groups → Create → create DATA (/dev/sdg) and FRA (/dev/sdh), both with External redundancy.
Verify Grid
crsctl stat res -t
olsnodes -n -i -s
crsctl query css votedisk
ocrcheck
Known Errors
Swap Size — Warning
Symptom: Swap Size appears as Warning on the Prerequisite Checks screen.
Cause: Oracle recommends at least 8 GB of swap for VMs with 8 GB of RAM. The OL7 automatic partitioning creates a smaller swap.
Fix: This is only a Warning — it does not block the installation. Check Ignore All and proceed.
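For reference, the swap sizing rule the OUI applies can be sketched as a small calculation. The thresholds below follow Oracle's 12.2 install guide (1.5x RAM up to 2 GB, equal to RAM between 2 GB and 16 GB, capped at 16 GB above that); treat them as an approximation:

```shell
# Recommended swap for a given RAM size, per Oracle's 12.2 guideline (MB).
ram_mb=8192                        # this lab's VMs: 8 GB of RAM
if [ "$ram_mb" -le 2048 ]; then
  swap_mb=$(( ram_mb * 3 / 2 ))    # up to 2 GB: 1.5x RAM
elif [ "$ram_mb" -le 16384 ]; then
  swap_mb=$ram_mb                  # 2-16 GB: swap equal to RAM
else
  swap_mb=16384                    # above 16 GB: capped at 16 GB
fi
echo "recommended swap: ${swap_mb} MB"
```

For this lab's 8 GB VMs the guideline works out to 8192 MB, which is why anything smaller triggers the Warning.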
Network Time Protocol (NTP) / chrony — Failed
Symptom: The checks “Network Time Protocol (NTP)” and “chrony daemon is synchronized with at least one external time source” appear as Failed.
Cause: Chrony cannot synchronize with an external source due to no internet access in the lab. The OUI validates synchronization with an external NTP server, which doesn’t exist in this environment.
Fix: This does not affect cluster functionality in a lab. Check Ignore All and proceed.
[INS-30515] Insufficient space available in the selected disks
Cause 1: NORMAL redundancy with 10 GB disks. Fix: Use External redundancy.
Cause 2: The /dev/sdf disk (MGMT/GIMR) has less space than Oracle requires (minimum ~38 GB). The OUI shows the exact minimum size in the error message. Fix: Make sure lv-gimr was created with at least 40960 MB in Openfiler before starting the Grid installation.
INS-41808 / INS-30064 — Missing ASM groups on orclrac2
Symptom: user "oracle" does not belong to the OS group "asmoper" on remote nodes
Cause: The VM was cloned before the ASM groups were created. Fix: run as root on orclrac2:
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
usermod -a -G asmadmin,asmoper,asmdba oracle
Device Checks for ASM — Group mismatch
Symptom: Group of device "/dev/sdc" did not match. [Expected = "dba"; Found = "asmadmin"]
Fix: align the UDEV rules with the expected group and reload, on both nodes:
sed -i 's/GROUP="asmadmin"/GROUP="dba"/' \
/etc/udev/rules.d/99-oracle-asmdevices.rules
udevadm control --reload-rules && udevadm trigger
DNS/NIS — SCAN failed to resolve
Cause: The OUI uses nslookup, not /etc/hosts. Fix: Configure dnsmasq as described above. If the check still fails, click “Ignore All” — the cluster will work correctly.
CRS-1705 — Found 1 voting file but 2 required (after restarting VMs)
Cause: iSCSI reconnected through both interfaces on startup, duplicating the disks.
iscsiadm -m node \
-T iqn.2006-01.com.openfiler:rac-storage \
-p 192.168.15.175 --logout
iscsiadm -m node \
-T iqn.2006-01.com.openfiler:rac-storage \
-p 192.168.15.175 --op delete
crsctl stop crs -f
sleep 10
crsctl start crs
sleep 30
crsctl check crs
ORA-15030 — diskgroup name is in use
Cause: Disk group creation failed partway. Fix: Reboot both servers — ASM clears inconsistent state on boot.
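If a reboot is not convenient, a half-created disk group can usually also be cleared from sqlplus as SYSASM. A sketch only — DATA below stands for whichever name ORA-15030 reports, and FORCE is valid only when the group is not mounted on any node:

```sql
-- Run as SYSASM on one node; substitute the name from the ORA-15030 message.
DROP DISKGROUP DATA FORCE INCLUDING CONTENTS;
```

After the drop, re-run the CREATE DISKGROUP statement from the section above.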
Next Up
In Post 4 we’ll install Oracle Database software and create the RAC database with DBCA.
