Adding a Node to an Oracle RAC Cluster

1. Node Environment

Distribute /etc/hosts to all nodes. I am adding two nodes here: one is the node removed in the previous post, the other is a brand-new node with nothing configured yet.

The server layout is all in the hosts file below; it should be self-explanatory.

[grid@node1 ~]$ cat /etc/hosts
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.194 node2
192.168.0.193 node1
192.168.0.196 node3
192.168.0.183 node1vip
192.168.0.182 node2vip
192.168.0.185 node3vip
172.168.0.196 node3prv
172.168.0.194 node2prv
172.168.0.193 node1prv
192.168.0.176 dbscan
192.168.0.16 standby
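With /etc/hosts distributed, a quick scripted sanity check is cheaper than a failed installer pre-check later. A minimal sketch (the file argument and the commented-out ping are my additions, not part of the original procedure):

```shell
#!/bin/sh
# Sketch: list every non-localhost hostname in an /etc/hosts-style file,
# so each can then be pinged from every node before any installer runs.
# HOSTS_FILE defaults to /etc/hosts; pass another path when testing.
HOSTS_FILE="${1:-/etc/hosts}"

list_cluster_hosts() {
    # Skip comment lines and the IPv4/IPv6 localhost entries;
    # print every hostname/alias field of the remaining lines.
    awk '$1 !~ /^(#|127\.0\.0\.1|::1)/ { for (i = 2; i <= NF; i++) print $i }' "$HOSTS_FILE"
}

for h in $(list_cluster_hosts); do
    echo "would check: $h"
    # On a real node, uncomment to actually test reachability:
    # ping -c 1 -W 2 "$h" >/dev/null && echo "  $h up" || echo "  $h DOWN"
done
```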

2. Adding a Node (Attempt 1, abandoned approach)

01. Verify inter-node communication

./runSSHSetup.sh -user grid -hosts "node1 node2" -advanced -noPromptPassphrase
./runSSHSetup.sh -user oracle -hosts "node1 node2" -advanced -noPromptPassphrase

[oracle@node1 bin]$ pwd
/oracle/app/oracle/product/11.2.0/db_1/oui/bin
[oracle@node1 bin]$ ls
addLangs.sh    detachHome.sh        filesList.sh  runConfig.sh    runSSHSetup.sh
addNode.sh    filesList.bat        lsnodes      runInstaller
attachHome.sh  filesList.properties  resource      runInstaller.sh
[oracle@node1 bin]$ ./runSSHSetup.sh -user grid -hosts "node1 node2" -advanced -noPromptPassphrase
This script will setup SSH Equivalence from the host 'node1' to specified remote hosts.

ORACLE_HOME = /oracle/app/oracle/product/11.2.0/db_1
JAR_LOC = /oracle/app/oracle/product/11.2.0/db_1/oui/jlib
SSH_LOC = /oracle/app/oracle/product/11.2.0/db_1/oui/jlib
OUI_LOC = /oracle/app/oracle/product/11.2.0/db_1/oui
JAVA_HOME = /oracle/app/oracle/product/11.2.0/db_1/jdk

Checking if the remote hosts are reachable.
ClusterLogger - log file location: /oracle/app/oracle/product/11.2.0/db_1/oui/bin/Logs/remoteInterfaces2019-03-14_04-52-46-PM.log
Failed Nodes : node1 node2
Remote host reachability check succeeded.
All hosts are reachable. Proceeding further...


NOTE :
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. You may be prompted for
the password during the execution of the script.
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

If The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, type 'no'. If you type 'yes', the script will remove the old private/public key files and, any previous SSH user setups would be reset.
Enter 'yes', 'no'
yes

Enter the password:
Logfile Location : /oracle/app/oracle/product/11.2.0/db_1/oui/bin/SSHSetup2019-03-14_04-52-54-PM
Checking binaries on remote hosts...
Doing SSHSetup...
Please be patient, this operation might take sometime...Dont press Ctrl+C...
ClusterLogger - log file location: /oracle/app/oracle/product/11.2.0/db_1/oui/bin/Logs/remoteInterfaces2019-03-14_04-52-54-PM.log
Plugin : oracle.sysman.prov.remoteinterfaces.plugins.RemoteCommandSSH found in class path
 Changing Default Plugin from  : oracle.sysman.prov.remoteinterfaces.plugins.RemoteCommandSSH to : oracle.sysman.prov.remoteinterfaces.plugins.RemoteCommandSSH


Local Platform:- Linux

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--node1:--
Running /usr/bin/ssh -x -l grid node1 date to verify SSH connectivity has been setup from local host to node1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
grid@node1's password:
Thu Mar 14 16:53:35 CST 2019
------------------------------------------------------------------------
--node2:--
Running /usr/bin/ssh -x -l grid node2 date to verify SSH connectivity has been setup from local host to node2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
grid@node2's password:
Thu Mar 14 16:53:38 CST 2019
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from node1 to node1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
grid@node1's password:
bash: -c: line 0: unexpected EOF while looking for matching `"'
bash: -c: line 1: syntax error: unexpected end of file
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from node1 to node2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
grid@node1's password:
bash: -c: line 0: unexpected EOF while looking for matching `"'
bash: -c: line 1: syntax error: unexpected end of file
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
[oracle@node1 bin]$ ./runSSHSetup.sh -user oracle -hosts "node1 node2" -advanced -noPromptPassphrase
This script will setup SSH Equivalence from the host 'node1' to specified remote hosts.

ORACLE_HOME = /oracle/app/oracle/product/11.2.0/db_1
JAR_LOC = /oracle/app/oracle/product/11.2.0/db_1/oui/jlib
SSH_LOC = /oracle/app/oracle/product/11.2.0/db_1/oui/jlib
OUI_LOC = /oracle/app/oracle/product/11.2.0/db_1/oui
JAVA_HOME = /oracle/app/oracle/product/11.2.0/db_1/jdk

Checking if the remote hosts are reachable.
ClusterLogger - log file location: /oracle/app/oracle/product/11.2.0/db_1/oui/bin/Logs/remoteInterfaces2019-03-14_04-54-22-PM.log
Failed Nodes : node1 node2
Remote host reachability check succeeded.
All hosts are reachable. Proceeding further...


NOTE :
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. You may be prompted for
the password during the execution of the script.
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

If The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, type 'no'. If you type 'yes', the script will remove the old private/public key files and, any previous SSH user setups would be reset.
Enter 'yes', 'no'
yes

Enter the password:
Logfile Location : /oracle/app/oracle/product/11.2.0/db_1/oui/bin/SSHSetup2019-03-14_04-55-35-PM
Checking binaries on remote hosts...
Doing SSHSetup...
Please be patient, this operation might take sometime...Dont press Ctrl+C...
ClusterLogger - log file location: /oracle/app/oracle/product/11.2.0/db_1/oui/bin/Logs/remoteInterfaces2019-03-14_04-55-36-PM.log
Plugin : oracle.sysman.prov.remoteinterfaces.plugins.RemoteCommandSSH found in class path
 Changing Default Plugin from  : oracle.sysman.prov.remoteinterfaces.plugins.RemoteCommandSSH to : oracle.sysman.prov.remoteinterfaces.plugins.RemoteCommandSSH

Local Platform:- Linux

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--node1:--
Running /usr/bin/ssh -x -l oracle node1 date to verify SSH connectivity has been setup from local host to node1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Thu Mar 14 16:56:14 CST 2019
------------------------------------------------------------------------
--node2:--
Running /usr/bin/ssh -x -l oracle node2 date to verify SSH connectivity has been setup from local host to node2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Thu Mar 14 16:56:14 CST 2019
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from node1 to node1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
bash: -c: line 0: unexpected EOF while looking for matching `"'
bash: -c: line 1: syntax error: unexpected end of file
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from node1 to node2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
bash: -c: line 0: unexpected EOF while looking for matching `"'
bash: -c: line 1: syntax error: unexpected end of file
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
[oracle@node1 bin]$
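Whatever runSSHSetup reports, the real test is whether ssh runs a command without prompting. The sketch below uses OpenSSH's BatchMode so a broken setup fails fast instead of hanging on a password prompt (the node names are examples; run it once as grid and once as oracle):

```shell
#!/bin/sh
# Sketch: verify SSH user equivalence really is passwordless.
# BatchMode=yes makes ssh fail immediately rather than prompt for a
# password, which is exactly the behavior the installers require.
check_equiv() {
    for h in "$@"; do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" date >/dev/null 2>&1; then
            echo "$h: OK"
        else
            echo "$h: NOT set up"
        fi
    done
}

# Example: check_equiv node1 node2
```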

02. Run the add-node command

Run the following as the grid user:

./addNode.sh  "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node2prv}"

 

[grid@node1 bin]$ ./addNode.sh  "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node2prv}"

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "node1"


Checking user equivalence...
User equivalence check passed for user "grid"

WARNING:
Node "node2" already appears to be part of cluster

Pre-check for node addition was successful.
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.  Actual 1948 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.


Performing tests to see whether nodes node2 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
  Source: /oracle/app/oracle/product/11.2.0/db_1
  New Nodes
Space Requirements
  New Nodes
      node2
        /: Required 4.49GB : Available 32.27GB
Installed Products
  Product Names
      Oracle Database 11g 11.2.0.4.0
      Java Development Kit 1.5.0.51.10
      Installer SDK Component 11.2.0.4.0
      Oracle One-Off Patch Installer 11.2.0.3.4
      Oracle Universal Installer 11.2.0.4.0
      Oracle USM Deconfiguration 11.2.0.4.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Oracle DBCA Deconfiguration 11.2.0.4.0
      Oracle RAC Deconfiguration 11.2.0.4.0
      Oracle Database Deconfiguration 11.2.0.4.0
      Oracle Configuration Manager Client 10.3.2.1.0
      Oracle Configuration Manager 10.3.8.1.0
      Oracle ODBC Driver for Instant Client 11.2.0.4.0
      LDAP Required Support Files 11.2.0.4.0
      SSL Required Support Files for InstantClient 11.2.0.4.0
      Bali Share 1.1.18.0.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      Oracle Real Application Testing 11.2.0.4.0
      Oracle Database Vault J2EE Application 11.2.0.4.0
      Oracle Label Security 11.2.0.4.0
      Oracle Data Mining RDBMS Files 11.2.0.4.0
      Oracle OLAP RDBMS Files 11.2.0.4.0
      Oracle OLAP API 11.2.0.4.0
      Platform Required Support Files 11.2.0.4.0
      Oracle Database Vault option 11.2.0.4.0
      Oracle RAC Required Support Files-HAS 11.2.0.4.0
      SQL*Plus Required Support Files 11.2.0.4.0
      Oracle Display Fonts 9.0.2.0.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle JDBC Server Support Package 11.2.0.4.0
      Oracle SQL Developer 11.2.0.4.0
      Oracle Application Express 11.2.0.4.0
      XDK Required Support Files 11.2.0.4.0
      RDBMS Required Support Files for Instant Client 11.2.0.4.0
      SQLJ Runtime 11.2.0.4.0
      Database Workspace Manager 11.2.0.4.0
      RDBMS Required Support Files Runtime 11.2.0.4.0
      Oracle Globalization Support 11.2.0.4.0
      Exadata Storage Server 11.2.0.1.0
      Provisioning Advisor Framework 10.2.0.4.3
      Enterprise Manager Database Plugin -- Repository Support 11.2.0.4.0
      Enterprise Manager Repository Core Files 10.2.0.4.5
      Enterprise Manager Database Plugin -- Agent Support 11.2.0.4.0
      Enterprise Manager Grid Control Core Files 10.2.0.4.5
      Enterprise Manager Common Core Files 10.2.0.4.5
      Enterprise Manager Agent Core Files 10.2.0.4.5
      RDBMS Required Support Files 11.2.0.4.0
      regexp 2.1.9.0.0
      Agent Required Support Files 10.2.0.4.5
      Oracle 11g Warehouse Builder Required Files 11.2.0.4.0
      Oracle Notification Service (eONS) 11.2.0.4.0
      Oracle Text Required Support Files 11.2.0.4.0
      Parser Generator Required Support Files 11.2.0.4.0
      Oracle Database 11g Multimedia Files 11.2.0.4.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
      Oracle Multimedia Annotator 11.2.0.4.0
      Oracle JDBC/OCI Instant Client 11.2.0.4.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
      Precompiler Required Support Files 11.2.0.4.0
      Oracle Core Required Support Files 11.2.0.4.0
      Sample Schema Data 11.2.0.4.0
      Oracle Starter Database 11.2.0.4.0
      Oracle Message Gateway Common Files 11.2.0.4.0
      Oracle XML Query 11.2.0.4.0
      XML Parser for Oracle JVM 11.2.0.4.0
      Oracle Help For Java 4.2.9.0.0
      Installation Plugin Files 11.2.0.4.0
      Enterprise Manager Common Files 10.2.0.4.5
      Expat libraries 2.0.1.0.1
      Deinstallation Tool 11.2.0.4.0
      Oracle Quality of Service Management (Client) 11.2.0.4.0
      Perl Modules 5.10.0.0.1
      JAccelerator (COMPANION) 11.2.0.4.0
      Oracle Containers for Java 11.2.0.4.0
      Perl Interpreter 5.10.0.0.2
      Oracle Net Required Support Files 11.2.0.4.0
      Secure Socket Layer 11.2.0.4.0
      Oracle Universal Connection Pool 11.2.0.4.0
      Oracle JDBC/THIN Interfaces 11.2.0.4.0
      Oracle Multimedia Client Option 11.2.0.4.0
      Oracle Java Client 11.2.0.4.0
      Character Set Migration Utility 11.2.0.4.0
      Oracle Code Editor 1.2.1.0.0I
      PL/SQL Embedded Gateway 11.2.0.4.0
      OLAP SQL Scripts 11.2.0.4.0
      Database SQL Scripts 11.2.0.4.0
      Oracle Locale Builder 11.2.0.4.0
      Oracle Globalization Support 11.2.0.4.0
      SQL*Plus Files for Instant Client 11.2.0.4.0
      Required Support Files 11.2.0.4.0
      Oracle Database User Interface 2.2.13.0.0
      Oracle ODBC Driver 11.2.0.4.0
      Oracle Notification Service 11.2.0.3.0
      XML Parser for Java 11.2.0.4.0
      Oracle Security Developer Tools 11.2.0.4.0
      Oracle Wallet Manager 11.2.0.4.0
      Cluster Verification Utility Common Files 11.2.0.4.0
      Oracle Clusterware RDBMS Files 11.2.0.4.0
      Oracle UIX 2.2.24.6.0
      Enterprise Manager plugin Common Files 11.2.0.4.0
      HAS Common Files 11.2.0.4.0
      Precompiler Common Files 11.2.0.4.0
      Installation Common Files 11.2.0.4.0
      Oracle Help for the  Web 2.0.14.0.0
      Oracle LDAP administration 11.2.0.4.0
      Buildtools Common Files 11.2.0.4.0
      Assistant Common Files 11.2.0.4.0
      Oracle Recovery Manager 11.2.0.4.0
      PL/SQL 11.2.0.4.0
      Generic Connectivity Common Files 11.2.0.4.0
      Oracle Database Gateway for ODBC 11.2.0.4.0
      Oracle Programmer 11.2.0.4.0
      Oracle Database Utilities 11.2.0.4.0
      Enterprise Manager Agent 10.2.0.4.5
      SQL*Plus 11.2.0.4.0
      Oracle Netca Client 11.2.0.4.0
      Oracle Multimedia Locator 11.2.0.4.0
      Oracle Call Interface (OCI) 11.2.0.4.0
      Oracle Multimedia 11.2.0.4.0
      Oracle Net 11.2.0.4.0
      Oracle XML Development Kit 11.2.0.4.0
      Oracle Internet Directory Client 11.2.0.4.0
      Database Configuration and Upgrade Assistants 11.2.0.4.0
      Oracle JVM 11.2.0.4.0
      Oracle Advanced Security 11.2.0.4.0
      Oracle Net Listener 11.2.0.4.0
      Oracle Enterprise Manager Console DB 11.2.0.4.0
      HAS Files for DB 11.2.0.4.0
      Oracle Text 11.2.0.4.0
      Oracle Net Services 11.2.0.4.0
      Oracle Database 11g 11.2.0.4.0
      Oracle OLAP 11.2.0.4.0
      Oracle Spatial 11.2.0.4.0
      Oracle Partitioning 11.2.0.4.0
      Enterprise Edition Options 11.2.0.4.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Thursday, March 14, 2019 5:02:43 PM CST)
.                                                                1% Done.
Instantiation of add node scripts complete
SEVERE:Abnormal program termination. An internal error has occured. Please provide the following files to Oracle Support :

"/oracle/app/oraInventory/logs/addNodeActions2019-03-14_05-02-33PM.log"
"/oracle/app/oraInventory/logs/oraInstall2019-03-14_05-02-33PM.err"
"/oracle/app/oraInventory/logs/oraInstall2019-03-14_05-02-33PM.out"
[grid@node1 bin]$

3. Adding a Node (Attempt 2)

New-node setup: before configuring the node, see the earlier RAC base-configuration posts. The basic OS environment must be prepared first, and the shared disks must also be presented to the new node.

01. Configure SSH mutual trust

On node3 (the new node), as the oracle user:
su - oracle
mkdir ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

On a surviving node:
su - oracle
ssh node3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh node3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys node3:~/.ssh/authorized_keys

On node3 (the new node), as the grid user:
su - grid
mkdir ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

On a surviving node:
su - grid
ssh node3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh node3 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys node3:~/.ssh/authorized_keys
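The append-and-copy steps above are easy to get wrong on permissions: sshd silently ignores a group-writable authorized_keys. A sketch of the local half (the function name is mine; the scp at the end is the same push shown above):

```shell
#!/bin/sh
# Sketch: merge a public key into an authorized_keys file idempotently,
# with the permissions sshd insists on (0700 for .ssh, 0600 for the file).
# Usage: merge_pubkey <pubkey-file> <ssh-dir>
merge_pubkey() {
    key_file="$1"
    ssh_dir="$2"
    mkdir -p "$ssh_dir"
    chmod 700 "$ssh_dir"
    touch "$ssh_dir/authorized_keys"
    # Append only if this exact key line is not already present.
    grep -qxF "$(cat "$key_file")" "$ssh_dir/authorized_keys" \
        || cat "$key_file" >> "$ssh_dir/authorized_keys"
    chmod 600 "$ssh_dir/authorized_keys"
}

# After merging the keys from every node, push the file to the new node:
# scp ~/.ssh/authorized_keys node3:~/.ssh/authorized_keys
```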

02. Verify the new node meets cluster requirements

Run on a surviving node:

[grid@node1 ~]$ cluvfy stage -pre nodeadd -n node3

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "node1"


Checking user equivalence...
User equivalence check passed for user "grid"

WARNING:
Unable to obtain VIP information from node "node1".


Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/oracle/11.2.0/grid/crs" is shared
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.0.0"


Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "172.168.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "172.168.0.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.168.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check failed
Check failed on nodes:
    node3,node1
Free disk space check passed for "node3:/oracle/11.2.0/grid/crs,node3:/tmp"
Free disk space check passed for "node1:/oracle/11.2.0/grid/crs,node1:/tmp"
Check for multiple users with UID value 501 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed

WARNING:
PRVF-7524 : Kernel version is not consistent across all the nodes.
Kernel version = "3.10.0-693.el7.x86_64" found on nodes: node3.
Kernel version = "2.6.32-754.11.1.el6.x86_64" found on nodes: node1.
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check failed for "file-max"
Check failed on nodes:
    node3
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check failed for "aio-max-nr"
Check failed on nodes:
    node3
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check failed for "pdksh"
Check failed on nodes:
    node3,node1
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed


User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes


Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Pre-check for node addition was unsuccessful on all the nodes.
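The pre-check reports "unsuccessful", but look at which checks failed: swap space, the pdksh package, and two kernel parameters (file-max, aio-max-nr) on node3, all of them fixable or commonly waived in a lab. A small filter pulls just the failures out of a saved cluvfy log (the log filename is an example); on 11.2 cluvfy also accepts a -fixup flag that generates a root fixup script for the kernel-parameter failures:

```shell
#!/bin/sh
# Sketch: extract only the failures from a saved cluvfy run so the long
# list of passed checks does not drown them out.
list_failures() {
    # -A 2 keeps the "Check failed on nodes:" line and the node list
    # that cluvfy prints right after each failed check.
    grep -i -A 2 'check failed' "$1"
}

count_failures() {
    # Case-sensitive on purpose, so the follow-up "Check failed on
    # nodes:" lines are not double-counted.
    grep -c 'check failed' "$1"
}

# Example:
# cluvfy stage -pre nodeadd -n node3 -fixup > cluvfy.log 2>&1
# list_failures cluvfy.log
```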

03. Add the node

 

[grid@node1 ~]$ cd /oracle/11.2.0/grid/crs/oui/bin/
[grid@node1 bin]$ pwd
/oracle/11.2.0/grid/crs/oui/bin
[grid@node1 bin]$ ./addNode.sh  "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node3priv}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.  Actual 1999 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.


Performing tests to see whether nodes node2,node3 are available
............................................................... 100% Done.

Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.
SEVERE:Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.

 

This error seems to come up for everyone.

Oracle's official guidance is to run the following:

 

[grid@node1 bin]$  ./detachHome.sh  --- run this in a local graphical session, not over an SSH client such as Xshell
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.  Actual 1999 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
Can't connect to X11 window server using 'localhost:10.0' as the value of the DISPLAY variable.
localhost:10.0
localhost:10.0
'DetachHome' failed. 

[grid@node1 bin]$  ./attachHome.sh  --- run in a graphical session
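detachHome.sh and attachHome.sh are thin wrappers around OUI, which is why they want an X display. If no graphical console is available, OUI's silent mode should do the same job; the sketch below only builds the command lines (the grid home path is this post's, and you should confirm the exact flags against `runInstaller -help` on your install):

```shell
#!/bin/sh
# Sketch: build silent-mode OUI equivalents of detachHome.sh/attachHome.sh.
# These functions only print the command line; run the output as grid once
# the flags are confirmed for your OUI version.
detach_cmd() {
    printf '%s/oui/bin/runInstaller -silent -detachHome ORACLE_HOME=%s\n' "$1" "$1"
}

attach_cmd() {
    printf '%s/oui/bin/runInstaller -silent -attachHome ORACLE_HOME=%s "CLUSTER_NODES={%s}"\n' "$1" "$1" "$2"
}

# Example:
# detach_cmd /oracle/11.2.0/grid/crs
# attach_cmd /oracle/11.2.0/grid/crs node1
```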

 

Update the node list

 

[grid@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME  "CLUSTER_NODES={node1}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.  Actual 1999 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.

 

Re-run the node addition

 

[grid@node1 bin]$ ./addNode.sh  "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node3priv}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.  Actual 1999 MB    Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.


Performing tests to see whether nodes node3 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
  Source: /oracle/11.2.0/grid/crs
  New Nodes
Space Requirements
  New Nodes
      node3
        /: Required 5.29GB : Available 41.26GB
Installed Products
  Product Names
      Oracle Grid Infrastructure 11g 11.2.0.4.0
      Java Development Kit 1.5.0.51.10
      Installer SDK Component 11.2.0.4.0
      Oracle One-Off Patch Installer 11.2.0.3.4
      Oracle Universal Installer 11.2.0.4.0
      Oracle RAC Required Support Files-HAS 11.2.0.4.0
      Oracle USM Deconfiguration 11.2.0.4.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.5
      Oracle DBCA Deconfiguration 11.2.0.4.0
      Oracle RAC Deconfiguration 11.2.0.4.0
      Oracle Quality of Service Management (Server) 11.2.0.4.0
      Installation Plugin Files 11.2.0.4.0
      Universal Storage Manager Files 11.2.0.4.0
      Oracle Text Required Support Files 11.2.0.4.0
      Automatic Storage Management Assistant 11.2.0.4.0
      Oracle Database 11g Multimedia Files 11.2.0.4.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
      Oracle Globalization Support 11.2.0.4.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
      Oracle Core Required Support Files 11.2.0.4.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.4.0
      Oracle Quality of Service Management (Client) 11.2.0.4.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.4.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.4.0
      Oracle JDBC/OCI Instant Client 11.2.0.4.0
      Oracle Multimedia Client Option 11.2.0.4.0
      LDAP Required Support Files 11.2.0.4.0
      Character Set Migration Utility 11.2.0.4.0
      Perl Interpreter 5.10.0.0.2
      PL/SQL Embedded Gateway 11.2.0.4.0
      OLAP SQL Scripts 11.2.0.4.0
      Database SQL Scripts 11.2.0.4.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.4.0
      SQL*Plus Files for Instant Client 11.2.0.4.0
      Oracle Net Required Support Files 11.2.0.4.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.4.0
      RDBMS Required Support Files Runtime 11.2.0.4.0
      XML Parser for Java 11.2.0.4.0
      Oracle Security Developer Tools 11.2.0.4.0
      Oracle Wallet Manager 11.2.0.4.0
      Enterprise Manager plugin Common Files 11.2.0.4.0
      Platform Required Support Files 11.2.0.4.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.4.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.5
      Deinstallation Tool 11.2.0.4.0
      Oracle Java Client 11.2.0.4.0
      Cluster Verification Utility Files 11.2.0.4.0
      Oracle Notification Service (eONS) 11.2.0.4.0
      Oracle LDAP administration 11.2.0.4.0
      Cluster Verification Utility Common Files 11.2.0.4.0
      Oracle Clusterware RDBMS Files 11.2.0.4.0
      Oracle Locale Builder 11.2.0.4.0
      Oracle Globalization Support 11.2.0.4.0
      Buildtools Common Files 11.2.0.4.0
      HAS Common Files 11.2.0.4.0
      SQL*Plus Required Support Files 11.2.0.4.0
      XDK Required Support Files 11.2.0.4.0
      Agent Required Support Files 10.2.0.4.5
      Parser Generator Required Support Files 11.2.0.4.0
      Precompiler Required Support Files 11.2.0.4.0
      Installation Common Files 11.2.0.4.0
      Required Support Files 11.2.0.4.0
      Oracle JDBC/THIN Interfaces 11.2.0.4.0
      Oracle Multimedia Locator 11.2.0.4.0
      Oracle Multimedia 11.2.0.4.0
      Assistant Common Files 11.2.0.4.0
      Oracle Net 11.2.0.4.0
      PL/SQL 11.2.0.4.0
      HAS Files for DB 11.2.0.4.0
      Oracle Recovery Manager 11.2.0.4.0
      Oracle Database Utilities 11.2.0.4.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.4.0
      Oracle Netca Client 11.2.0.4.0
      Oracle Advanced Security 11.2.0.4.0
      Oracle JVM 11.2.0.4.0
      Oracle Internet Directory Client 11.2.0.4.0
      Oracle Net Listener 11.2.0.4.0
      Cluster Ready Services Files 11.2.0.4.0
      Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Friday, March 15, 2019 10:20:16 AM CST)
.                                                                1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Friday, March 15, 2019 10:20:20 AM CST)
............................................................................................... 96% Done.
Home copied to new nodes

Saving inventory on nodes (Friday, March 15, 2019 10:22:44 AM CST)
.                                                              100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/oracle/app/oraInventory/orainstRoot.sh' with root privileges on nodes'node3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/oracle/app/oraInventory/orainstRoot.sh #On nodes node3
/oracle/11.2.0/grid/crs/root.sh #On nodes node3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/11.2.0/grid/crs was successful.
Please check '/tmp/silentInstall.log' for more details.

Per the installer output, run the listed scripts on node3; they must be executed as root.
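The order matters: orainstRoot.sh must register the inventory before root.sh runs. A minimal sketch of a wrapper that enforces this ordering and stops at the first failure (the helper name `run_root_scripts` is my own; the script paths are the ones printed by addNode.sh above):

```shell
#!/bin/sh
# Run the post-addNode root scripts in installer order, stopping at the
# first failure so root.sh never runs against an unregistered inventory.
run_root_scripts() {
    for script in "$@"; do
        echo "Running $script ..."
        sh "$script" || { echo "FAILED: $script" >&2; return 1; }
    done
}

# On node3, as root:
# run_root_scripts /oracle/app/oraInventory/orainstRoot.sh \
#                  /oracle/11.2.0/grid/crs/root.sh
```

Running the scripts by hand, as shown below, works just as well; the wrapper only makes the "first script must succeed before the second runs" rule explicit.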

Script 1: orainstRoot.sh

[grid@node3 ~]$ cd /oracle/app/oraInventory/
[grid@node3 oraInventory]$ ls
backup  ContentsXML  logs  orainstRoot.sh
[grid@node3 oraInventory]$ su
Password:
[root@node3 oraInventory]# sh orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.

Script 2: root.sh

[root@node3 oraInventory]# cd /oracle/11.2.0/grid/crs/
[root@node3 crs]# ls
assistants  csmig  deinstall    gnsd          inventory  ldap    nls      oracore      perl      root.sh        suptools
auth        css    demo        gpnp          javavm    lib      oc4j      oraInst.loc  plsql    rootupgrade.sh  sysman
bin          ctss    diagnostics  has            jdbc      md      ohasd    ord          precomp  scheduler      usm
cfgtoollogs  cv      eons        hs            jdk        mdns    ologgerd  osysmond    racg      slax            utl
clone        dbs    evm          install        jlib      mesg    OPatch    oui          rdbms    sqlplus        wwg
crs          dc_ocm  gipc        instantclient  JRE        network  opmn      owm          relnotes  srvm            xdk
[root@node3 crs]# sh root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/11.2.0/grid/crs

Enter the full pathname of the local bin directory: [/usr/local/bin]:
  Copying dbhome to /usr/local/bin ...
  Copying oraenv to /usr/local/bin ...
  Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/11.2.0/grid/crs/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2019-03-14 22:27:03.400: