CRS-4000: Command Start failed, or completed with errors.
Feb 11 23:23:32 RACDB01 grid: [ID 702911 user.error]
exec /u01/app/grid/product/11.2.0/grid/perl/bin/perl -I/u01/app/grid/product/11.2.0/grid/perl/lib /u01/app/grid/product/11.2.0/grid/bin/crswrapexece.pl
HAS Does Not Start After Server Reboot. CRS-4124 And CRS-4000 Errors (Doc ID 1624661.1)
The ownership of the perl executable in GRID_HOME was changed at some point. It must be owned by the GRID owner; in this environment that is the "grid" user, not "oracle".
Restore ownership of the perl executable to the GRID owner (here, the grid user):
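The check and the fix can be scripted; a minimal sketch, assuming the GRID_HOME path from the log above and a grid:oinstall owner/group (both are assumptions — adjust to your site):

```shell
# Sketch: find files under the GRID_HOME perl tree that are not owned by
# the GRID owner, then repair them. GRID_HOME and grid:oinstall are
# assumptions taken from this environment; run the chown as root.
GRID_HOME=/u01/app/grid/product/11.2.0/grid
GRID_OWNER=grid

# List any perl files with the wrong owner (no output means all is well)
find "$GRID_HOME/perl" ! -user "$GRID_OWNER" -exec ls -ld {} \; 2>/dev/null

# Repair, as root (oinstall is the usual Oracle inventory group):
# chown -R grid:oinstall "$GRID_HOME/perl"
```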
Cluster failed to start due to problem with socket pipe npohasd (Doc ID 1612325.1)
CRS does not start after a server reboot, and starting CRS/HAS manually fails with CRS-4124 and CRS-4000.
There are no new entries in alert_<hostname>.log or any other CRS log file; only the OS system log records the failed startup.
Relinking the binaries and rebooting brought init.ohasd up, but ohasd and the other daemons still would not start, and no socket files were created.
At boot, the OS runs S96ohasd, which waits for init.ohasd to write to the named pipe.
What happened here: init.ohasd started, then the socket files were removed manually; when ohasd is started again, it blocks forever waiting on socket files that no longer exist.
Clear all socket files under /var/tmp/.oracle (or /tmp/.oracle), if any exist, then open two terminals on the node where the stack is not coming up.
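A cautious way to clear the sockets is to move them aside rather than delete them; a sketch, assuming the Linux location and a timestamped backup directory of my own choosing (run as root with the stack fully down):

```shell
# Sketch: move stale Oracle socket files aside before restarting the stack.
# /var/tmp/.oracle is the usual location on Linux; some platforms use
# /tmp/.oracle. The backup path is an assumption; run as root.
SOCK_DIR=/var/tmp/.oracle
BACKUP=/var/tmp/oracle_sockets.bak.$(date +%Y%m%d%H%M%S)

if [ -d "$SOCK_DIR" ]; then
    mkdir -p "$BACKUP"
    mv "$SOCK_DIR"/* "$BACKUP"/ 2>/dev/null
    echo "moved sockets to $BACKUP"
fi
```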
1) On Terminal 1, issue as the root user:
crsctl start crs
2) Simultaneously, on Terminal 2 of the same node, issue the command below as the root user once the npohasd socket has been created:
/bin/dd if=/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
3) Back on Terminal 1, the CRS stack should now start coming up; verify with:
ps -ef |grep d.bin
4) Once the entire CRS stack is up, press CTRL+C to stop the dd command running in the second terminal.
Check and validate that all resources are online using:
crsctl stat res -t
crsctl stat res -t -init
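The two-terminal workaround relies on dd opening the read end of the npohasd named pipe so that init.ohasd's blocked write can proceed. The same dance can be done from a single shell by backgrounding the reader; a sketch, assuming the /tmp/.oracle location from the note (the wait loop and the kill at the end are my additions):

```shell
# Sketch: single-shell variant of the two-terminal workaround. A background
# reader waits for init.ohasd to create the npohasd pipe, then opens it so
# the blocked writer can proceed. Run as root; path as in the note above.
(
  while [ ! -p /tmp/.oracle/npohasd ]; do sleep 1; done
  dd if=/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
) &
READER=$!

crsctl start crs

# once the stack is up, stop the background reader if it is still waiting
kill "$READER" 2>/dev/null
```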
Source: ITPUB blog, http://blog.itpub.net/7728585/viewspace-1806208/. Please credit the source when reposting.