Sun Cluster 3.2 Installation Guide

2009-01-07 09:33:25 | Category: HA clustering software

I. Hardware:

Host: Sun M5000

   

    Array: one EMC storage array

II. Software:

Sun Cluster 3.2

SunCluster_HA_Oracle_3.2

III. Operating system:

Solaris 10

IV. Installation steps:

(Strictly follow the EIS Installation Checklist for SunCluster 3.x Systems.)

1. Install the OS per the customer's requirements.

Reserve a 512 MB /globaldevices slice for the cluster.

    Reserve a 30 MB slice for the SVM metadb (state database replicas).

Apply the latest patch set.
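The two reservations above can be sketched as commands. Slice c0t0d0s5 for /globaldevices matches the vfstab later in this document; c0t0d0s7 for the metadb replicas is an assumption. The commands are echoed rather than executed so the sketch is safe off a Solaris host:

```shell
# Dry-run sketch of the step-1 disk reservations. c0t0d0s7 is an assumed
# slice; adjust both to the real partition layout before running.
GLOBAL_SLICE=c0t0d0s5      # 512 MB slice for /globaldevices
METADB_SLICE=c0t0d0s7      # 30 MB slice for SVM state database replicas
echo "newfs /dev/rdsk/${GLOBAL_SLICE}"
echo "metadb -a -f -c 3 ${METADB_SLICE}"   # three replicas on the slice
```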

2. Check and configure the following files:

/etc/hosts

/etc/clusters

/etc/defaultdomain

/.rhosts

eeprom local_mac_address?=true

Configure /etc/system:

set ce:ce_taskq_disable=1               (when ce interfaces carry the cluster transport)

set md:mirror_root_flag=1

 

# ndd -set /dev/ip ip_strict_dst_multihoming 0

 

Modify the following files:

/kernel/drv/md.conf        adjust the number of metadevices and disksets.

/kernel/drv/scsi_vhci.conf        configure MPxIO.
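As a sketch, the two driver files might be edited like this. The nmd and md_nsets values are illustrative assumptions (the Solaris 10 defaults are nmd=128 and md_nsets=4); size them to the metadevice numbering this document uses (d500 and up, four disksets):

```
# /kernel/drv/md.conf -- raise the metadevice and diskset limits
name="md" parent="pseudo" nmd=2048 md_nsets=32;

# /kernel/drv/scsi_vhci.conf -- enable MPxIO multipathing
mpxio-disable="no";
```

A reconfiguration reboot (boot -r) is needed for these driver changes to take effect.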

 

* Once the preparation above is complete, SC 3.2 can be installed.

3. On node 1 (ismp-sun1):

Option:  1

 

 

  *** New Cluster and Cluster Node Menu ***

 

    Please select from any one of the following options:

 

        1) Create a new cluster

        2) Create just the first node of a new cluster on this machine

        3) Add this machine as a node in an existing cluster

 

        ?) Help with menu options

        q) Return to the Main Menu

 

    Option:  2

 

 

  *** Establish Just the First Node of a New Cluster ***

 

 

    This option is used to establish a new cluster using this machine as

    the first node in that cluster.

 

    Before you select this option, the Sun Cluster framework software

    must already be installed. Use the Java Enterprise System (JES)

    installer to install Sun Cluster software.

 

    Press Control-d at any time to return to the Main Menu.

 

 

    Do you want to continue (yes/no) [yes]? 

 

 

  >>> Typical or Custom Mode <<<

 

    This tool supports two modes of operation, Typical mode and Custom.

    For most clusters, you can use Typical mode. However, you might need

    to select the Custom mode option if not all of the Typical defaults

    can be applied to your cluster.

 

    For more information about the differences between Typical and Custom

    modes, select the Help option from the menu.

 

    Please select from one of the following options:

 

        1) Typical

        2) Custom

 

        ?) Help

        q) Return to the Main Menu

 

    Option [1]: 

 

 

  >>> Cluster Name <<<

 

    Each cluster has a name assigned to it. The name can be made up of

    any characters other than whitespace. Each cluster name should be

    unique within the namespace of your enterprise.

 

    What is the name of the cluster you want to establish?  cluster

 

 

  >>> Check <<<

 

    This step allows you to run sccheck(1M) to verify that certain basic

    hardware and software pre-configuration requirements have been met.

    If sccheck(1M) detects potential problems with configuring this

    machine as a cluster node, a report of failed checks is prepared and

    available for display on the screen. Data gathering and report

    generation can take several minutes, depending on system

    configuration.

 

    Do you want to run sccheck (yes/no) [yes]? 

 

    Running sccheck ...

 

scinstall: Requesting explorer data and node report from node1.

scinstall: node1: Explorer finished.

scinstall: node1: Starting single-node checks.

scinstall: node1: Single-node checks finished.

 

   

 

   

Press Enter to continue: 

 

 

  >>> Cluster Nodes <<<

 

    This Sun Cluster release supports a total of up to 16 nodes.

 

    Please list the names of the other nodes planned for the initial

    cluster configuration. List one node name per line. When finished,

    type Control-D:

 

    Node name (Control-D to finish):  node1 

    Node name (Control-D to finish):  node2

    Node name (Control-D to finish):  ^D

 

 

    This is the complete list of nodes:

 

        node1

        node2

 

    Is it correct (yes/no) [yes]? 

 

 

  >>> Cluster Transport Adapters and Cables <<<

 

    You must configure at least two cluster transport adapters for each

    node in the cluster. These are the adapters which attach to the

    private cluster interconnect.

 

    Select the first cluster transport adapter:

 

        1) e1000g1

        2) e1000g2

        3) Other

 

    Option:  1

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

    Searching for any unexpected network traffic on "e1000g1" ... done

    Verification completed. No traffic was detected over a 10 second

    sample period.

 

    Select the second cluster transport adapter:

 

        1) e1000g1

        2) e1000g2

        3) Other

 

    Option:  2

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

    Searching for any unexpected network traffic on "e1000g2" ... done

    Verification completed. No traffic was detected over a 10 second

    sample period.

 

 

 

  >>> Quorum Configuration <<<

 

    Every two-node cluster requires at least one quorum device. By

    default, scinstall will select and configure a shared SCSI quorum

    disk device for you.

 

    This screen allows you to disable the automatic selection and

    configuration of a quorum device.

 

    The only time that you must disable this feature is when ANY of the

    shared storage in your cluster is not qualified for use as a Sun

    Cluster quorum device. If your storage was purchased with your

    cluster, it is qualified. Otherwise, check with your storage vendor

    to determine whether your storage device is supported as Sun Cluster

    quorum device.

 

    If you disable automatic quorum device selection now, or if you

    intend to use a quorum device that is not a shared SCSI disk, you

    must instead use scsetup(1M) to manually configure quorum once both

    nodes have joined the cluster for the first time.

 

    Do you want to disable automatic quorum device selection (yes/no) [no]?  yes

 

 

  >>> Automatic Reboot <<<

 

    Once scinstall has successfully initialized the Sun Cluster software

    for this machine, the machine must be rebooted. After the reboot,

    this machine will be established as the first node in the new cluster.

 

    Do you want scinstall to reboot for you (yes/no) [yes]? 

 

 

  >>> Confirmation <<<

 

    Your responses indicate the following options to scinstall:

 

      scinstall -i \

           -C cluster \

           -F \

           -T node=node1,node=node2,authtype=sys \

           -w netaddr=172.16.0.0,netmask=255.255.248.0,maxnodes=64,maxprivatenets=10 \

           -A trtype=dlpi,name=e1000g1 -A trtype=dlpi,name=e1000g2 \

           -B type=switch,name=switch1 -B type=switch,name=switch2 \

           -m endpoint=:e1000g1,endpoint=switch1 \

           -m endpoint=:e1000g2,endpoint=switch2

 

    Are these the options you want to use (yes/no) [yes]? 

 

    Do you want to continue with this configuration step (yes/no) [yes]? 

 

 

Checking device to use for global devices file system ... done

 

Initializing cluster name to "cluster" ... done

Initializing authentication options ... done

Initializing configuration for adapter "e1000g1" ... done

Initializing configuration for adapter "e1000g2" ... done

Initializing configuration for switch "switch1" ... done

Initializing configuration for switch "switch2" ... done

Initializing configuration for cable ... done

Initializing configuration for cable ... done

Initializing private network address options ... done

 

 

Setting the node ID for "node1" ... done (id=1)

 

 

Checking for global devices global file system ... done

Updating vfstab ... done

 

Verifying that NTP is configured ... done

Initializing NTP configuration ... done

 

Updating nsswitch.conf ... done

 

Adding cluster node entries to /etc/inet/hosts ... done

 

 

Configuring IP multipathing groups ...done

 

Verifying that power management is NOT configured ... done

Unconfiguring power management ... done

/etc/power.conf has been renamed to /etc/power.conf.100408143602

Power management is incompatible with the HA goals of the cluster.

Please do not attempt to re-configure power management.

 

Ensure network routing is disabled ... done

Network routing has been disabled on this node by creating /etc/notrouter.

Having a cluster node act as a router is not supported by Sun Cluster.

Please do not re-enable network routing.

 

Log file - /var/cluster/logs/install/scinstall.log.1585

 

 

Rebooting ...

 

updating /platform/i86pc/boot_archive...this may take a minute

 

 

After ismp-sun1 has been installed and rebooted, install ismp-sun2:

 

  root@node2 # scinstall

 

  *** Main Menu ***

 

    Please select from one of the following (*) options:

 

      * 1) Create a new cluster or add a cluster node

        2) Configure a cluster to be JumpStarted from this install server

        3) Manage a dual-partition upgrade

        4) Upgrade this cluster node

        5) Print release information for this cluster node

 

      * ?) Help with menu options

      * q) Quit

 

    Option:  1

 

 

  *** New Cluster and Cluster Node Menu ***

 

    Please select from any one of the following options:

 

        1) Create a new cluster

        2) Create just the first node of a new cluster on this machine

        3) Add this machine as a node in an existing cluster

 

        ?) Help with menu options

        q) Return to the Main Menu

 

    Option:  3

 

 

  *** Add a Node to an Existing Cluster ***

 

 

    This option is used to add this machine as a node in an already

    established cluster. If this is a new cluster, there may only be a

    single node which has established itself in the new cluster.

 

    Before you select this option, the Sun Cluster framework software

    must already be installed. Use the Java Enterprise System (JES)

    installer to install Sun Cluster software.

 

    Press Control-d at any time to return to the Main Menu.

 

 

    Do you want to continue (yes/no) [yes]? 

 

 

  >>> Typical or Custom Mode <<<

 

    This tool supports two modes of operation, Typical mode and Custom.

    For most clusters, you can use Typical mode. However, you might need

    to select the Custom mode option if not all of the Typical defaults

    can be applied to your cluster.

 

    For more information about the differences between Typical and Custom

    modes, select the Help option from the menu.

 

    Please select from one of the following options:

 

        1) Typical

        2) Custom

 

        ?) Help

        q) Return to the Main Menu

 

    Option [1]: 

 

 

  >>> Sponsoring Node <<<

 

    For any machine to join a cluster, it must identify a node in that

    cluster willing to "sponsor" its membership in the cluster. When

    configuring a new cluster, this "sponsor" node is typically the first

    node used to build the new cluster. However, if the cluster is

    already established, the "sponsoring" node can be any node in that

    cluster.

 

    Already established clusters can keep a list of hosts which are able

    to configure themselves as new cluster members. This machine should

    be in the join list of any cluster which it tries to join. If the

    list does not include this machine, you may need to add it by using

    claccess(1CL) or other tools.

 

    And, if the target cluster uses DES to authenticate new machines

    attempting to configure themselves as new cluster members, the

    necessary encryption keys must be configured before any attempt to

    join.

 

    What is the name of the sponsoring node?  node1

 

 

  >>> Cluster Name <<<

 

    Each cluster has a name assigned to it. When adding a node to the

    cluster, you must identify the name of the cluster you are attempting

    to join. A sanity check is performed to verify that the "sponsoring"

    node is a member of that cluster.

 

    What is the name of the cluster you want to join [cluster]? 

 

    Attempting to contact "node1" ... done

 

    Cluster name "cluster" is correct.

   

Press Enter to continue: 

 

 

  >>> Check <<<

 

    This step allows you to run sccheck(1M) to verify that certain basic

    hardware and software pre-configuration requirements have been met.

    If sccheck(1M) detects potential problems with configuring this

    machine as a cluster node, a report of failed checks is prepared and

    available for display on the screen. Data gathering and report

    generation can take several minutes, depending on system

    configuration.

 

    Do you want to run sccheck (yes/no) [yes]?  no

 

 

  >>> Autodiscovery of Cluster Transport <<<

 

    If you are using Ethernet or Infiniband adapters as the cluster

    transport adapters, autodiscovery is the best method for configuring

    the cluster transport.

 

    Do you want to use autodiscovery (yes/no) [yes]?  no

 

 

  >>> Cluster Transport Adapters and Cables <<<

 

    You must configure at least two cluster transport adapters for each

    node in the cluster. These are the adapters which attach to the

    private cluster interconnect.

 

    Select the first cluster transport adapter:

 

        1) e1000g1

        2) e1000g2

        3) Other

 

    Option:  2

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

    Select the second cluster transport adapter:

 

        1) e1000g1

        2) e1000g2

        3) Other

 

    Option:  1

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]?  ^C

 

 

 

scinstall:  scinstall did NOT complete successfully!

 

 

Log file - /var/cluster/logs/install/scinstall.log.5910

 

root@node2 # scinstall

 

  *** Main Menu ***

 

    Please select from one of the following (*) options:

 

      * 1) Create a new cluster or add a cluster node

        2) Configure a cluster to be JumpStarted from this install server

        3) Manage a dual-partition upgrade

        4) Upgrade this cluster node

        5) Print release information for this cluster node

 

      * ?) Help with menu options

      * q) Quit

 

    Option:  1

 

 

  *** New Cluster and Cluster Node Menu ***

 

    Please select from any one of the following options:

 

        1) Create a new cluster

        2) Create just the first node of a new cluster on this machine

        3) Add this machine as a node in an existing cluster

 

        ?) Help with menu options

        q) Return to the Main Menu

 

    Option:  3

 

 

  *** Add a Node to an Existing Cluster ***

 

 

    This option is used to add this machine as a node in an already

    established cluster. If this is a new cluster, there may only be a

    single node which has established itself in the new cluster.

 

    Before you select this option, the Sun Cluster framework software

    must already be installed. Use the Java Enterprise System (JES)

    installer to install Sun Cluster software.

 

    Press Control-d at any time to return to the Main Menu.

 

 

    Do you want to continue (yes/no) [yes]? 

 

 

  >>> Typical or Custom Mode <<<

 

    This tool supports two modes of operation, Typical mode and Custom.

    For most clusters, you can use Typical mode. However, you might need

    to select the Custom mode option if not all of the Typical defaults

    can be applied to your cluster.

 

    For more information about the differences between Typical and Custom

    modes, select the Help option from the menu.

 

    Please select from one of the following options:

 

        1) Typical

        2) Custom

 

        ?) Help

        q) Return to the Main Menu

 

    Option [1]: 

 

 

  >>> Sponsoring Node <<<

 

    For any machine to join a cluster, it must identify a node in that

    cluster willing to "sponsor" its membership in the cluster. When

    configuring a new cluster, this "sponsor" node is typically the first

    node used to build the new cluster. However, if the cluster is

    already established, the "sponsoring" node can be any node in that

    cluster.

 

    Already established clusters can keep a list of hosts which are able

    to configure themselves as new cluster members. This machine should

    be in the join list of any cluster which it tries to join. If the

    list does not include this machine, you may need to add it by using

    claccess(1CL) or other tools.

 

    And, if the target cluster uses DES to authenticate new machines

    attempting to configure themselves as new cluster members, the

    necessary encryption keys must be configured before any attempt to

    join.

 

    What is the name of the sponsoring node?  node1

 

 

  >>> Cluster Name <<<

 

    Each cluster has a name assigned to it. When adding a node to the

    cluster, you must identify the name of the cluster you are attempting

    to join. A sanity check is performed to verify that the "sponsoring"

    node is a member of that cluster.

 

    What is the name of the cluster you want to join [cluster]? 

 

    Attempting to contact "node1" ... done

 

    Cluster name "cluster" is correct.

   

Press Enter to continue: 

 

 

  >>> Check <<<

 

    This step allows you to run sccheck(1M) to verify that certain basic

    hardware and software pre-configuration requirements have been met.

    If sccheck(1M) detects potential problems with configuring this

    machine as a cluster node, a report of failed checks is prepared and

    available for display on the screen. Data gathering and report

    generation can take several minutes, depending on system

    configuration.

 

    Do you want to run sccheck (yes/no) [yes]?  no

 

 

  >>> Autodiscovery of Cluster Transport <<<

 

    If you are using Ethernet or Infiniband adapters as the cluster

    transport adapters, autodiscovery is the best method for configuring

    the cluster transport.

 

    Do you want to use autodiscovery (yes/no) [yes]?  no

 

 

  >>> Cluster Transport Adapters and Cables <<<

 

    You must configure at least two cluster transport adapters for each

    node in the cluster. These are the adapters which attach to the

    private cluster interconnect.

 

    Select the first cluster transport adapter:

 

        1) e1000g1

        2) e1000g2

        3) Other

 

    Option:  1

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

    Select the second cluster transport adapter:

 

        1) e1000g1

        2) e1000g2

        3) Other

 

    Option:  2

 

    Will this be a dedicated cluster transport adapter (yes/no) [yes]? 

 

 

  >>> Automatic Reboot <<<

 

    Once scinstall has successfully initialized the Sun Cluster software

    for this machine, the machine must be rebooted. The reboot will cause

    this machine to join the cluster for the first time.

 

    Do you want scinstall to reboot for you (yes/no) [yes]? 

 

 

  >>> Confirmation <<<

 

    Your responses indicate the following options to scinstall:

 

      scinstall -i \

           -C cluster \

           -N node1 \

           -A trtype=dlpi,name=e1000g1 -A trtype=dlpi,name=e1000g2 \

           -m endpoint=:e1000g1,endpoint=switch1 \

           -m endpoint=:e1000g2,endpoint=switch2

 

    Are these the options you want to use (yes/no) [yes]? 

 

    Do you want to continue with this configuration step (yes/no) [yes]? 

 

 

Checking device to use for global devices file system ... done

 

Adding node "node2" to the cluster configuration ... done

Adding adapter "e1000g1" to the cluster configuration ... done

Adding adapter "e1000g2" to the cluster configuration ... done

Adding cable to the cluster configuration ... done

Adding cable to the cluster configuration ... done

 

Copying the config from "node1" ... done

 

Copying the postconfig file from "node1" if it exists ... done

No postconfig file found on "node1", continuing

done

 

 

Setting the node ID for "node2" ... done (id=2)

 

Verifying the major number for the "did" driver with "node1" ... done

 

Checking for global devices global file system ... done

Updating vfstab ... done

 

Verifying that NTP is configured ... done

Initializing NTP configuration ... done

 

Updating nsswitch.conf ... done

 

Adding cluster node entries to /etc/inet/hosts ... done

 

 

Configuring IP multipathing groups ...done

 

Verifying that power management is NOT configured ... done

Unconfiguring power management ... done

/etc/power.conf has been renamed to /etc/power.conf.100408144334

Power management is incompatible with the HA goals of the cluster.

Please do not attempt to re-configure power management.

 

Ensure network routing is disabled ... done

Network routing has been disabled on this node by creating /etc/notrouter.

Having a cluster node act as a router is not supported by Sun Cluster.

Please do not re-enable network routing.

 

Updating file ("ntp.conf.cluster") on node node1 ... done

Updating file ("hosts") on node node1 ... done

 

Log file - /var/cluster/logs/install/scinstall.log.6845

 

 

Rebooting ...

 

updating /platform/i86pc/boot_archive...this may take a minute

 

Add node 3 (ismp-sun3) to the cluster following the same steps.

 

 

 

 

Create the disksets and mirror each node's system disk

Mirroring the ismp-sun1 system disk

Metadevice allocation for the system disks and disksets:

1: d200 d300 d400    assigned to the disksets

2: d110-d150    assigned to ismp-sun1

3: d210-d250    assigned to ismp-sun2

4: d310-d350    assigned to ismp-sun3

 

metainit    -f d110 1 1 c0t0d0s0

metainit    -f d111 1 1 c0t0d0s1

metainit    -f d113 1 1 c0t0d0s3

metainit    -f d114 1 1 c0t0d0s4

metainit    -f d115 1 1 c0t0d0s5

             

             

metainit    -f d120 1 1 c0t1d0s0

metainit    -f d121 1 1 c0t1d0s1

metainit    -f d123 1 1 c0t1d0s3

metainit    -f d124 1 1 c0t1d0s4

metainit    -f d125 1 1 c0t1d0s5

 

metainit d130 -m d110

metainit d131 -m d111

metainit d133 -m d113

metainit d134 -m d114

metainit d135 -m d115

 

 

metaroot d130
lockfs -fa

Edit /etc/vfstab to read:

/dev/md/dsk/d131        -       -       swap    -       no      -
/dev/md/dsk/d130        /dev/md/rdsk/d130       /       ufs     1       no      -
/dev/md/dsk/d134        /dev/md/rdsk/d134       /export/home    ufs     2       yes     -
#/dev/dsk/c0t0d0s5      /dev/rdsk/c0t0d0s5      /globaldevices  ufs     2       yes     -
/dev/md/dsk/d133        /dev/md/rdsk/d133       /opt    ufs     2       yes     -
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
/dev/md/dsk/d135        /dev/md/rdsk/d135       /global/.devices/node@1 ufs     2       no      global

 

After the system reboots, attach the second submirrors:

metattach d130 d120
metattach d131 d121
metattach d133 d123
metattach d134 d124
metattach d135 d125

 

Repeat the same steps on ismp-sun2 and ismp-sun3.

 

 

Diskset configuration steps:

Oracle diskset:

1: metaset -s dg_ora1 -a -h ismp-sun3 ismp-sun2        (first create an empty diskset)

2: metaset -s dg_ora1 -a /dev/did/rdsk/d8s0 (d9s0 d10s0 d11s0 d12s0 d24s0)

3: metainit -s dg_ora1 d300 1 1 /dev/did/rdsk/d11s0

   metainit -s dg_ora1 d200 1 1 /dev/did/rdsk/d12s0

   metainit -s dg_ora1 d500 4 1 /dev/did/rdsk/d8s0 1 /dev/did/rdsk/d9s0 1 /dev/did/rdsk/d10s0 1 /dev/did/rdsk/d12s0

   Then create soft partitions: metainit -s dg_ora1 d501 -p d500 10g ...

4: newfs /dev/md/dg_ora1/rdsk/d300

5: metaset -s dg_ora1 -a -m ismp-sun2,ismp-sun3

 

Service diskset:

1: metaset -s dg_cc -a -h ismp-sun1 ismp-sun2

2: metaset -s dg_cc -a /dev/did/rdsk/d6s0 /dev/did/rdsk/d7s0

3: metainit -s dg_cc d100 2 1 /dev/did/rdsk/d6s0 1 /dev/did/rdsk/d7s0

4: newfs /dev/md/dg_cc/rdsk/d100

5: metaset -s dg_cc -a -m ismp-sun1 ismp-sun2
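Both disksets follow the same three-part recipe: create the set with its hosts, add the DID devices, then add mediators. A hypothetical echo-only helper (the function name mk_diskset_cmds is mine, not from the document) shows the pattern:

```shell
# Print the diskset recipe used above for a given set name, host pair,
# and list of DID devices. Echo-only: nothing touches the system.
mk_diskset_cmds() {
  set_name=$1; host_a=$2; host_b=$3; shift 3
  echo "metaset -s ${set_name} -a -h ${host_a} ${host_b}"   # empty diskset
  for d in "$@"; do
    echo "metaset -s ${set_name} -a ${d}"                   # add DID devices
  done
  echo "metaset -s ${set_name} -a -m ${host_a},${host_b}"   # add mediators
}

mk_diskset_cmds dg_cc ismp-sun1 ismp-sun2 /dev/did/rdsk/d6s0 /dev/did/rdsk/d7s0
```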

 

10. Register and configure rg_ora

 

root@dbserver1 # scrgadm -a -t SUNW.oracle_server

 

root@dbserver1 # scrgadm -a -t SUNW.oracle_listener

 

root@dbserver1 # scrgadm -a -g rg_ora -h ismp-sun3,ismp-sun2        (create the resource group)

 

root@dbserver1 # scrgadm -a -L -g rg_ora -l cluster_ora        (create the logical floating IP)

 

Configure the mount points:

root@dbserver1 # scrgadm -a -t SUNW.HAStoragePlus

 

root@dbserver1 # scrgadm -a -j rg_ora_1 -g rg_ora -t SUNW.HAStoragePlus \
        -x FilesystemMountPoints=/ora_backup,/ora_arch \
        -x AffinityOn=true

 

Configure oracle_server:

root@dbserver1 # scrgadm -a -j rg_ora_server -t SUNW.oracle_server \
        -g rg_ora \
        -x ORACLE_HOME=/export/home/oracle/oracle92 \
        -x ORACLE_SID=hbismp \
        -x Alert_log_file=/export/home/oracle/oracle92/admin/hbismp/bdump/alert_hbismp.log \
        -y Resource_dependencies=ora-server-mount

 

Configure oracle_listener:

root@dbserver1 # scrgadm -a -j rg_ora_lisn -t SUNW.oracle_listener \
        -g rg_ora \
        -x ORACLE_HOME=/export/home/oracle/oracle92 \
        -x LISTENER_NAME=LISTENER

 

 

 

 

root@dbserver1 # scswitch -Z -g rg_ora        (bring the resource group online)

11. Testing and maintaining rg_ora

root@dbserver1 # scstat -p                              # show current resource status
root@dbserver1 # scswitch -F -g rg_ora                  # take rg_ora offline on both nodes
root@dbserver1 # scswitch -Z -g rg_ora                  # bring rg_ora online (defaults to ismp-sun3)
root@dbserver1 # scswitch -z -g rg_ora -h ismp-sun3     # switch rg_ora to ismp-sun3

root@ismp-sun3 # scstat

------------------------------------------------------------------

 

-- Cluster Nodes --

 

                    Node name           Status

                    ---------           ------

  Cluster node:     ismp-sun1           Online

  Cluster node:     ismp-sun2           Online

  Cluster node:     ismp-sun3           Online

 

------------------------------------------------------------------

 

-- Cluster Transport Paths --

 

                    Endpoint               Endpoint               Status

                    --------               --------               ------

  Transport path:   ismp-sun1:bge3         ismp-sun2:bge3         Path online

  Transport path:   ismp-sun1:bge2         ismp-sun2:bge2         Path online

  Transport path:   ismp-sun1:bge3         ismp-sun3:bge3         Path online

  Transport path:   ismp-sun1:bge2         ismp-sun3:bge2         Path online

  Transport path:   ismp-sun2:bge3         ismp-sun3:bge3         Path online

  Transport path:   ismp-sun2:bge2         ismp-sun3:bge2         Path online

 

------------------------------------------------------------------

 

-- Quorum Summary --

 

  Quorum votes possible:      5

  Quorum votes needed:        3

  Quorum votes present:       5

 

 

-- Quorum Votes by Node --

 

                    Node Name           Present Possible Status

                    ---------           ------- -------- ------

  Node votes:       ismp-sun1           1        1       Online

  Node votes:       ismp-sun2           1        1       Online

  Node votes:       ismp-sun3           1        1       Online

 

 

-- Quorum Votes by Device --

 

                    Device Name         Present Possible Status

                    -----------         ------- -------- ------

  Device votes:     /dev/did/rdsk/d23s2 2        2       Online

 

------------------------------------------------------------------

 

-- Device Group Servers --

 

                         Device Group        Primary             Secondary

                         ------------        -------             ---------

  Device group servers:  rmt/1               -                   -

  Device group servers:  rmt/2               -                   -

  Device group servers:  rmt/3               -                   -

  Device group servers:  dg_cc               ismp-sun1           ismp-sun3

  Device group servers:  dg_ora1             ismp-sun3           ismp-sun2

 

 

-- Device Group Status --

 

                              Device Group        Status             

                              ------------        ------             

  Device group status:        rmt/1               Offline

  Device group status:        rmt/2               Offline

  Device group status:        rmt/3               Offline

  Device group status:        dg_cc               Online

  Device group status:        dg_ora1             Online

 

 

-- Multi-owner Device Groups --

 

                              Device Group        Online Status

                              ------------        -------------

 

------------------------------------------------------------------

 

-- Resource Groups and Resources --

 

            Group Name     Resources

            ----------     ---------

 Resources: rg_cc          rg_cc_1 cluster_cc asb_cc

 Resources: rg_ora         cluster_ora rg_ora_1 rg_ora_2 rg_ora_server rg_ora_lisn

 

 

-- Resource Groups --

 

            Group Name     Node Name                State          Suspended

            ----------     ---------                -----          ---------

     Group: rg_cc          ismp-sun1                Online         No

     Group: rg_cc          ismp-sun2                Offline        No

 

     Group: rg_ora         ismp-sun2                Offline        No

     Group: rg_ora         ismp-sun3                Online         No

 

 

-- Resources --

 

            Resource Name  Node Name                State          Status Message

            -------------  ---------                -----          --------------

  Resource: rg_cc_1        ismp-sun1                Online         Online

  Resource: rg_cc_1        ismp-sun2                Offline        Offline

 

  Resource: cluster_cc     ismp-sun1                Online         Online - LogicalHostname online.

  Resource: cluster_cc     ismp-sun2                Offline        Offline

 

  Resource: asb_cc         ismp-sun1                Online         Online

  Resource: asb_cc         ismp-sun2                Offline        Offline

 

  Resource: cluster_ora    ismp-sun2                Offline        Offline

  Resource: cluster_ora    ismp-sun3                Online         Online - LogicalHostname online.

 

  Resource: rg_ora_1       ismp-sun2                Offline        Offline

  Resource: rg_ora_1       ismp-sun3                Online         Online

 

  Resource: rg_ora_2       ismp-sun2                Offline        Offline

  Resource: rg_ora_2       ismp-sun3                Online         Online

 

  Resource: rg_ora_server  ismp-sun2                Offline        Offline

  Resource: rg_ora_server  ismp-sun3                Online         Online

 

  Resource: rg_ora_lisn    ismp-sun2                Offline        Offline

  Resource: rg_ora_lisn    ismp-sun3                Online         Online

 

------------------------------------------------------------------

 

-- IPMP Groups --

 

              Node Name           Group   Status         Adapter   Status

              ---------           -----   ------         -------   ------

  IPMP Group: ismp-sun1           sc_ipmp0 Online         bge1      Offline

  IPMP Group: ismp-sun1           sc_ipmp0 Online         bge0      Online

 

  IPMP Group: ismp-sun2           sc_ipmp0 Online         bge1      Offline

  IPMP Group: ismp-sun2           sc_ipmp0 Online         bge0      Online

 

  IPMP Group: ismp-sun3           sc_ipmp0 Online         bge1      Offline

  IPMP Group: ismp-sun3           sc_ipmp0 Online         bge0      Online

 

------------------------------------------------------------------
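The quorum summary above follows the majority rule: with N total votes, the cluster needs floor(N/2)+1 votes present to stay up. The arithmetic for this 5-vote configuration (three node votes plus a quorum device worth one less than its attached node count) can be checked directly:

```shell
# Majority-quorum arithmetic matching the scstat output above.
node_votes=3          # one vote per cluster node
qd_votes=2            # quorum device d23s2: (3 attached nodes - 1) votes
total=$((node_votes + qd_votes))
needed=$((total / 2 + 1))
echo "possible=${total} needed=${needed}"    # possible=5 needed=3
```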


root@ismp-sun3 # ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1

        inet 127.0.0.1 netmask ff000000

bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2

        inet 192.168.2.23 netmask ffffff00 broadcast 192.168.2.255

        groupname sc_ipmp0

        ether 0:14:4f:ea:e2:22

bge0:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2

        inet 192.168.2.123 netmask ffffff00 broadcast 192.168.2.255

bge0:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 2

        inet 192.168.2.29 netmask ffffff00 broadcast 192.168.2.255

bge1: flags=39040803<UP,BROADCAST,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY> mtu 1500 index 3

        inet 192.168.2.223 netmask ffffff00 broadcast 192.168.2.255

        groupname sc_ipmp0

        ether 0:14:4f:ea:e2:23

bge2: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5

        inet 172.16.0.131 netmask ffffff80 broadcast 172.16.0.255

        ether 0:14:4f:ea:cf:a0

bge3: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4

        inet 172.16.1.3 netmask ffffff80 broadcast 172.16.1.127

        ether 0:14:4f:ea:cf:a1

clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 6

        inet 172.16.4.3 netmask fffffe00 broadcast 172.16.5.255

        ether 0:0:0:0:0:3

sppp0: flags=10010008d1<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 7

        inet 192.168.100.2 --> 192.168.100.1 netmask ffffff00

        ether 0

 

 

 

Notes:

1: If problems occur during configuration, roll back with the following procedure:

Document ID:50093

This procedure should be followed for removing Sun Cluster 3.0 Update 3 (SC3.0 U3) when resource groups are already configured. This procedure is handy when a reinstall of Sun Cluster is needed, but a reload of the operating system is not desired.

1) Offline all resource groups (RGs):

# scswitch -F -g <resource_group> [,<resource_group>...]      

2) Disable all configured resources:

# scswitch -n -j <resource-name> [,<resource-name>...]      

3) Remove all resources from the resource group:

# scrgadm -r -j <resource-name>      

4) Remove the now empty resource groups:

# scrgadm -r -g <resource_group>      

5) Edit the /etc/vfstab file and remove all "global device mounts" but do not remove "/node@nodeid" mount options.

6) Remove all device groups:

# scstat -D (to get a list of device groups)
# scswitch -F -D device-group-name (to offline device-group)
# scconf -r -D name=device-group-name (to remove/unregister)
      

NOTE: If there are any "rmt" devices, they must be removed with the command:

# /usr/cluster/dtk/bin/dcs_config -c remove -s rmt/1      

This assumes that you have the package "SUNWscdtk" installed. If you do not, you will need to install it in order to remove the rmt/XX entries, or the "scinstall -r" will fail.

7) Uninstall the SC 3.0 software:

# scinstall -r

                                           

 

2: It is recommended to back up the system once installation is complete.

 

 

 

 

 

 

 

 

 

 

 

 

 

Maintenance notes:

Bring the rg_ora resource group online:

scswitch -Z -g rg_ora

Bring the rg_cc resource group online:

scswitch -Z -g rg_cc

Take a resource group offline:

scswitch -F -g rgname        (rgname is your resource group's name)

Switch a resource group between nodes:

scswitch -z -g rg_ora -h ismp-sun3    ** switch rg_ora to node ismp-sun3
scswitch -z -g rg_ora -h ismp-sun2    ** switch rg_ora to node ismp-sun2

scswitch -z -g rg_cc -h ismp-sun1     ** switch rg_cc to node ismp-sun1
scswitch -z -g rg_cc -h ismp-sun2     ** switch rg_cc to node ismp-sun2
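The switchover commands above can be wrapped in a small guard that rejects node names outside this cluster before invoking scswitch. switch_rg is a hypothetical echo-only helper, not part of Sun Cluster:

```shell
# Echo the scswitch command for a resource-group switchover, refusing
# node names outside this cluster's membership.
switch_rg() {
  rg=$1; node=$2
  case $node in
    ismp-sun1|ismp-sun2|ismp-sun3)
      echo "scswitch -z -g ${rg} -h ${node}" ;;
    *)
      echo "unknown node: ${node}" >&2; return 1 ;;
  esac
}

switch_rg rg_ora ismp-sun3
```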

 

 
