Installing Veritas Cluster Server 4.0

2009-01-02 17:22:46 | Category: veritas


 

  Hardware summary:

  1. Node1: Netra 20 (2 x UltraSPARC-III+, 2048 MB RAM, 2 x 72 GB hard disks)

  2. Node2: Netra 20 (2 x UltraSPARC-III+, 2048 MB RAM, 2 x 72 GB hard disks)

  3. Shared storage: D1000 (3 x 36 GB hard disks)

  I. Installing the Operating System

  Install the 2/04 release (February 2004) of Solaris 8. During installation, select English as the primary language.

  II. Installing the EIS-CD

  Install the 2/04 release of the EIS-CD, which sets up the environment variables used by the cluster.

  III. Installing Patches

  To avoid a problem of spuriously high CPU usage, install patch 117000-05, which can be downloaded from Sun's official web site. Unpacking the download creates a 117000-05 directory. Install the patch with the following command:

  patchadd 117000-05
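The download step can be sketched end to end. The archive name is an assumption (Sun patches were normally distributed as <patch-id>.zip), and showrev is the stock Solaris way to confirm the result:

```shell
# Sketch of the full patch workflow; the zip file name is assumed,
# since Sun patches typically shipped as <patch-id>.zip.
cd /var/tmp
unzip 117000-05.zip          # unpacking creates the 117000-05 directory
patchadd 117000-05
showrev -p | grep 117000     # the patch should now be listed
```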

  IV. Attaching the Shared Storage

  In this environment we use a Sun D1000 array as the shared storage. It is a SCSI array; a Fibre Channel array would require a different setup.

  1. Power on the disk array.

  2. Connect Node1 to the array with a SCSI cable.

  3. Power on Node1.

  4. Bring Node1 to the ok prompt (watch Node1's boot on the console window and press Ctrl+Break as soon as the boot messages appear).

  5. {0} ok probe-scsi-all

  6. {0} ok boot -r

  7. After the reboot, log in and use the format command to confirm that the array is visible to the system.

  8. Power off Node1, power on Node2, and repeat steps 4-7 to confirm the array is also visible to that machine.

  9. For both machines to read the shared storage at the same time, the SCSI initiator ID of one machine must be changed. Power on Node1 and bring it to the ok prompt. (At this point Node1, Node2, and the array are all powered on; the array is visible from Node2 via format, and Node1 is at the ok prompt.)

  10. Set Node1's SCSI ID to 5 (the default is 7):

  {0} ok setenv scsi-initiator-id 5

  11. The output from step 5 shows the system identifier of the SCSI controller, /pci@8,700000/scsi@2,1; this identifier is used in the next step.

  12. Make Node1's SCSI ID of 5 persistent with an NVRAM script. (Note that in the input on line 2, " scsi-initiator-id", there is a space right after the opening double quote!)

  {0} ok nvedit

  0: probe-all

  1: cd /pci@8,700000/scsi@2,1

  2: 5 " scsi-initiator-id" integer-property

  3: device-end

  4: install-console

  5: banner

  13. Save the configuration entered in step 12:

  {0} ok nvstore

  14. Set the OBP environment variables:

  {0} ok setenv use-nvramrc? true

  use-nvramrc? = true

  {0} ok setenv auto-boot? true

  auto-boot? = true

  15. Reset Node1:

  {0} ok reset-all

  16. Run the format command on both Node1 and Node2 to confirm that each has successfully attached the shared storage.

  V. Installing VCS

  1. Set up the .rhosts file. This must be done on both machines; Node1 is used as the example below.

  Create a .rhosts file under / to enable remote login, which allows VCS to be installed on both machines at once:

  root@uulab-s22 # echo "+" > /.rhosts

  root@uulab-s22 # more /.rhosts

  +
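With the file in place, trust can be sanity-checked from either node; this assumes the rsh service is enabled, which the installer relies on:

```shell
# Hypothetical check: with "+" in /.rhosts, rsh should work
# from Node1 to Node2 without a password prompt.
rsh uulab-s23 uname -n       # expect the remote hostname to print
```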

  2. Configure an additional NIC and IP address

  The heartbeat NICs in a cluster are best placed on a dedicated network device, so we configure a second NIC with its own IP address on each machine, reserved for the heartbeat. This must be done on both machines; Node1 is used as the example below.

  root@uulab-s22 # echo "uulab-p22" > /etc/hostname.qfe0

  root@uulab-s22 # touch /etc/notrouter

  root@uulab-s22 # vi /etc/hosts

  Add the following lines:

  192.168.0.6 uulab-p22

  192.168.0.8 uulab-p23

  root@uulab-s22 # vi /etc/netmasks

  Add the following line:

  192.168.0.0 255.255.255.0

  root@uulab-s22 # sync

  root@uulab-s22 # reboot
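The /etc/netmasks entry must name the network address that the heartbeat IPs fall into. A small portable sketch (illustrative only, not part of the install) of how that network address is derived from an address and mask:

```shell
# Derive the network address for 192.168.0.6 with mask 255.255.255.0
# by ANDing the octets; the result is the key used in /etc/netmasks.
ip="192.168.0.6"; mask="255.255.255.0"
IFS=. read -r a b c d <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
net="$((a & m1)).$((b & m2)).$((c & m3)).$((d & m4))"
echo "$net"                  # prints 192.168.0.0
```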

  3. Run the installer

  Installing VCS 4.0 has been simplified to a single command, which installs Veritas Volume Manager 4.0, Veritas File System 4.0, Veritas Cluster Server 4.0, and some related software together. The command only needs to be run on one node.

  root@uulab-s22 # cd /opt/sf_ha.4.0.sol/storage_foundation

  root@uulab-s22 # ./installsf

  The installation is mostly a matter of accepting defaults or typing in the prompted values; it is straightforward and not covered in detail here.
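A quick way to confirm what the installer laid down is to list the installed VERITAS packages, which all share the VRTS prefix:

```shell
# List the installed VERITAS packages
pkginfo | grep VRTS
# Show the version of the cluster server package itself
pkginfo -l VRTSvcs | grep VERSION
```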

  VI. Creating the Disk Group

  When the VCS installation finishes, it asks you to reboot all nodes with the following command:

  shutdown -y -i6 -g0

  After the reboot, create the disk group. The following commands only need to be run on one node.

  Use the format command to confirm that the shared disks to be added to the disk group are c4t0d0, c4t8d0, and c4t9d0:

  root@uulab-s22 # format

  Searching for disks...done

  AVAILABLE DISK SELECTIONS:

  0. c1t0d0

  /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c50569190,0

  1. c1t1d0

  /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c5056c1a7,0

  2. c4t0d0

  /pci@8,700000/scsi@2,1/sd@0,0

  3. c4t8d0

  /pci@8,700000/scsi@2,1/sd@8,0

  4. c4t9d0

  /pci@8,700000/scsi@2,1/sd@9,0

  Specify disk (enter its number):

  root@uulab-s22 # vxdisksetup -i c4t0d0

  If the command fails with the following error, see workaround 1 in the appendix:

  VxVM vxdisksetup ERROR V-5-2-3535 c4t0d0s2: Invalid dmpnodename for disk device c4t0d0.

  If the command fails with the following error, see workaround 2 in the appendix:

  VxVM vxdisksetup ERROR V-5-2-1813 c4t0d0: Disk is part of ipasdg disk group, use -f option to force setup.

  root@uulab-s22 # vxdisksetup -i c4t8d0

  root@uulab-s22 # vxdisksetup -i c4t9d0

  root@uulab-s22 # vxdg init hlrdg hlrdg-01=c4t0d0

  root@uulab-s22 # vxdg -g hlrdg adddisk hlrdg-02=c4t8d0

  root@uulab-s22 # vxdg -g hlrdg adddisk hlrdg-03=c4t9d0
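At this point the new group can be inspected; a sketch using the standard VxVM listing commands:

```shell
# hlrdg should appear as an enabled disk group
vxdg list
# the three disks should show up as hlrdg-01 .. hlrdg-03 in group hlrdg
vxdisk list
```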

  VII. Creating the Volumes

  root@uulab-s22 # vxassist -g hlrdg -b make oradata_vol 15g layout=nostripe,nolog nmirror=2 &

  root@uulab-s22 # vxassist -g hlrdg -b make oraredo_vol 5g layout=nostripe,nolog nmirror=2 &

  root@uulab-s22 # vxassist -g hlrdg -b make oraarch_vol 8g layout=nostripe,nolog nmirror=2 &

  root@uulab-s22 # vxassist -g hlrdg -b make hlr_vol 4g layout=nostripe,nolog nmirror=2 &
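Because -b backgrounds the mirror synchronization (and & backgrounds the command itself), the volumes exist immediately but keep syncing for a while. Progress can be watched with the usual VxVM tools:

```shell
# Running synchronization tasks started with vxassist -b
vxtask list
# Volume and plex states; mirrors show ENABLED/ACTIVE once synced
vxprint -g hlrdg -ht
```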

  VIII. Creating File Systems with VxFS

  root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oradata_vol

  root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oraredo_vol

  root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/oraarch_vol

  root@uulab-s22 # mkfs -F vxfs -o bsize=8192,largefiles /dev/vx/rdsk/hlrdg/hlr_vol
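Under VCS these file systems will be mounted by the Mount resources rather than /etc/vfstab, but a one-off manual mount is a reasonable sanity check of a newly made VxFS file system:

```shell
# Manually mount one of the new file systems, inspect it, unmount it.
mount -F vxfs /dev/vx/dsk/hlrdg/hlr_vol /mnt
df -k /mnt                   # should report a vxfs file system of about 4 GB
umount /mnt
```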

  IX. Configuring VCS

  Our environment has four file systems: oradata_vol, oraredo_vol, oraarch_vol, and hlr_vol. Before a file system can be used it must be mounted on a directory, so first create the mount points. This must be done on both machines; Node1 is used as the example below. (This assumes the dba group and the oracle user have already been created.)

  root@uulab-s22 # mkdir -p /opt/oracle/data

  root@uulab-s22 # mkdir -p /opt/oracle/redo

  root@uulab-s22 # mkdir -p /opt/oracle/arch

  root@uulab-s22 # mkdir -p /opt/hlr

  root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oradata_vol

  root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oraredo_vol

  root@uulab-s22 # chown oracle:dba /dev/vx/rdsk/hlrdg/oraarch_vol

  root@uulab-s22 # chown oracle:dba /opt/oracle/data

  root@uulab-s22 # chown oracle:dba /opt/oracle/redo

  root@uulab-s22 # chown oracle:dba /opt/oracle/arch

  Now modify /etc/VRTSvcs/conf/config/main.cf, the VCS configuration file. It can be changed either from the command line (with commands such as hagrp and hares) or directly with any text editor such as vi; here we use vi. The newly added lines are the resource and dependency definitions (they were shown in bold in the original formatting).

  After editing, the file looks like this:

  include "types.cf"

  cluster vcs_hlr_cluster (

  UserNames = { admin = hijBidIfjEjjHrjDig }

  ClusterAddress = "10.7.1.7"  // the virtual IP of the cluster environment

  Administrators = { admin }

  CounterInterval = 5

  )

  system uulab-s22 (

  )

  system uulab-s23 (

  )

  group ClusterService (

  SystemList = { uulab-s22 = 0, uulab-s23 = 1 }

  UserStrGlobal = "LocalCluster@https://10.7.1.7:8443;LocalCluster@https://10.7.1.7:8443;"

  AutoStartList = { uulab-s22, uulab-s23 }

  OnlineRetryLimit = 3

  OnlineRetryInterval = 120

  )

  DiskGroup hlrdg (

  DiskGroup = hlrdg

  MonitorReservation = 1

  )

  IP webip (

  Device = eri0

  Address = "10.7.1.7"

  NetMask = "255.255.0.0"

  )

  Mount arch_mnt (

  MountPoint = "/opt/oracle/arch"

  BlockDevice = "/dev/vx/dsk/hlrdg/oraarch_vol"

  FSType = vxfs

  FsckOpt = "-y"

  )

  Mount data_mnt (

  MountPoint = "/opt/oracle/data"

  BlockDevice = "/dev/vx/dsk/hlrdg/oradata_vol"

  FSType = vxfs

  FsckOpt = "-y"

  )

  Mount hlr_mnt (

  MountPoint = "/opt/hlr"

  BlockDevice = "/dev/vx/dsk/hlrdg/hlr_vol"

  FSType = vxfs

  FsckOpt = "-y"

  )

  Mount redo_mnt (

  MountPoint = "/opt/oracle/redo"

  BlockDevice = "/dev/vx/dsk/hlrdg/oraredo_vol"

  FSType = vxfs

  FsckOpt = "-y"

  )

  NIC csgnic (

  Device = eri0

  )

  VRTSWebApp VCSweb (

  Critical = 0

  AppName = vcs

  InstallDir = "/opt/VRTSweb/VERITAS"

  TimeForOnline = 5

  RestartLimit = 3

  )

  VCSweb requires webip

  arch_mnt requires hlr_mnt

  data_mnt requires redo_mnt

  hlr_mnt requires hlrdg

  redo_mnt requires arch_mnt

  webip requires csgnic

  When editing is complete, check the syntax with the following command:

  root@uulab-s22 # hacf -verify /etc/VRTSvcs/conf/config
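If the verify is clean, the new configuration can also be picked up without a full reboot; a sketch using the standard VCS commands (hastart is run on each node):

```shell
# Stop the engine on all systems, leaving applications running,
# then start it again so it rereads main.cf.
hastop -all -force
hastart                      # run on each node
hastatus -sum                # watch the systems and groups come up
```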

  X. Testing VCS

  When configuration is complete, reboot both machines at the same time. Once they are back up, check whether VCS is running properly with the following command:

  root@uulab-s22 # hares -display -group ClusterService

  Under normal conditions the output looks like this:

  #Resource Attribute System Value

  VCSweb Group global ClusterService

  VCSweb Type global VRTSWebApp

  VCSweb AutoStart global 1

  VCSweb Critical global 0

  VCSweb Enabled global 1

  VCSweb LastOnline global uulab-s22

  VCSweb MonitorOnly global 0

  VCSweb ResourceOwner global unknown

  VCSweb TriggerEvent global 0

  VCSweb ArgListValues uulab-s22 vcs /opt/VRTSweb/VERITAS 5

  VCSweb ArgListValues uulab-s23 vcs /opt/VRTSweb/VERITAS 5

  VCSweb ConfidenceLevel uulab-s22 100

  VCSweb ConfidenceLevel uulab-s23 0

  VCSweb Flags uulab-s22

  VCSweb Flags uulab-s23

  VCSweb IState uulab-s22 not waiting

  VCSweb IState uulab-s23 not waiting

  VCSweb Probed uulab-s22 1

  VCSweb Probed uulab-s23 1

  VCSweb Start uulab-s22 1

  VCSweb Start uulab-s23 0

  VCSweb State uulab-s22 ONLINE

  VCSweb State uulab-s23 OFFLINE

  VCSweb AppName global vcs

  VCSweb ComputeStats global 0

  VCSweb InstallDir global /opt/VRTSweb/VERITAS

  VCSweb ResourceInfo global State Valid Msg TS

  VCSweb RestartLimit global 3

  VCSweb TimeForOnline global 5

  VCSweb MonitorTimeStats uulab-s22 Avg 0 TS

  VCSweb MonitorTimeStats uulab-s23 Avg 0 TS

  #

  …………

  You can also use the df command to check that all of the required file systems are mounted on Node1, while none of them are visible on Node2.

  Run a switchover test:

  root@uulab-s22 # hagrp -switch ClusterService -to uulab-s23

  Under normal conditions the switch takes roughly five seconds, after which all resources, including the IP address and the file system mount points, move over to Node2.
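The switch can be watched from a second terminal; a sketch:

```shell
# Summary view: ClusterService should move from uulab-s22 to uulab-s23
hastatus -sum
# Per-system state of the group after the switch
hagrp -state ClusterService
```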

  XI. Appendix

  Workaround 1:

  root@uulab-s22 # vxdiskadm

  Volume Manager Support Operations

  Menu: VolumeManager/Disk

  1 Add or initialize one or more disks

  2 Encapsulate one or more disks

  3 Remove a disk

  4 Remove a disk for replacement

  5 Replace a failed or removed disk

  6 Mirror volumes on a disk

  7 Move volumes from a disk

  8 Enable access to (import) a disk group

  9 Remove access to (deport) a disk group

  10 Enable (online) a disk device

  11 Disable (offline) a disk device

  12 Mark a disk as a spare for a disk group

  13 Turn off the spare flag on a disk

  14 Unrelocate subdisks back to a disk

  15 Exclude a disk from hot-relocation use

  16 Make a disk available for hot-relocation use

  17 Prevent multipathing/Suppress devices from VxVM's view

  18 Allow multipathing/Unsuppress devices from VxVM's view

  19 List currently suppressed/non-multipathed devices

  20 Change the disk naming scheme

  21 Get the newly connected/zoned disks in VxVM view

  22 Change/Display the default disk layouts

  23 Mark a disk as allocator-reserved for a disk group

  24 Turn off the allocator-reserved flag on a disk

  list List disk information

  ? Display help about menu

  ?? Display help about the menuing system

  q Exit from menus

  Select an operation to perform: 17

  Exclude Devices

  Menu: VolumeManager/Disk/ExcludeDevices

  VxVM INFO V-5-2-1239

  This operation might lead to some devices being suppressed from VxVM's view

  or prevent them from being multipathed by vxdmp (This operation can be

  reversed using the vxdiskadm command).

  Do you want to continue ? [y,n,q,?] (default: y) y

  Volume Manager Device Operations

  Menu: VolumeManager/Disk/ExcludeDevices

  1 Suppress all paths through a controller from VxVM's view

  2 Suppress a path from VxVM's view

  3 Suppress disks from VxVM's view by specifying a VID:PID combination

  4 Suppress all but one paths to a disk

  5 Prevent multipathing of all disks on a controller by VxVM

  6 Prevent multipathing of a disk by VxVM

  7 Prevent multipathing of disks by specifying a VID:PID combination

  8 List currently suppressed/non-multipathed devices

  ? Display help about menu

  ?? Display help about the menuing system

  q Exit from menus

  Select an operation to perform: 5

  Exclude controllers from DMP

  Menu: VolumeManager/Disk/ExcludeDevices/CTLR-DMP

  Use this operation to exclude all disks on a controller from being multipathed

  by vxdmp.

  As a result of this operation, all disks having a path through the specified

  controller will be claimed in the OTHER_DISKS category and hence, not

  multipathed by vxdmp. This operation can be reversed using the vxdiskadm

  command.

  VxVM INFO V-5-2-1263

  You can specify a controller name at the prompt. A controller name is of

  the form c#, example c3, c11 etc. Enter 'all' to exclude all paths on all

  the controllers on the host. To see the list of controllers on the system,

  type 'list'.

  Enter a controller name [,all,list,list-exclude,q,?] c4

  VxVM INFO V-5-2-1129

  All disks on the following enclosures will be excluded from DMP ( ie

  claimed in the OTHER_DISKS category and hence not multipathed by vxdmp) as a

  result of this operation :

  Disk OTHER_DISKS

  Continue operation? [y,n,q,?] (default: y) y

  Do you wish to exclude more controllers ? [y,n,q,?] (default: n) n

  Volume Manager Device Operations

  Menu: VolumeManager/Disk/ExcludeDevices

  1 Suppress all paths through a controller from VxVM's view

  2 Suppress a path from VxVM's view

  3 Suppress disks from VxVM's view by specifying a VID:PID combination

  4 Suppress all but one paths to a disk

  5 Prevent multipathing of all disks on a controller by VxVM

  6 Prevent multipathing of a disk by VxVM

  7 Prevent multipathing of disks by specifying a VID:PID combination

  8 List currently suppressed/non-multipathed devices

  ? Display help about menu

  ?? Display help about the menuing system

  q Exit from menus

  Select an operation to perform: q

  VxVM vxdiskadm NOTICE V-5-2-1187 Please wait while the device suppression/unsuppression operations take effect.

Copyright © 1997-2017 NetEase