Thursday, May 7, 2009

Major difference between Linux and Solaris

Introduction


Most readers coming here are more familiar with Linux than Solaris. This little page is designed to give those users some tips for running a Solaris box.
It is in no way intended as a definitive guide, and there may well be better ways of doing things. Feel free to contribute.

ps

ps -ef on Linux gives you the full command line, but on Solaris it gets truncated.
This is particularly critical when looking at Java processes.
Fortunately, Solaris retains the BSD-style binaries in /usr/ucb, so execute:


 /usr/ucb/ps wwaux|grep java


instead


bash

/bin/sh on Solaris is a POSIX-compliant Bourne shell, not bash. If you have written bash-centric scripts, replace #!/bin/sh with #!/bin/bash.
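
For example, if the first line of a script (backup.sh is just a hypothetical name) reads:

#!/bin/sh

change it to:

#!/bin/bash

or run the script explicitly with bash backup.sh.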

NFS

Starting the NFS daemon

Instead of

# service nfsserver start


one does:

# svcadm enable network/nfs/server

Exports

Instead of the file /etc/exports, under Solaris the file /etc/dfs/dfstab needs to be edited, for example as shown below.
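
A share entry can be appended to /etc/dfs/dfstab and published with shareall (the path and options below are just placeholders):

# echo 'share -F nfs -o rw -d "shared folder" /export/shared' >> /etc/dfs/dfstab
# shareall
# share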

Sharing with zfs

Nowadays, you would probably use the zfs sharenfs property instead, e.g.:



# zfs set sharenfs=on zpool/sharedfolder
# zfs set sharenfs=rw=server.fqdn.ch:otherserver.fqdn.ch zpool/sharedfolder
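
To double-check what ended up being shared (dataset name as above):

# zfs get sharenfs zpool/sharedfolder
# share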


An attempt at a matrix

Linux command             Solaris equivalent                Comments
top                       prstat                            you can compile top on Solaris, but you can't rely on its accuracy
free                      vmstat
cat /proc/meminfo         prtconf | grep Memory
cat /proc/cpuinfo         psrinfo -v                        you can also use prtconf
netstat -p                lsof -i                           lsof is not a default command; you need to install the package
uname -a                  isainfo -b                        tells you whether the OS is running 32-bit or 64-bit
java -d64                 java -d64                         starts Java in 64-bit mode; Linux seems to do this by default
cat /etc/redhat-release   cat /etc/release
(no direct equivalent)    sysdef                            sysdef holds a lot of system info, including kernel tunables
lsmod                     modinfo
strace                    truss

Another matrix

RHEL                                   Solaris

Shutdown
shutdown -h now (or) poweroff          shutdown -y -g0 -i5 (or) init 5
reboot                                 reboot (or) shutdown -y -g0 -i6 (or) init 6
halt                                   halt

Kernel
/sbin/lsmod                            modinfo
/sbin/insmod                           modload
/sbin/rmmod                            modunload
scanpci                                /usr/X11/bin/scanpci (or) prtconf -v

Printing
lp (or) lpr                            lp (or /usr/ucb/lpr)
lpstat (or) lpq                        lpstat (or /usr/ucb/lpq)

Services
/sbin/service --status-all             svcs -a
/sbin/service sendmail stop            svcadm disable sendmail
/sbin/service sendmail start           svcadm enable sendmail
/sbin/service sendmail status          svcs sendmail
/sbin/chkconfig --list                 svcs -a
/sbin/chkconfig --add /etc/rc3.d/f00   svccfg import f00.xml
/sbin/chkconfig sendmail on            svcadm enable sendmail

Monitoring
top                                    prstat
cat /proc/cpuinfo                      psrinfo -v
cat /proc/meminfo                      prtconf

NFS
exportfs                               exportfs (or) share
(edit /etc/exports)                    share /home (or) zfs sharenfs=on
(edit /etc/exports)                    unshare /home (or) zfs sharenfs=off

Networking
/sbin/mii-tool                         ndd (or) /sbin/dladm show-dev
ifconfig                               ifconfig -a
/sbin/ethtool                          ndd
/sbin/dhclient                         dhcpagent
iptables                               ipfilter

Storage
fdisk                                  fdisk (and) format
parted                                 format
mkfs -t ext3 /dev/hda1                 mkfs -F ufs /dev/rdsk/c0t0d0s0 (or) newfs /dev/rdsk/c0t0d0s0
cdrecord dev=2,0 f00.iso               cdrw -i f00.iso
tar xfvj f00.tar.bz2                   gtar xfvj f00.tar.bz2
lvm/pv*/lv*/vg*                        meta*

Dev
(edit /etc/ld.so.conf)                 crle
gcc                                    /opt/csw/bin/gcc
ld                                     /usr/ccs/bin/ld

Wednesday, May 6, 2009

Configuring Sun Cluster 3.2 with Oracle 10g RAC and ASM

Diego E. Aguirre, October 2008



This installation was carried out in the following scenario:


Two physical domains on two Sun Fire 25K servers, each with:

  • 64 GB of RAM
  • 8 dual-core UltraSPARC IV+ CPUs at 1.8 GHz
  • 4 internal disks
  • Connected to EMC storage through a SAN (EMC Symmetrix DMX3)
  • Solaris 10 08/07 release plus OEM
  • Oracle Database version 10.2.0.3
  • Veritas Storage Foundation 5.0 MP1 by Symantec

Important: verify that patch 126106-13 is NOT installed, since it has a bug with scalable resources when they are registered.
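
A quick way to check whether that patch is present on a node (showrev -p lists installed patches on Solaris 10):

nodo1 # showrev -p | grep 126106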



Solaris requirements:

Have the packages of the Full Distribution Plus OEM installed.

Create a 512 MB partition on slice 6 of the OS disk and mount it as
/globaldevices, for example as sketched below.
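
A minimal sketch, assuming the OS disk is c0t0d0 and slice 6 has already been sized with format (adjust the device names to your layout):

nodo1 # newfs /dev/rdsk/c0t0d0s6
nodo1 # mkdir /globaldevices
nodo1 # echo "/dev/dsk/c0t0d0s6 /dev/rdsk/c0t0d0s6 /globaldevices ufs 2 yes -" >> /etc/vfstab
nodo1 # mount /globaldevices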

Make sure that all the SAN Foundation Suite packages and patches are installed.

Make sure that the rpcbind local_only property is set to false.



nodo1 # svcprop network/rpc/bind:default | grep local_
config/local_only boolean false

If it is not set to false, run:




svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit

svcadm refresh network/rpc/bind:default

On NODO1, start the Sun Cluster 3.2 installation.




nodo1 # scinstall

*** Main Menu ***

Please select from one of the following (*) options:

* 1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit

Option: 1

*** New Cluster and Cluster Node Menu ***

Please select from any one of the following options:

1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
?) Help with menu options
q) Return to the Main Menu

Option: 1

*** Create a New Cluster ***

This option creates and configures a new cluster.
You must use the Java Enterprise System (JES) installer to install the
Sun Cluster framework software on each machine in the new cluster
before you select this option.
If the "remote configuration" option is unselected from the JES
installer when you install the Sun Cluster framework on any of the new
nodes, then you must configure either the remote shell (see rsh(1)) or
the secure shell (see ssh(1)) before you select this option. If rsh or
ssh is used, you must enable root access to all of the new member
nodes from this node.

Press Control-d at any time to return to the Main Menu.

Do you want to continue (yes/no) [yes]? yes

>>> Typical or Custom Mode <<<

This tool supports two modes of operation, Typical mode and Custom.
For most clusters, you can use Typical mode. However, you might need
to select the Custom mode option if not all of the Typical defaults
can be applied to your cluster.
For more information about the differences between Typical and Custom
modes, select the Help option from the menu.

Please select from one of the following options:

1) Typical
2) Custom
?) Help
q) Return to the Main Menu

Option [1]: 2

>>> Cluster Name <<<

Each cluster has a name assigned to it. The name can be made up of any
characters other than whitespace. Each cluster name should be unique
within the namespace of your enterprise.

What is the name of the cluster you want to establish [CLUSTER2]?

>>> Cluster Nodes <<<

This Sun Cluster release supports a total of up to 16 nodes.
Please list the names of the other nodes planned for the initial
cluster configuration. List one node name per line. When finished,

type Control-D:

Node name: nodo1
Node name: nodo2

Node name (Control-D to finish): ^D

This is the complete list of nodes:

nodo1
nodo2

Is it correct (yes/no) [yes]? yes

Attempting to contact "nodo2" ... done
Searching for a remote configuration method ... done

The Sun Cluster framework is able to complete the configuration
process without remote shell access.

Press Enter to continue:

>>> Authenticating Requests to Add Nodes <<<

Once the first node establishes itself as a single node cluster, other
nodes attempting to add themselves to the cluster configuration must
be found on the list of nodes you just provided. You can modify this
list by using claccess(1CL) or other tools once the cluster has been
established.
By default, nodes are not securely authenticated as they attempt to
add themselves to the cluster configuration. This is generally
considered adequate, since nodes which are not physically connected to
the private cluster interconnect will never be able to actually join
the cluster. However, DES authentication is available. If DES
authentication is selected, you must configure all necessary
encryption keys before any node will be allowed to join the cluster

(see keyserv(1M), publickey(4)).

Do you need to use DES authentication (yes/no) [no]? no

>>> Network Address for the Cluster Transport <<<

The cluster transport uses a default network address of 172.16.0.0. If
this IP address is already in use elsewhere within your enterprise,
specify another address from the range of recommended private
addresses (see RFC 1918 for details).

The default netmask is 255.255.248.0. You can select another netmask,
as long as it minimally masks all bits that are given in the network
address.

The default private netmask and network address result in an IP
address range that supports a cluster with a maximum of 64 nodes and
10 private networks.

Is it okay to accept the default network address (yes/no) [no]?
What network address do you want to use [33.33.33.128]?
The combination of private netmask and network address will dictate
the maximum number of both nodes and private networks that can be
supported by a cluster. Given your private network address, this
program will generate a range of recommended private netmasks that are
based on the maximum number of nodes and private networks that you
anticipate for this cluster.

In specifying the anticipated maximum number of nodes and private
networks for this cluster, it is important that you give serious
consideration to future growth potential. While both the private
netmask and network address can be changed later, the tools for making
such changes require that all nodes in the cluster be booted in
noncluster mode.

Maximum number of nodes anticipated for future growth [2]?

Maximum number of private networks anticipated for future growth [2]?

Specify a netmask of 255.255.255.224 to meet anticipated future
requirements of 2 cluster nodes and 2 private networks.
To accommodate more growth, specify a netmask of 255.255.255.192 to
support up to 4 cluster nodes and 4 private networks.
What netmask do you want to use [255.255.255.224]? 255.255.255.128

>>> Minimum Number of Private Networks <<<

Each cluster is typically configured with at least two private
networks. Configuring a cluster with just one private interconnect
provides less availability and will require the cluster to spend more
time in automatic recovery if that private interconnect fails.

Should this cluster use at least two private networks (yes/no) [yes]? yes

>>> Point-to-Point Cables <<<

The two nodes of a two-node cluster may use a directly-connected
interconnect. That is, no cluster switches are configured. However,
when there are greater than two nodes, this interactive form of
scinstall assumes that there will be exactly one switch for each
private network.

Does this two-node cluster use switches (yes/no) [yes]? yes

>>> Cluster Switches <<<

All cluster transport adapters in this cluster must be cabled to a
"switch". And, each adapter on a given node must be cabled to a
different switch. Interactive scinstall requires that you identify one
switch for each private network in the cluster.

What is the name of the first switch in the cluster [switch1]?

What is the name of the second switch in the cluster [switch2]?

>>> Cluster Transport Adapters and Cables <<<

You must configure the cluster transport adapters for each node in the
cluster. These are the adapters which attach to the private cluster
interconnect.

Select the first cluster transport adapter for "nodo1":

1) ce1
2) ce3
3) ce4
4) ce5
5) eri0
6) eri1
7) Other

Option: 1

Will this be a dedicated cluster transport adapter (yes/no) [yes]? yes

Adapter "ce1" is an Ethernet adapter.

Searching for any unexpected network traffic on "ce1" ... done
Unexpected network traffic was seen on "ce1".
"ce1" may be cabled to a public network.

Do you want to use "ce1" anyway (yes/no) [no]? yes

The "dlpi" transport type will be set for this cluster.

For node "nodo1",

Name of the switch to which "ce1" is connected [switch1]?
Each adapter is cabled to a particular port on a switch. And, each
port is assigned a name. You can explicitly assign a name to each
port. Or, for Ethernet and Infiniband switches, you can choose to
allow scinstall to assign a default name for you. The default port
name assignment sets the name to the node number of the node hosting
the transport adapter at the other end of the cable.

For node "nodo1",

Use the default port name for the "ce1" connection (yes/no) [yes]? yes

Select the second cluster transport adapter for "nodo1":

1) ce1
2) ce3
3) ce4
4) ce5
5) eri0
6) eri1
7) Other

Option: 2

Will this be a dedicated cluster transport adapter (yes/no) [yes]? yes

Adapter "ce3" is an Ethernet adapter.

Searching for any unexpected network traffic on "ce3" ... done
Unexpected network traffic was seen on "ce3".
"ce3" may be cabled to a public network.

Do you want to use "ce3" anyway (yes/no) [no]? yes

For node "nodo1",

Name of the switch to which "ce3" is connected [switch2]?

For node "nodo1",

Use the default port name for the "ce3" connection (yes/no) [yes]? yes
For all other nodes, Autodiscovery is the best method for configuring the cluster
transport. However, you can choose to manually configure the remaining
adapters and cables.

Is it okay to use autodiscovery for the other nodes (yes/no) [yes]? no

For node "nodo2",

What is the name of the first cluster transport adapter? ce1

Will this be a dedicated cluster transport adapter (yes/no) [yes]? yes

Adapter "ce1" is an Ethernet adapter.

For node "nodo2",

Name of the switch to which "ce1" is connected [switch1]?

For node "nodo2",

Use the default port name for the "ce1" connection (yes/no) [yes]? yes

For node "nodo2",

What is the name of the second cluster transport adapter? ce3

Will this be a dedicated cluster transport adapter (yes/no) [yes]? yes

Adapter "ce3" is an Ethernet adapter.

For node "nodo2",

Name of the switch to which "ce3" is connected [switch2]?

For node "nodo2",

Use the default port name for the "ce3" connection (yes/no) [yes]? yes

>>> Quorum Configuration <<<

Every two-node cluster requires at least one quorum device. By
default, scinstall will select and configure a shared SCSI quorum disk
device for you.
This screen allows you to disable the automatic selection and
configuration of a quorum device.

The only time that you must disable this feature is when ANY of the
shared storage in your cluster is not qualified for use as a Sun
Cluster quorum device. If your storage was purchased with your
cluster, it is qualified. Otherwise, check with your storage vendor to
determine whether your storage device is supported as Sun Cluster
quorum device.

Do you want to disable automatic quorum device selection (yes/no) [no]? yes

>>> Global Devices File System <<<

Each node in the cluster must have a local file system mounted on
/global/.devices/node before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you.
You must supply the name of either an already-mounted file system or
raw disk partition which scinstall can use to create the global
devices file system. This file system or partition should be at least
512 MB in size.
If an already-mounted file system is used, the file system must be
empty. If a raw disk partition is used, a new file system will be
created for you.

The default is to use /globaldevices.

For node "nodo1",

Is it okay to use this default (yes/no) [yes]? yes

Testing for "/globaldevices" on "nodo1" ... done

For node "nodo2",

Is it okay to use this default (yes/no) [yes]? yes

Testing for "/globaldevices" on "nodo2" ... done

Is it okay to create the new cluster (yes/no) [yes]? yes

During the cluster creation process, sccheck is run on each of the new
cluster nodes. If sccheck detects problems, you can either interrupt
the process or check the log files after the cluster has been
established.

Interrupt cluster creation for sccheck errors (yes/no) [no]? no

Cluster Creation

Log file - /var/cluster/logs/install/scinstall.log.5518

Started sccheck on "nodo1".
Started sccheck on "nodo2".

sccheck completed with no errors or warnings for "nodo1".
sccheck completed with no errors or warnings for "nodo2".

Configuring "nodo2" ... done
Rebooting "nodo2" ... done
Configuring "nodo1" ... done

Rebooting "nodo1" ...
Log file - /var/cluster/logs/install/scinstall.log.5518
Rebooting ...

Steps to configure IPMP



On nodo1, add the following lines to /etc/hosts:



10.77.55.110 nodo1 nodo1.dominio.com loghost
10.77.55.108 nodo1-ce0-test
10.77.55.109 nodo1-ce2-test
10.77.55.111 nodo1-virtual
10.77.55.125 nodo2-virtual
10.77.55.124 nodo2
10.77.55.187 cluster-siebel
### Oracle RAC private network
10.33.33.186 nodo1-priv-test0
10.33.33.187 nodo1-priv-test1
10.33.33.188 nodo1-priv
### Sun cluster interconnect ( clprivnet0 )
10.33.33.66 nodo1-priv-sc
10.10.33.65 nodo2-priv-sc


Create the following files in /etc:



/etc/hostname.ce0. Add the following:



nodo1-ce0-test group SIEBEL -failover deprecated \
netmask + broadcast + up

/etc/hostname.ce2. Add the following:




nodo1 group SIEBEL netmask + broadcast + up
addif nodo1-ce2-test -failover deprecated \
netmask + broadcast + up

/etc/hostname.ce4. Add the following:



nodo1-priv-test0 group rac -failover deprecated \
netmask + broadcast + up

/etc/hostname.ce5. Add the following:



nodo1-priv group rac netmask + broadcast + up
addif nodo1-priv-test1 -failover deprecated \
netmask + broadcast + up

Then reboot and check with ifconfig -a (and repeat the same steps on nodo2).




nodo1 # ifconfig -a
ce0:flags=9040843
mtu 1500 index 2 inet 10.77.55.108 netmask ffffff00 broadcast 10.77.55.255
groupname SIEBEL ether 0:14:4f:67:ad:52
ce0:1: flags=1040843 mtu 1500 index 2
inet 10.77.55.187 netmask ffffff00 broadcast 10.77.55.255
ce1: flags=1008843 mtu 1500 index 8
inet 10.33.33.18 netmask fffffff8 broadcast 10.33.33.23 ether 0:14:4f:67:ad:53
ce2: flags=1000843 mtu 1500 index 3
inet 10.77.55.110 netmask ffffff00 broadcast 10.77.55.255
groupname SIEBEL ether 0:14:4f:68:50:d0
ce2:1: flags=9040843 mtu 1500 index 3 inet 10.77.55.109 netmask ffffff00 broadcast 10.77.55.255
ce2:2: flags=1040843 mtu 1500 index 3
inet 10.77.55.111 netmask ffffff00 broadcast 10.77.100.255
ce3: flags=1008843 mtu 1500 index 7
inet 10.33.33.10 netmask fffffff8 broadcast 10.33.33.15 ether 0:14:4f:68:50:d1
ce4: flags=9040843 mtu 1500 index 4 inet 10.33.33.186 netmask ffffffc0 broadcast 10.33.33.255
groupname rac ether 0:14:4f:68:50:d2
ce5: flags=1000843 mtu 1500 index 5
inet 10.33.33.188 netmask ffffffc0 broadcast 10.33.33.255
dman0: flags=1008843 mtu 1500 index 4
inet 10.2.1.7 netmask ffffffe0 broadcast 10.2.1.31 ether 0:0:be:aa:40:d5
clprivnet0: flags=1009843 mtu 1500 index 7
inet 10.33.33.66 netmask ffffffe0 broadcast 10.33.33.95 ether 0:0:0:0:0:2


Configure the quorum device via clsetup (if it has not been created yet):



nodo1 # scdidadm -L
1 nodo2:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 nodo2:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 nodo2:/dev/rdsk/c10t60060480000290101637533030353533d0 /dev/did/rdsk/d3
3 nodo1:/dev/rdsk/c2t60060480000290101637533030353533d0 /dev/did/rdsk/d3
4 nodo2:/dev/rdsk/c10t60060480000290101637533030344544d0 /dev/did/rdsk/d4
4 nodo1:/dev/rdsk/c2t60060480000290101637533030344544d0 /dev/did/rdsk/d4
5 nodo2:/dev/rdsk/c10t60060480000290101637533030344435d0 /dev/did/rdsk/d5
5 nodo1:/dev/rdsk/c2t60060480000290101637533030344435d0 /dev/did/rdsk/d5
6 nodo2:/dev/rdsk/c10t60060480000290101637533030344537d0 /dev/did/rdsk/d6
6 nodo1:/dev/rdsk/c2t60060480000290101637533030344537d0 /dev/did/rdsk/d6
7 nodo2:/dev/rdsk/c10t60060480000290101637533030344442d0 /dev/did/rdsk/d7
7 nodo1:/dev/rdsk/c2t60060480000290101637533030344442d0 /dev/did/rdsk/d7
8 nodo2:/dev/rdsk/c10t60060480000290101637533030344531d0 /dev/did/rdsk/d8
8 nodo1:/dev/rdsk/c2t60060480000290101637533030344531d0 /dev/did/rdsk/d8
9 nodo2:/dev/rdsk/c10t60060480000290101637533030353532d0 /dev/did/rdsk/d9
9 nodo1:/dev/rdsk/c2t60060480000290101637533030353532d0 /dev/did/rdsk/d9
10 nodo2:/dev/rdsk/c10t60060480000290101637533031393430d0 /dev/did/rdsk/d10
11 nodo2:/dev/rdsk/c10t60060480000290101637533031393246d0 /dev/did/rdsk/d11
11 nodo1:/dev/rdsk/c2t60060480000290101637533031393246d0 /dev/did/rdsk/d11
12 nodo2:/dev/rdsk/c8t5006048452A66157d0 /dev/did/rdsk/d12
12 nodo2:/dev/rdsk/c6t5006048452A66168d0 /dev/did/rdsk/d12
12 nodo1:/dev/rdsk/c4t5006048C52A66147d0 /dev/did/rdsk/d12
12 nodo1:/dev/rdsk/c6t5006048C52A66168d0 /dev/did/rdsk/d12
13 nodo2:/dev/rdsk/c10t60060480000290101637533031393335d0 /dev/did/rdsk/d13
13 nodo1:/dev/rdsk/c2t60060480000290101637533031393335d0 /dev/did/rdsk/d13
14 nodo2:/dev/rdsk/c0t10d0 /dev/did/rdsk/d14
15 nodo2:/dev/rdsk/c0t11d0 /dev/did/rdsk/d15
16 nodo1:/dev/rdsk/c0t8d0 /dev/did/rdsk/d16
17 nodo1:/dev/rdsk/c0t9d0 /dev/did/rdsk/d17
18 nodo1:/dev/rdsk/c0t10d0 /dev/did/rdsk/d18
19 nodo1:/dev/rdsk/c0t11d0 /dev/did/rdsk/d19
20 nodo1:/dev/rdsk/c2t60060480000290101637533031393342d0 /dev/did/rdsk/d20
8191 nodo2:/dev/rmt/0 /dev/did/rmt/1

nodo1 # clsetup
>>> Initial Cluster Setup <<<
This program has detected that the cluster "installmode" attribute is
still enabled. As such, certain initial cluster setup steps will be
performed at this time. This includes adding any necessary quorum
devices, then resetting both the quorum vote counts and the
"installmode" property.
Please do not proceed if any additional nodes have yet to join the
cluster.
Is it okay to continue (yes/no) [yes]? yes
Do you want to add any quorum devices (yes/no) [yes]? yes
Following are supported Quorum Devices types in Sun Cluster. Please
refer to Sun Cluster documentation for detailed information on these
supported quorum device topologies.

What is the type of device you want to use?
1) Directly attached shared disk
2) Network Attached Storage (NAS) from Network Appliance
3) Quorum Server
q) Return to the quorum menu

Option: 1
>>> Add a SCSI Quorum Disk <<<
A SCSI quorum device is considered to be any Sun Cluster supported
attached storage which connected to two or more nodes of the cluster.
Dual-ported SCSI-2 disks may be used as quorum devices in two-node
clusters. However, clusters with more than two nodes require that
SCSI-3 PGR disks be used for all disks with more than two node-to-disk
paths.
You can use a disk containing user data or one that is a member of a
device group as a quorum device.
For more information on supported quorum device topologies, see the
Sun Cluster documentation.
Is it okay to continue (yes/no) [yes]? yes
Which global device do you want to use (d)? d3
Is it okay to proceed with the update (yes/no) [yes]? yes

clquorum add d3

Command completed successfully.

Press Enter to continue:
Do you want to add another quorum device (yes/no) [yes]? no
Once the "installmode" property has been reset, this program will skip
"Initial Cluster Setup" each time it is run again in the future.
However, quorum devices can always be added to the cluster using the
regular menu options. Resetting this property fully activates quorum
settings and is necessary for the normal and safe operation of the cluster.
Is it okay to reset "installmode" (yes/no) [yes]? yes
clquorum reset
claccess deny-all
Cluster initialization is complete.

Type ENTER to proceed to the main menu:

*** Main Menu ***
Please select from one of the following options:

1) Quorum
2) Resource groups
3) Data Services
4) Cluster interconnect
5) Device groups and volumes
6) Private hostnames
7) New nodes
8) Other cluster tasks
?) Help with menu options
q) Quit

Option: q


Enable automatic reboot of the nodes if ALL the monitored disk paths fail.




nodo1 # clnode set -p reboot_on_path_failure=enabled nodo1 nodo2

Disable monitoring on all the local disks.



nodo1 # cldev status

Cluster DID Devices ===

Device Instance Node Status
--------------- ---- ------
/dev/did/rdsk/d1 nodo2 Ok
/dev/did/rdsk/d10 nodo2 Ok
/dev/did/rdsk/d11 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d12 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d13 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d14 nodo2 Ok
/dev/did/rdsk/d15 nodo2 Ok
/dev/did/rdsk/d16 nodo1 Ok
/dev/did/rdsk/d17 nodo1 Ok
/dev/did/rdsk/d18 nodo1 Ok
/dev/did/rdsk/d19 nodo1 Ok
/dev/did/rdsk/d2 nodo2 Ok
/dev/did/rdsk/d20 nodo1 Ok
/dev/did/rdsk/d3 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d4 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d5 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d6 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d7 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d8 nodo2 Ok
nodo1 Ok
/dev/did/rdsk/d9 nodo2 Ok
nodo1 Ok

nodo1 # scdidadm -L
1 nodo2:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 nodo2:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 nodo2:/dev/rdsk/c10t60060480000290101637533030353533d0 /dev/did/rdsk/d3
3 nodo1:/dev/rdsk/c2t60060480000290101637533030353533d0 /dev/did/rdsk/d3
4 nodo2:/dev/rdsk/c10t60060480000290101637533030344544d0 /dev/did/rdsk/d4
4 nodo1:/dev/rdsk/c2t60060480000290101637533030344544d0 /dev/did/rdsk/d4
5 nodo2:/dev/rdsk/c10t60060480000290101637533030344435d0 /dev/did/rdsk/d5
5 nodo1:/dev/rdsk/c2t60060480000290101637533030344435d0 /dev/did/rdsk/d5
6 nodo2:/dev/rdsk/c10t60060480000290101637533030344537d0 /dev/did/rdsk/d6
6 nodo1:/dev/rdsk/c2t60060480000290101637533030344537d0 /dev/did/rdsk/d6
7 nodo2:/dev/rdsk/c10t60060480000290101637533030344442d0 /dev/did/rdsk/d7
7 nodo1:/dev/rdsk/c2t60060480000290101637533030344442d0 /dev/did/rdsk/d7
8 nodo2:/dev/rdsk/c10t60060480000290101637533030344531d0 /dev/did/rdsk/d8
8 nodo1:/dev/rdsk/c2t60060480000290101637533030344531d0 /dev/did/rdsk/d8
9 nodo2:/dev/rdsk/c10t60060480000290101637533030353532d0 /dev/did/rdsk/d9
9 nodo1:/dev/rdsk/c2t60060480000290101637533030353532d0 /dev/did/rdsk/d9
10 nodo2:/dev/rdsk/c10t60060480000290101637533031393430d0 /dev/did/rdsk/d10
11 nodo2:/dev/rdsk/c10t60060480000290101637533031393246d0 /dev/did/rdsk/d11
11 nodo1:/dev/rdsk/c2t60060480000290101637533031393246d0 /dev/did/rdsk/d11
12 nodo2:/dev/rdsk/c8t5006048452A66157d0 /dev/did/rdsk/d12
12 nodo2:/dev/rdsk/c6t5006048452A66168d0 /dev/did/rdsk/d12
12 nodo1:/dev/rdsk/c4t5006048C52A66147d0 /dev/did/rdsk/d12
12 nodo1:/dev/rdsk/c6t5006048C52A66168d0 /dev/did/rdsk/d12
13 nodo2:/dev/rdsk/c10t60060480000290101637533031393335d0 /dev/did/rdsk/d13
13 nodo1:/dev/rdsk/c2t60060480000290101637533031393335d0 /dev/did/rdsk/d13
14 nodo2:/dev/rdsk/c0t10d0 /dev/did/rdsk/d14
15 nodo2:/dev/rdsk/c0t11d0 /dev/did/rdsk/d15
16 nodo1:/dev/rdsk/c0t8d0 /dev/did/rdsk/d16
17 nodo1:/dev/rdsk/c0t9d0 /dev/did/rdsk/d17
18 nodo1:/dev/rdsk/c0t10d0 /dev/did/rdsk/d18
19 nodo1:/dev/rdsk/c0t11d0 /dev/did/rdsk/d19
20 nodo1:/dev/rdsk/c2t60060480000290101637533031393342d0 /dev/did/rdsk/d20
8191 nodo2:/dev/rmt/0 /dev/did/rmt/1

nodo1 # cldev unmonitor d1

nodo1 # cldev unmonitor d2

nodo1 # cldev unmonitor d14

nodo1 # cldev unmonitor d15

nodo1 # cldev unmonitor d16

nodo1 # cldev unmonitor d17

nodo1 # cldev unmonitor d18

nodo1 # cldev unmonitor d19


To verify that the local disks are no longer being monitored, run cldev show or scdpm -p all:all.




nodo1 # clnode show
Cluster Nodes ===
Node Name: nodo2
Node ID: 1
Enabled: yes
privatehostname: clusternode1-priv
reboot_on_path_failure: enabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x48221C3C00000001
Transport Adapter List: ce1, ce3

Node Name: nodo1
Node ID: 2
Enabled: yes
privatehostname: clusternode2-priv
reboot_on_path_failure: enabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x48221C3C00000002
Transport Adapter List: ce1, ce3

nodo1 # scdpm -p all:all
nodo2:reboot_on_path_failure enabled
nodo2:/dev/did/rdsk/d1 Unmonitored
nodo2:/dev/did/rdsk/d2 Unmonitored
nodo2:/dev/did/rdsk/d3 Ok
nodo2:/dev/did/rdsk/d4 Ok
nodo2:/dev/did/rdsk/d5 Ok
nodo2:/dev/did/rdsk/d6 Ok
nodo2:/dev/did/rdsk/d7 Ok
nodo2:/dev/did/rdsk/d8 Ok
nodo2:/dev/did/rdsk/d9 Ok
nodo2:/dev/did/rdsk/d10 Ok
nodo2:/dev/did/rdsk/d11 Ok
nodo2:/dev/did/rdsk/d12 Ok
nodo2:/dev/did/rdsk/d13 Ok
nodo2:/dev/did/rdsk/d14 Unmonitored
nodo2:/dev/did/rdsk/d15 Unmonitored
nodo1:reboot_on_path_failure enabled
nodo1:/dev/did/rdsk/d16 Unmonitored
nodo1:/dev/did/rdsk/d17 Unmonitored
nodo1:/dev/did/rdsk/d18 Unmonitored
nodo1:/dev/did/rdsk/d19 Unmonitored
nodo1:/dev/did/rdsk/d20 Ok
nodo1:/dev/did/rdsk/d3 Ok
nodo1:/dev/did/rdsk/d4 Ok
nodo1:/dev/did/rdsk/d5 Ok
nodo1:/dev/did/rdsk/d6 Ok
nodo1:/dev/did/rdsk/d7 Ok
nodo1:/dev/did/rdsk/d8 Ok
nodo1:/dev/did/rdsk/d9 Ok
nodo1:/dev/did/rdsk/d11 Ok
nodo1:/dev/did/rdsk/d12 Ok
nodo1:/dev/did/rdsk/d13 Ok


Only on Solaris 10: if the SUNWescom package is not installed, follow these steps.



Disable the scsymon service.



nodo1 # svcadm -v disable \
system/cluster/scsymon-srv

svc:/system/cluster/scsymon-srv:default disabled.

Delete the line $SVCADM enable svc:/system/cluster/scsymon-srv:default in the file

/usr/cluster/lib/svc/method/svc_cl_enable.


Also delete the line svc:/system/cluster/scsymon-srv:default" in the file
/usr/cluster/lib/svc/method/svc_boot_check.


There is a double quote (") at the end of the line svc:/system/cluster/scsymon-srv:default". Move it to the previous line.



nodo1 # cd /usr/cluster/lib/svc/method
nodo1 # vi svc_cl_svc_enable
"svc_cl_svc_enable" [Read only] 80 lines, 2154 characters
#!/sbin/sh
#
"svc_cl_svc_enable" [Read only] 80 lines, 2154 characters
nodo1 # vi svc_boot_check
"svc_boot_check" [Read only] 203 lines, 6284 characters
#!/bin/sh
#
svc:/system/cluster/sc_svtag:default
svc:/system/cluster/scsymon-srv:default"
:wq! "svc_boot_check" 203 lines, 6288 characters

nodo1 # scshutdown -y -g0


If you need to modify the infrastructure file by hand, boot outside the cluster with the -x option, run ccradm, and finally
reboot.



{181} ok boot -x

root@nodo1 # /usr/cluster/lib/sc/ccradm -i \
/etc/cluster/ccr/infrastructure -o


nodo1 # init 6
nodo1 # clinterconnect status
Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
--------- --------- ------
nodo2:ce3 nodo1:ce3 Path online
nodo2:ce1 nodo1:ce1 Path online

Steps to mirror the boot disk



Initialize the mirror disk:



vxdisksetup -i c0t11d0 format=sliced

Add the disk to the disk group.




vxdg -g rootdg adddisk rootmirror=c0t11d0s2

Mirror the disk.



vxmirror -g rootdg rootdisk rootmirror
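
To watch the resynchronization and confirm that both plexes end up attached (disk and group names as above):

vxtask list
vxprint -g rootdg -ht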

Create a local disk group with its corresponding volumes.



vxdisksetup -i Disk_5
vxdg init localdg localdg01=Disk_5
vxassist -g localdg make vol01 2g localdg01
newfs /dev/vx/rdsk/localdg/vol01
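
The new file system can then be mounted as usual; a minimal sketch, using a hypothetical mount point /local:

mkdir /local
mount /dev/vx/dsk/localdg/vol01 /local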

Steps to configure the cluster for Oracle RAC


Oracle RAC is supported only in the global zone.



If the Oracle version is 10g, Oracle CRS must be installed first.



If VxVM is in use, check that the CVM licenses are installed:



vxlicrep and vxlicinst

Create the Oracle group and user.



groupadd -g 1000 dba
groupadd -g 1001 oinstall
useradd -u 1000 -g 1000 -G 1001 -c "oracle DBA" -m -d \
/export/home/oracle -s /usr/bin/ksh oracle
chown oracle:dba /export/home/oracle
passwd oracle
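
A quick sanity check that the account and its group memberships came out as intended:

id -a oracle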

Install the RAC services on both nodes. The following packages must be installed:




SUNWscucmd
SUNWudlm
SUNWudlmr

For SVM, SUNWscmd is needed.


For CVM, SUNWcvm and SUNWcvmr are needed.


SUNWscor is also needed for the data service wizards to work correctly.



Install the Oracle UDLM package.





pkgadd -d . ORCLudlm
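
To confirm the package registered correctly on each node:

pkginfo -l ORCLudlm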

Add the recommended Oracle RAC parameters to /etc/system.



set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10

Then reboot the machine.
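
After the reboot, the new IPC limits can be double-checked with sysdef, which reports the kernel tunables:

nodo1 # sysdef | grep -i sema
nodo1 # sysdef | grep -i "shared memory"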



Caution: change port 6000, which comes set by default in /opt/SUNWudlm/etc/udlm.conf, because it can conflict with the sshd port.


A valid option is port 7000. Make this change while the DLM is not running.


Create the RGM resources for the Oracle RAC data service.




clrt register SUNW.rac_framework
clrt register SUNW.rac_udlm

clrg create -n nodo1,nodo2 -p maximum_primaries=2 -p desired_primaries=2 -p rg_mode=Scalable rac-framework-rg
clrs create -g rac-framework-rg -t SUNW.rac_framework rac-framework-rs
clrs create -g rac-framework-rg -t SUNW.rac_udlm -p port=7000 -p resource_dependencies=rac-framework-rs rac-udlm-rs
clrs create -g rac-framework-rg -t SUNW.rac_svm -p resource_dependencies=rac-framework-rs rac-svm-rs

Bring the resource group online.


clrg online -emM rac-framework-rg
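
One way to confirm that the resource types were registered and the framework resources came online on both nodes:

clrt list
clrg status rac-framework-rg
clrs status -g rac-framework-rg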


Steps to create raw devices for Oracle RAC.



vxassist -g racdg -U gen make ocr1 1g
vxedit -g racdg set user=oracle group=dba ocr1

Create the scalable resource group.




clrg create -n nodo1,nodo2 -p Desired_primaries=2 -p Maximum_primaries=2 -p RG_affinities=++rac-framework-rg -p RG_mode=Scalable scal-racdg-rg

clrt register SUNW.ScalDeviceGroup
clrs create -g scal-racdg-rg -t SUNW.ScalDeviceGroup -p Resource_dependencies=rac-cvm-rs -p DiskGroupName=racdg scal-racdg-rs
clrg online -emM scal-racdg-rg

Create the CRS resource.



clrt register SUNW.crs_framework
clrs create -g rac-framework-rg -t SUNW.crs_framework -p Resource_dependencies=rac-framework-rs -p Resource_dependencies_offline_restart{local_node} crs_framework-rs

Resource_dependencies_offline_restart is needed only if a scalable resource group has been created.




Create a proxy resource group for Oracle RAC.


If a scalable device group is not used, remove the dependencies on it.



clrg create -n nodo1,nodo2 -p Maximum_primaries=2 -p Desired_primaries=2 -p RG_mode=Scalable -p RG_affinities=++rac-framework-rg,++scal-racdg-rg rac-proxy-rg

clrt register SUNW.scalable_rac_server_proxy

clrs create -g rac-proxy-rg -t SUNW.scalable_rac_server_proxy -p ORACLE_HOME=/oracle/product/10.2.0.2/db -p CRS_HOME=/oracle/product/10.2.0.2/crs -p DB_NAME=SUNPR -p ORACLE_SID{nodo1}=SUNPR1 -p ORACLE_SID{nodo2}=SUNPR2 -p Resource_dependencies=rac-framework-rs -p Resource_dependencies_offline_restart=scal-racdg-rs,crs_framework-rs rac-proxy-rs

clrg online -emM rac-proxy-rg
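
As a final check, the cluster-side and CRS-side views can be compared (crs_stat ships with the 10g CRS; path as per the CRS_HOME used above):

nodo1 # clrs status
nodo1 # /oracle/product/10.2.0.2/crs/bin/crs_stat -t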