unix sysadmin archives

Monday 24 October 2011

sesudo


Executes commands that require superuser authority on behalf of a regular user.
SYNOPSIS
sesudo [-h] | [command [parameters]]
DESCRIPTION
The sesudo command borrows the permissions of another user (known as the target user) to perform one or more commands. This enables regular users to perform actions that require superuser authority, such as the mount command. The rules governing the user's authority to perform the command are defined in the SUDO class.
Notes
  • You must define the access rules for the user in the SUDO class. The definition may specify commands that the user can use and commands that the user is prohibited from using.
  • The output depends on the command that is being executed. Error messages are sent to the standard error device (stderr), usually defined as the terminal screen.
  • To execute the sesudo command, the user should specify the following command at the UNIX shell prompt:
    sesudo profile_name
    
  • You can choose whether the command is displayed before it is executed. The default value is that commands are not displayed. To display commands, change the value in the echo_command token in the sesudo section of the seos.ini file.
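    For example, the echo_command token in seos.ini might look like this (a sketch only; the exact section layout and value format can vary by release):

    [sesudo]
    echo_command = yes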
Arguments
-h
Displays the help screen.
command [parameters]
Specifies the command that is to be performed on behalf of the user. The command name must be the name of a record in the SUDO class. Multiple parameters can be specified, provided they are separated by spaces.
Prerequisites: Define SUDO Commands
Several steps must be performed before it is possible to use the sesudo command. The first step needs to be done only once. Other steps need to be done every time a new user is given the authority to execute the sesudo command, or every time a new profile is defined in the SUDO class.
  1. Define the sesudo program as a trusted setuid program owned by root. This step only needs to be done once per TACF installation. The format of the command is:
    newres PROGRAM /usr/seos/bin/sesudo defaccess(NONE)
    
  2. Give a user the authority to execute the sesudo program. Do this once for every user who is entitled to this authority. The format of the command is:
    authorize PROGRAM /usr/seos/bin/sesudo uid(user_name)
    
  3. Permit the user to surrogate to the target user using the sesudo program. Do this for every user who should have this authority, and do it for every target user ID that you want to make available to the user. The format of the command is:
    authorize SURROGATE USER.root uid(user_name) \
    via(pgm(/usr/seos/bin/sesudo))
    
  4. Define new records in the SUDO class for every command to be executed by users. For each command script, you can define permitted and forbidden parameters, permitted users, and password protection. If no parameters are specified as permitted or prohibited, then all parameters are permitted. The format of the command is:
    newres SUDO profile_name \
    data('cmd[;[prohibited-params][;permitted-params]]')
    

    A command can have prohibited and permitted parameters for each operand. The prohibited parameters and the permitted parameters for each operand are separated by the pipe symbol (|). The format is:

    newres SUDO profile_name \
    data('cmd;pro1|pro2|...|proN;per1|per2|...|perN')
    

    sesudo checks each parameter entered by the user in the following manner:
    1. Test if parameter number N matches permitted parameter N. (If permitted parameter N does not exist, the last permitted parameter is used.)
    2. Test if parameter number N matches prohibited parameter N. (If prohibited parameter N does not exist, the last prohibited parameter is used.)

    Only if all the parameters match permitted parameters, and none match prohibited parameters, does sesudo execute the command.
  5. Permit the user to access the profile that has been defined in the SUDO class. Do this for every profile a user should be able to access. The format of the command is:
    authorize SUDO profile_name uid(user_name)
    
    If defaccess is none, use the authorize command to specify each user who is granted permission. If defaccess is set to another value, use the authorize command to specify each user to whom access is forbidden.
  6. The sesudo command can display the command before executing it. Display depends on the value in the echo_command token in the [sesudo] section of the seos.ini file. The default value calls for no display, but the value can be changed.
  7. The output of the sesudo command depends on the command being performed. Error messages are sent to the standard error device (stderr), usually defined as the terminal's screen.
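As a combined, hedged illustration of steps 1 through 5, the sequence below grants a hypothetical user operator1 the right to run /usr/sbin/mount through a profile named sesudo_mount (the user, profile, and command names are made up for this example):

    newres PROGRAM /usr/seos/bin/sesudo defaccess(NONE)
    authorize PROGRAM /usr/seos/bin/sesudo uid(operator1)
    authorize SURROGATE USER.root uid(operator1) \
    via(pgm(/usr/seos/bin/sesudo))
    newres SUDO sesudo_mount data('/usr/sbin/mount')
    authorize SUDO sesudo_mount uid(operator1)

The user operator1 could then run the command as: sesudo sesudo_mount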
SUDO Record: Parameters and Variables
The special parameters used in connection with the SUDO record are explained in the following list:
profile_name
The name the security administrator gives to the superuser command.
cmd
The superuser command that a normal user can execute.
prohibited parameters
The parameters that you prohibit the regular user from invoking. These parameters may contain patterns or variables.
permitted parameters
The parameters that you specifically allow the regular user to invoke. These parameters may contain patterns or variables.
Prohibited and permitted parameters may also contain variables as described in the following list:
$A
Alphabetic value
$G
Existing TACF group name
$H
Home path pattern of the user
$N
Numeric value
$O
Executor's user name
$U
Existing TACF user name
$f
Existing file name
$g
Existing UNIX group name
$h
Existing host name
$r
Existing UNIX file name with UNIX read permission
$u
Existing UNIX user name
$w
Existing UNIX file name with UNIX write permission
$x
Existing UNIX file name with UNIX exec permission
Return Value
Each time the sesudo command runs, it returns one of the following values:
-2
Target user not found, or command interrupted
-1
Password error
0
Execution successful
10
Problem with usage of parameters
20
Target user error
30
Authorization error
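For example, a quick way to check the result from the shell (using the hypothetical sesudo_mount profile from the setup example above) is to inspect the exit status:

    sesudo sesudo_mount
    echo $?

A value of 0 indicates successful execution; a non-zero value indicates one of the error conditions above.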
EXAMPLES
  1. If you do not allow any parameters, define the profile in the following way:
    newres SUDO profile_name data('cmd;*')
    
  2. If you want to allow the user to invoke the name parameter, do the following:
    newres SUDO profile_name data('cmd;;NAME')
    
    In the previous example, the only parameter the user can enter is NAME.
  3. If you want to prevent the user from using -9 and -HUP but you permit the user to use all other parameters, do the following:
    newres SUDO profile_name data('cmd;-9 -HUP;*')
    
  4. If there are two prohibited parameters (the first is the UNIX user name and the second is the UNIX group name) and two permitted parameters (the first can be numeric and the second can be alphabetic), enter the following:
    newres SUDO profile_name \
    data('cmd;$u | $g ;$N | $A')
    
    The user cannot enter the UNIX user name, but can enter a numeric parameter for the first operand; and the user cannot enter the UNIX group name but can enter an alphabetic parameter for the second operand.
  5. If there are several prohibited parameters for several operands in the command, enter the following:
    newres SUDO profile_name \
    data('cmd;pro1 pro2 | pro3 pro4 | pro5 pro6')
    
    pro1 and pro2 are the prohibited parameters of the first operand of the command; pro3 and pro4 are the prohibited parameters of the second operand of the command; and pro5 and pro6 are the prohibited parameters of the third operand of the command.

Thursday 20 October 2011

PICL bug causes Solaris 10 prtdiag to hang

The Solaris PICL framework provides information about the system configuration, which it maintains in the PICL tree. I ran into a case where Solaris 10 prtdiag hangs; the fix is to stop and start picld.

# top
load averages: 1582.95, 1462.52, 1345.91 22:57:54
8548 processes:8532 sleeping, 1 running, 1 zombie, 14 on cpu
CPU states: % idle, % user, % kernel, % iowait, % swap
Memory: 8064M real, 2747M free, 4123M swap in use, 9005M swap free
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
13622 root 999 59 0 318M 297M sleep 370.3H 82.48% java
26222 root 1 0 0 0K 0K cpu/6 7:52 5.88% ps
25217 root 1 20 0 0K 0K sleep 11:59 5.49% ps
27618 root 1 0 0 0K 0K cpu/5 1:08 5.43% ps
27101 root 1 0 0 0K 0K cpu/4 3:31 5.16% ps

***You can see here that PID 13622 is using a lot of CPU.
***And when you check this PID, it points to prtdiag

# ps -ef | grep 13622

root 23066 13622 0 Oct 15 ? 0:00 /usr/bin/ctrun -l child -o pgrponly /bin/sh -c /usr/sbin/prtdiag
root 802 13622 0 Oct 15 ? 0:00 /usr/bin/ctrun -l child -o pgrponly /bin/sh -c /usr/sbin/prtdiag
root 28092 13622 0 Oct 15 ? 0:00 /usr/bin/ctrun -l child -o pgrponly /bin/sh -c /usr/sbin/prtdiag

***Restart the PICL
# svcadm restart picl

***Check the load via uptime

# uptime
1:26am up 50 day(s), 11:22, 3 users, load average: 1886.79, 1513.28, 1402.57

***After a couple of minutes check it again
# uptime
1:26am up 50 day(s), 11:23, 3 users, load average: 962.59, 1327.11, 1343.05

***You can observe a dramatic drop in the load
# top
load averages: 3.71, 367.87, 875.60 01:33:23
76 processes: 75 sleeping, 1 on cpu
CPU states: % idle, % user, % kernel, % iowait, % swap
Memory: 8064M real, 5630M free, 1103M swap in use, 12G swap free

PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
15726 root 1 42 0 108M 33M sleep 21:44 2.89% bptm
11116 root 1 52 0 108M 33M sleep 10:50 0.76% bptm
13622 root 78 59 0 215M 200M sleep 403.3H 0.12% java
15611 root 1 59 0 108M 67M sleep 5:09 0.10% bptm
16953 root 1 0 0 4432K 2160K cpu/11 0:00 0.08% top

Monday 17 October 2011

Introduction to Veritas Cluster Services

In any organization, every server in the network has a specific purpose in terms of its usage, and most of the time these servers are used to provide a stable environment for running the software applications that the organization's business requires. Usually these applications are very critical for the business, and organizations cannot afford to have them down even for minutes. For example: a bank has an application which takes care of its internet banking.

If an application is not critical in terms of business, the organization can consider running it standalone; in other words, whenever the application goes down it won't impact the actual business.
Usually, the clients for these applications connect to the application server using the server name, the server IP, or a specific application IP.

Now let us assume the organization has an application which is very critical for its business, and any impact to the application will cause a huge loss to the organization. In that case, the organization has one option to reduce the impact of an application failure caused by an operating system or hardware failure: purchase a secondary server with the same hardware configuration, install the same kind of OS and database, and configure it with the same application in passive mode. The application is then "failed over" from the primary server to this secondary server whenever there is an issue with the underlying hardware or operating system of the primary server. This is what we call an application server with a highly available configuration.

Whenever there is an issue related to the primary server which makes the application unavailable to the client machines, the application should be moved to another available server in the network by either manual or automatic intervention. Transferring the application from the primary server to the secondary server and making the secondary server active for the application is called a "failover" operation, and the reverse operation (i.e. restoring the application on the primary server) is called a "failback". Thus, we can call this configuration an application HA (highly available) setup, compared to the earlier standalone setup.

Now the question is, how does this manual failover work when there is an application issue due to hardware or the operating system?

A manual failover basically involves the steps below (a rough command sketch follows the list):

     1. The application IP should fail over to the secondary node.
     2. The same storage and data should be available on the secondary node.
     3. Finally, the application should fail over to the secondary node.
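As a rough sketch only (the interface, address, disk group, mount point, and start script names here are hypothetical, and the exact commands depend on the platform and application), the manual steps on the secondary node might look like:

     # ifconfig ce0 addif 192.168.10.50 netmask 255.255.255.0 up  <== bring up the application IP
     # vxdg import appdg                                          <== import the shared disk group
     # vxvol -g appdg startall                                    <== start the volumes
     # mount -F vxfs /dev/vx/dsk/appdg/appvol /apps               <== mount the application file system
     # /apps/bin/start_app                                        <== start the application (hypothetical script)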

Challenges in Manual Failover Configuration

    1. Resources must be monitored continuously.
    2. It is time consuming.
    3. It is technically complex when the application involves many dependent components.

On the other hand, we can use automatic failover software which can do the work without human intervention. It groups the primary server and the secondary server for the application, always keeps an eye on the primary server for any failures, and fails the application over to the secondary server automatically whenever there is an issue with the primary server.

Although we have two different servers supporting the application, both of them actually serve the same purpose. From the application client's perspective, they should be treated as a single application cluster server (composed of multiple physical servers in the background).

Now you know that a cluster is nothing but a "group of individual servers working together to serve the same purpose, and appearing as a single machine to the external world".

What cluster software is available in the market today? There are many options, depending on the operating system and the application to be supported. Some are native to the operating system, and others come from third-party vendors.

List of Cluster Software available in the market

    *SUN Cluster Services – Native Solaris Cluster
    *Linux Cluster Server – Native Linux cluster
    *Oracle RAC – Application level cluster for Oracle database that works on different Operating Systems
    *Veritas Cluster Services – Third Party Cluster Software works on Different Operating Systems like Solaris / Linux/ AIX / HP UX.
    *HACMP – IBM AIX based Cluster Technology
    *HP UX native Cluster Technology

Note: In this post, we are discussing VCS and its operations. This post is not going to cover the actual implementation or any command syntax of VCS, but will cover how VCS makes an application highly available (HA).

Veritas Cluster Services Components
VCS has two types of components: 1. Physical components 2. Logical components

Physical Components:
1. Nodes
VCS nodes host the service groups (managed applications). Each system is connected to networking hardware, and usually also to storage hardware. The systems contain components to provide resilient management of the applications, and start and stop agents.
Nodes can be individual systems, or they can be created with domains or partitions on enterprise-class systems. Individual cluster nodes each run their own operating system and possess their own boot device. Each node must run the same operating system within a single VCS cluster.
Clusters can have from 1 to 32 nodes. Applications can be configured to run on specific nodes within the cluster.

2. Shared storage
Storage is a key resource of most application services, and therefore most service groups. A managed application can only be started on a system that has access to its associated data files. Therefore, a service group can only run on all systems in the cluster if the storage is shared across all systems. In many configurations, a storage area network (SAN) provides this requirement.
You can use I/O fencing technology for data protection. I/O fencing blocks access to shared storage from any system that is not a current and verified member of the cluster.

3. Networking Components
Networking in the cluster is used for the following purposes:
    *Communications between the cluster nodes and the application clients and external systems.
    *Communications between the cluster nodes, called the heartbeat network.


Logical Components
1. Resources
Resources are hardware or software entities that make up the application. Resources include disk groups and file systems, network interface cards (NIC), IP addresses, and applications.
    1.1. Resource dependencies
    Resource dependencies indicate resources that depend on each other because of application or operating system requirements. Resource dependencies are graphically depicted in a hierarchy, also called a tree, where the resources higher up (parent) depend on the resources lower down (child).
   
    1.2. Resource types
    VCS defines a resource type for each resource it manages. For example, the NIC resource type can be configured to manage network interface cards. Similarly, all IP addresses can be configured using the IP resource type.
    VCS includes a set of predefined resources types. For each resource type, VCS has a corresponding agent, which provides the logic to control resources.

2. Service groups
A service group is a virtual container that contains all the hardware and software resources that are required to run the managed application. Service groups allow VCS to control all the hardware and software resources of the managed application as a single unit. When a failover occurs, resources do not fail over individually— the entire service group fails over. If there is more than one service group on a system, a group may fail over without affecting the others.

A single node may host any number of service groups, each providing a discrete service to networked clients. If the server crashes, all service groups on that node must be failed over elsewhere.

Service groups can be dependent on each other. For example a finance application may be dependent on a database application. Because the managed application consists of all components that are required to provide the service, service group dependencies create more complex managed applications. When you use service group dependencies, the managed application is the entire dependency tree.
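As an illustrative sketch only (the group names are made up), such a dependency could be linked with the hagrp command, for example:

    # hagrp -link finance_sg database_sg online global soft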

2.1. Types of service groups

VCS service groups fall into three main categories: failover, parallel, and hybrid.

   * Failover service groups
    A failover service group runs on one system in the cluster at a time. Failover groups are used for most applications that do not support multiple systems to simultaneously access the application’s data.

   * Parallel service groups
    A parallel service group runs simultaneously on more than one system in the cluster. A parallel service group is more complex than a failover group. Parallel service groups are appropriate for applications that manage multiple application instances running simultaneously without data corruption.

   * Hybrid service groups
    A hybrid service group is for replicated data clusters and is a combination of the failover and parallel service groups. It behaves as a failover group within a system zone and a parallel group across system zones.

3. VCS Agents
Agents are multi-threaded processes that provide the logic to manage resources. VCS has one agent per resource type. The agent monitors all resources of that type; for example, a single IP agent manages all IP resources.
When the agent is started, it obtains the necessary configuration information from VCS. It then periodically monitors the resources, and updates VCS with the resource status.

4.  Cluster Communications and VCS Daemons
Cluster communications ensure that VCS is continuously aware of the status of each system’s service groups and resources. They also enable VCS to recognize which systems are active members of the cluster, which have joined or left the cluster, and which have failed.

4.1. High availability daemon (HAD)
    The VCS high availability daemon (HAD) runs on each system. Also known as the VCS engine, HAD is responsible for:

       * building the running cluster configuration from the configuration files
       * distributing the information when new nodes join the cluster
       * responding to operator input
       * taking corrective action when something fails.

    The engine uses agents to monitor and manage resources. It collects information about resource states from the agents on the local system and forwards it to all cluster members. The local engine also receives information from the other cluster members to update its view of the cluster.

    The hashadow process monitors HAD and restarts it when required.
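    For reference, a quick way to see what HAD is tracking (system states, service group states, and resources) is the cluster summary command:

    # hastatus -sum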

4.2.  HostMonitor daemon
    VCS also starts the HostMonitor daemon when the VCS engine comes up. The VCS engine creates a VCS resource VCShm of type HostMonitor and a VCShmg service group. The VCS engine does not add these objects to the main.cf file. Do not modify or delete these components of VCS. VCS uses the HostMonitor daemon to monitor the resource utilization of CPU and swap. VCS reports to the engine log if the resources cross the threshold limits that are defined for the resources.

4.3.  Group Membership Services/Atomic Broadcast (GAB)
    The Group Membership Services/Atomic Broadcast protocol (GAB) is responsible for cluster membership and cluster communications.

    * Cluster Membership
    GAB maintains cluster membership by receiving input on the status of the heartbeat from each node via LLT. When a system no longer receives heartbeats from a peer, it marks the peer as DOWN and excludes the peer from the cluster. In VCS, memberships are sets of systems participating in the cluster.

    * Cluster Communications
    GAB's second function is reliable cluster communications. GAB provides guaranteed delivery of point-to-point and broadcast messages to all nodes. The VCS engine uses a private IOCTL (provided by GAB) to tell GAB that it is alive.

4.4. Low Latency Transport (LLT)
    VCS uses private network communications between cluster nodes for cluster maintenance. Symantec recommends two independent networks between all cluster nodes. These networks provide the required redundancy in the communication path and enable VCS to discriminate between a network failure and a system failure. LLT has two major functions.

    * Traffic Distribution
    LLT distributes (load balances) internode communication across all available private network links. This distribution means that all cluster communications are evenly distributed across all private network links (maximum eight) for performance and fault resilience. If a link fails, traffic is redirected to the remaining links.

    * Heartbeat
    LLT is responsible for sending and receiving heartbeat traffic over network links. The Group Membership Services function of GAB uses this heartbeat to determine cluster membership.
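    For reference, the state of the LLT links and the GAB port memberships can typically be checked with:

    # lltstat -nvv   <== shows the status of the private links to each node
    # gabconfig -a   <== shows GAB port memberships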

4.5. I/O fencing module
    The I/O fencing module implements a quorum-type functionality to ensure that only one cluster survives a split of the private network. I/O fencing also provides the ability to perform SCSI-3 persistent reservations on failover. The shared disk groups offer complete protection against data corruption by nodes that are assumed to be excluded from cluster membership.

5. VCS Configuration files.

    5.1. main.cf
    /etc/VRTSvcs/conf/config/main.cf is the key file in terms of VCS configuration. The main.cf file basically provides the following information to the VCS agents and daemons (a minimal sample follows the list below):
      What are the Nodes available in the Cluster?
      What are the Service Groups Configured for each node?
      What are the resources available in each service group, the types of those resources, and their attributes?
      What dependencies does each resource have on other resources?
      What dependencies does each service group have on other service groups?
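    A minimal, purely illustrative main.cf might look like the following (the cluster, system, service group, and resource names are made up; real configurations are normally built and modified through VCS commands rather than by hand):

    include "types.cf"
    cluster demo_clus ( )
    system node1 ( )
    system node2 ( )
    group app_sg (
        SystemList = { node1 = 0, node2 = 1 }
        AutoStartList = { node1 }
        )
        IP app_ip (
            Device = ce0
            Address = "192.168.10.50"
            NetMask = "255.255.255.0"
            )
        Mount app_mnt (
            MountPoint = "/apps"
            BlockDevice = "/dev/vx/dsk/appdg/appvol"
            FSType = vxfs
            FsckOpt = "-y"
            )
        app_ip requires app_mnt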

     5.2. types.cf

    The file types.cf, which is listed in the include statement in the main.cf file, defines the VCS bundled types for VCS resources. The file types.cf is also located in the folder /etc/VRTSvcs/conf/config.

    5.3. Other Important files
        /etc/llthosts—lists all the nodes in the cluster
        /etc/llttab—describes the local system’s private network links to the other nodes in the cluster
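        Typical contents of these two files, shown here only as a sketch with made-up node names, cluster ID, and interfaces, look something like:

        /etc/llthosts:
        0 node1
        1 node2

        /etc/llttab:
        set-node node1
        set-cluster 101
        link ce1 /dev/ce:1 - ether - -
        link ce2 /dev/ce:2 - ether - -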

Wednesday 5 October 2011

VERITAS Volume Manager for Solaris

Veritas Volume Manager is a storage management application from Symantec which allows you to manage physical disks as logical devices called volumes.

VxVM uses two types of objects to perform storage management:
1. Physical objects - direct mappings to physical disks
2. Virtual objects - volumes, plexes, subdisks, and disk groups.

a. Disk groups are composed of Volumes
b. Volumes are composed of Plexes and Subdisks
c. Plexes are composed of SubDisks
d. Subdisks are actual disk space segments of VxVM disk  ( directly mapped from the physical disks)

1. Physical Disks
A physical disk is the basic storage where the data is ultimately stored. In Solaris, physical disk names use the convention c#t#d#, where c# refers to the controller/adapter connection, t# refers to the SCSI target ID, and d# refers to the disk device ID.

Physical disks can come from different sources within the server environment, e.g. disks internal to the server, disks from a disk array, and disks from the SAN.

Check if the disks are recognized by Solaris

#echo|format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
/sbus@1f,0/SUNW,fas@e,8800000/sd@0,0
1. c0t1d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
/sbus@1f,0/SUNW,fas@e,8800000/sd@1,0
 
2. Solaris Native Disk Partitioning

In Solaris, physical disks are partitioned into slices numbered s0, s1, s3, s4, s5, s6, and s7; slice s2, normally called the overlap slice, refers to the entire disk. In Solaris we use the format utility to partition physical disks into slices.

Once new disks are added to the server, we should first make sure Solaris recognizes them before proceeding with any other storage management utility.

Steps to add a new disk to Solaris:
If disks recently added to the server are not visible, you can use one of the procedures below.
 
Option 1: Reconfiguration reboot (for server hardware models that do not support hot swapping/dynamic addition of disks)

# touch /reconfigure; init 6

or

# reboot -- -r (only if no applications are running on the machine)

Option 2: Recognize  the disks added to external SCSI, without reboot

# devfsadm

# echo | format <== to check the newly added disks

Option 3: Recognize disks added to internal SCSI, hot-swappable disk connections.

Just run the command “cfgadm -al” and check for any newly added devices in “unconfigured” state, and configure them.

# cfgadm -al
Ap_Id               Type         Receptacle   Occupant      Condition
c0                  scsi-bus     connected    configured    unknown
c0::dsk/c0t0d0      disk         connected    configured    unknown
c0::rmt/0           tape         connected    configured    unknown
c1                  scsi-bus     connected    configured    unknown
c1::dsk/c1t0d0      unavailable  connected    unconfigured  unknown <== disk not configured
c1::dsk/c1t1d0      unavailable  connected    unconfigured  unknown <== disk not configured

# cfgadm -c configure c1::dsk/c1t0d0

# cfgadm -c configure c1::dsk/c1t1d0

# cfgadm -al
Ap_Id               Type         Receptacle   Occupant      Condition
c0                  scsi-bus     connected    configured    unknown
c0::dsk/c0t0d0      disk         connected    configured    unknown
c0::rmt/0           tape         connected    configured    unknown
c1                  scsi-bus     connected    configured    unknown
c1::dsk/c1t0d0      disk         connected    configured    unknown <== disk configured now
c1::dsk/c1t1d0      disk         connected    configured    unknown <== disk configured now

# devfsadm

#echo|format <== now you should see all the disks connected to the server


3. Initialize Physical Disks under VxVM control


A formatted physical disk is considered uninitialized until it is initialized for use by VxVM. When a disk is initialized, partitions for the public and private regions are created, VM disk header information is written to the private region, and actual data is written to the public region. During the normal initialization process, any data or partitions that may have existed on the disk are removed.

Note: Encapsulation is another method of placing a disk under VxVM control in which existing data on the disk is preserved

An initialized disk is placed into the VxVM free disk pool. The VxVM free disk pool contains disks that have been initialized but that have not yet been assigned to a disk group. These disks are under Volume Manager control but cannot be used by Volume Manager until they are added to a disk group

Device Naming Schemes
In VxVM, device names can be represented in two ways:

    Using the traditional operating system-dependent format c#t#d#
    Using an operating system-independent format that is based on enclosure names

c#t#d# Naming Scheme
Traditionally, device names in VxVM have been represented in the way that the operating system represents them. For example, Solaris and HP-UX both use the format c#t#d# in device naming, which is derived from the controller, target, and disk number. In VxVM version 3.1.1 and earlier, all disks are named using the c#t#d# format. VxVM parses disk names in this format to retrieve connectivity information for disks.

Enclosure-Based Naming Scheme
With VxVM version 3.2 and later, VxVM provides a new device naming scheme, called enclosure-based naming. With enclosure-based naming, the name of a disk is based on the logical name of the enclosure, or disk array, in which the disk resides.

Steps to Recognize new disks under VxVM control
1. Run the command below to see the available disks under VxVM control

# vxdisk list
In the output you will see one of the statuses below:

    error indicates that the disk has neither been initialized nor encapsulated by VxVM. The disk is uninitialized.
    online indicates that the drive has been initialized or encapsulated.
    online invalid indicates that the disk is visible to VxVM but not controlled by VxVM
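    For illustration only, the output on a hypothetical system might look like this:

    # vxdisk list
    DEVICE       TYPE            DISK         GROUP        STATUS
    c0t0d0s2     auto:none       -            -            online invalid
    c1t0d0s2     auto:cdsdisk    datadg01     datadg       online
    c1t1d0s2     auto:none       -            -            online invalid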

If disks are visible with the format command but not visible with the "vxdisk list" command, run the command below to scan the new disks for VxVM:

# vxdctl enable

Now you should see new disks with the status of “Online Invalid“

2. Initialize each disk with “vxdisksetup” command

#/etc/vx/bin/vxdisksetup -i <disk_address>

After running this command, "vxdisk list" should show the status as "online" for all the newly initialized disks.

4. Virtual Objects (Disk Groups / Volumes / Plexes) in VxVM

Disk Groups
A disk group is a collection of VxVM disks (going forward we will call them VM disks) that share a common configuration. Disk groups let you group disks whose subdisks are combined into plexes, which in turn form the volumes.

Volumes
A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk device, but does not have the physical limitations of a physical disk device. A volume consists of one or more plexes, each holding a copy of the selected data in the volume.

Plexes:
VxVM uses subdisks to create virtual objects called plexes. A plex consists of one or more subdisks located on one or more physical disks.



Key Points on Transformation of Physical disks into Veritas Volumes
1. Recognize disks under Solaris using devfsadm, cfgadm, or a reconfiguration reboot, and verify using the format command
2. Recognize the disks under VxVM using “vxdctl enable“
3. Initialize the disks under VxVM using vxdisksetup
4. Add the disks to Veritas Disk Group using vxdg commands
5. Create Volumes under Disk Group using vxmake or vxassist commands
6. Create a file system on top of the volumes using mkfs or newfs; you can create either a VxFS or a UFS file system
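As a rough end-to-end command sketch of these steps (the disk, disk group, volume, and mount point names are hypothetical), the sequence might look like:

# devfsadm
# vxdctl enable
# /etc/vx/bin/vxdisksetup -i c1t1d0
# vxdg init datadg datadg01=c1t1d0
# vxassist -g datadg make datavol 1g
# mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
# mount -F vxfs /dev/vx/dsk/datadg/datavol /data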