Device Mapper Multipathing
(DM-Multipath) allows you to configure multiple I/O paths between server nodes
and storage arrays into a single device. These I/O paths are physical SAN
connections that can include separate cables, switches, and controllers. Multipathing
aggregates the I/O paths, creating a new device that consists of the aggregated
paths.
Overview of DM-Multipath
DM-Multipath can be used to provide:
Redundancy
DM-Multipath can provide failover in an
active/passive configuration. In an active/passive configuration, only half the
paths are used at any time for I/O. If any element of an I/O path (the cable,
switch, or controller) fails, DM-Multipath switches to an alternate path.
Improved Performance
DM-Multipath can be configured in
active/active mode, where I/O is spread over the paths in a round-robin
fashion. In some configurations, DM-Multipath can detect loading on the I/O
paths and dynamically re-balance the load.
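For example, to request an active/active layout you can set the path grouping policy in the defaults section of /etc/multipath.conf. This is a minimal sketch, assuming your array supports active/active use; multibus places all paths in a single priority group so I/O is spread across them:
defaults {
        path_grouping_policy multibus
}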
DM-Multipath Setup Overview
DM-Multipath includes compiled-in default
settings that are suitable for common multipath configurations. Setting up
DM-multipath is often a simple procedure.
The basic procedure for configuring your
system with DM-Multipath is as follows (a sample command sequence appears after the list):
1. Install the device-mapper-multipath RPM.
2. Edit the multipath.conf configuration file:
- comment out the default blacklist
- change any of the existing defaults as needed
- save the configuration file
3. Start the multipath daemons.
4. Create the multipath device with the multipath command.
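The following is a minimal sketch of this procedure on a Red Hat-style system. The yum invocation is an assumption about how the package is installed; the remaining commands are the ones used later in this document:
# yum install device-mapper-multipath
# vi /etc/multipath.conf       # comment out the default blacklist; adjust defaults as needed
# modprobe dm-multipath
# service multipathd start
# multipath -v2                # creates the multipath devices and prints them
# chkconfig multipathd on      # ensure multipathd starts on boot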
Multipath Device Identifiers
Each multipath device has a World Wide
Identifier (WWID), which is guaranteed to be globally unique and unchanging. By
default, the name of a multipath device is set to its WWID. Alternatively, you
can set the user_friendly_names option in the multipath configuration file,
which sets the alias to a node-unique name of the form mpathn.
For example, a node with two HBAs attached to
a storage controller with two ports via a single unzoned FC switch sees four
devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. DM-Multipath creates a
single device with a unique WWID that reroutes I/O to those four underlying
devices according to the multipath configuration. When the user_friendly_names
configuration option is set to yes, the name of the multipath device is set to
mpathn.
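For example, enabling user-friendly names is a one-line setting in the defaults section:
defaults {
        user_friendly_names yes
}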
When new devices are brought under the control
of DM-Multipath, the new devices may be seen in three different places under
the /dev directory: /dev/mapper/mpathn, /dev/mpath/mpathn, and /dev/dm-n.
- The devices in /dev/mapper are created early in the boot process. Use these devices to access the multipathed devices, for example when creating logical volumes.
- The devices in /dev/mpath are provided as a convenience so that all multipathed devices can be seen in one directory. These devices are created by the udev device manager and may not be available on startup when the system needs to access them. Do not use these devices for creating logical volumes or filesystems.
- Any devices of the form /dev/dm- n are for internal use only and should never be used.
You can also set the name of a
multipath device to a name of your choosing by using the alias option in the
multipaths section of the multipath configuration file.
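A sketch of such an entry, reusing a WWID that appears in the sample output later in this document and a hypothetical alias:
multipaths {
        multipath {
                wwid  3600d0230003228bc000339414edb8101
                alias yellow
        }
}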
Consistent Multipath Device Names
in a Cluster
When the user_friendly_names configuration
option is set to yes, the name of the multipath device is unique to a node, but
it is not guaranteed to be the same on all nodes using the multipath device.
Similarly, if you set the alias option for a device in the multipaths section
of the multipath.conf configuration file, the name is not automatically
consistent across all nodes in the cluster. This should not cause any
difficulties if you use LVM to create logical devices from the multipath
device, but if you require that your multipath device names be consistent on
every node, it is recommended that you not set the user_friendly_names option to
yes and that you not configure aliases for the devices. By default, if you do
not set user_friendly_names to yes or configure an alias for a device, the device
name will be the WWID for the device, which is always the same.
If you want the system-defined user-friendly
names to be consistent across all nodes in the cluster, however, you can follow
this procedure:
1. Set up all of the multipath devices on one
machine.
2. Disable all of your multipath devices on your
other machines by running the following commands:
# service multipathd stop
# multipath -F
3. Copy the bindings file from the first machine
to all the other machines in the cluster. By default, the location of this file
is /var/lib/multipath/bindings. If /var is a separate partition on your system,
however, you should change this location with the bindings_file option in the
defaults section of the multipath.conf configuration file. This file needs to be located on
your root file system partition, for example:
bindings_file "/etc/multipath_bindings"
4. Re-enable the multipathd daemon on all the
other machines in the cluster by running the following command:
# service multipathd start
If you add a new device, you will need to
repeat this process.
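The copy in step 3 can be done with any file-transfer tool, for example with scp (the hostname here is hypothetical):
# scp /var/lib/multipath/bindings node2:/var/lib/multipath/bindings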
Similarly, if you configure an alias for a
device that you would like to be consistent across the nodes in the cluster,
you should ensure that the /etc/multipath.conf file is the same for each node
in the cluster by following the same procedure:
1. Configure the aliases for the multipath
devices in the multipath.conf file on one machine.
2. Disable all of your multipath devices on your
other machines by running the following commands:
# service multipathd stop
# multipath -F
3. Copy the /etc/multipath.conf file from the first machine to all the
other machines in the cluster.
4. Re-enable the multipathd daemon on all the other machines by running
the following command:
# service multipathd start
When you add a new device you will need to
repeat this process.
Multipath Devices in Logical
Volumes
After creating multipath devices, you can use
the multipath device names just as you would use a physical device name when
creating an LVM physical volume. For example, if /dev/mapper/mpath0 is the name
of a multipath device, the following command will mark /dev/mapper/mpath0 as a
physical volume.
pvcreate /dev/mapper/mpath0
You can use the resulting LVM physical device
when you create an LVM volume group just as you would use any other LVM
physical device.
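For example, the following commands build a volume group and a logical volume on the multipath device (the volume group and logical volume names are hypothetical):
# vgcreate mpathvg /dev/mapper/mpath0
# lvcreate -L 5G -n mpathlv mpathvg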
Note
If you attempt to create an LVM physical
volume on a whole device on which you have configured partitions, the pvcreate
command will fail. Note that the Anaconda and Kickstart installation programs
create empty partition tables on every block device if you do not specify
otherwise. If you wish to use the whole device rather than a partition, you must
remove the existing partitions from the device. You can remove existing
partitions with the kpartx -d and the fdisk commands. If your system has block
devices that are greater than 2 TB, you can use the parted command to remove
partitions.
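A sketch of clearing a multipath device before running pvcreate (the device name is hypothetical; this destroys the partition table, so be certain you have the right device):
# kpartx -d /dev/mapper/mpath0    # remove the partition device-mapper mappings
# fdisk /dev/mapper/mpath0        # delete the partitions (d) and write the table (w)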
When you create an LVM logical volume that
uses active/passive multipath arrays as the underlying physical devices, you
should include filters in the lvm.conf to exclude the disks that underlie the
multipath devices. This is because if the array automatically changes the
active path to the passive path when it receives I/O, multipath will failover
and failback whenever LVM scans the passive path if these devices are not filtered.
For active/passive arrays that require a command to make the passive path
active, LVM prints a warning message when this occurs.
To filter all SCSI devices in the LVM
configuration file (lvm.conf), include the following filter in the devices section
of the file. The first two patterns reject device paths containing "disk" and all
/dev/sd device nodes; the final pattern accepts everything else.
filter = [ "r/disk/", "r/sd.*/", "a/.*/" ]
Setting Up DM-Multipath
Before setting up DM-Multipath on your system,
ensure that your system has been updated and includes the
device-mapper-multipath package.
Use the following procedure to set up
DM-Multipath for a basic failover configuration.
1. Edit the /etc/multipath.conf file by commenting
out the following lines at the top of the file. This section of the
configuration file, in its initial state, blacklists all devices. You must
comment it out to enable multipathing.
blacklist {
devnode "*"
}
After commenting out those lines, this section
appears as follows.
# blacklist {
# devnode "*"
# }
2. The default settings for DM-Multipath are
compiled in to the system and do not need to be explicitly set in the
/etc/multipath.conf file.
The default value of path_grouping_policy is
set to failover, so in this example you do not need to change the default
value. For information on changing the values in the configuration file to
something other than the defaults, see the Configuration File Overview section
later in this document.
The initial defaults section of the
configuration file configures your system so that the names of the multipath
devices are of the form mpathn; without this setting, the names of the
multipath devices would be aliased to the WWIDs of the devices.
3. Save the configuration file and exit the editor.
4. Execute the following commands:
modprobe dm-multipath
service multipathd start
multipath -v2
The multipath -v2 command prints out
multipathed paths that show which devices are multipathed. If the command does
not print anything out, ensure that all SAN connections are set up properly and
that the system is multipathed.
5. Execute the following command to ensure that the multipath daemon starts on bootup:
chkconfig multipathd on
Since the value of user_friendly_names is set
to yes in the configuration file, the multipath devices will be created as
/dev/mapper/mpathn.
Ignoring Local Disks when Generating Multipath Devices
Some machines have local SCSI cards for their internal disks. DM-Multipath is not recommended for these devices. The following procedure shows how to modify the multipath configuration file to ignore the local disks when configuring multipath.
1. Determine which disks are the internal disks and mark them for
exclusion. In this example, /dev/sda is the internal
disk. Note that as originally configured in the default multipath configuration
file, executing multipath -v2 shows the local disk, /dev/sda, in the
multipath map.
[root@rh4cluster1 ~]# multipath -v2
create: SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1
[size=33 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 0:0:0:0 sda 8:0 [---------

device-mapper ioctl cmd 9 failed: Invalid argument
device-mapper ioctl cmd 14 failed: No such device or address
create: 3600a0b80001327d80000006d43621677
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:0 sdb 8:16
  \_ 3:0:0:0 sdf 8:80
create: 3600a0b80001327510000009a436215ec
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:1 sdc 8:32
  \_ 3:0:0:1 sdg 8:96
create: 3600a0b80001327d800000070436216b3
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:2 sdd 8:48
  \_ 3:0:0:2 sdh 8:112
create: 3600a0b80001327510000009b4362163e
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:3 sde 8:64
  \_ 3:0:0:3 sdi 8:128
2. In order to prevent the device mapper from
mapping /dev/sda in its multipath maps, edit the blacklist section of the
/etc/multipath.conf file to include this device. Although you could blacklist
the sda device using a devnode type, that would not be a safe procedure since
/dev/sda is not guaranteed to be the same on reboot. To blacklist individual
devices, you can blacklist using the WWID of that device.
Note that in the output to the multipath -v2
command, the WWID of the /dev/sda device is
SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1. To blacklist this device, include
the following in the /etc/multipath.conf file.
blacklist {
      wwid SIBM-ESXSST336732LC____F3ET0EP0Q000072428BX1
}
3. After you have updated the /etc/multipath.conf
file, you must manually tell the multipathd daemon to reload the file. The
following command reloads the updated /etc/multipath.conf file.
service multipathd reload
4. Run the following commands to flush and rebuild the multipath maps:
multipath -F
multipath -v2
The local disk or disks should no longer be
listed in the new multipath maps, as shown in the following example.
[root@rh4cluster1 ~]# multipath -F
[root@rh4cluster1 ~]# multipath -v2
create: 3600a0b80001327d80000006d43621677
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:0 sdb 8:16
  \_ 3:0:0:0 sdf 8:80
create: 3600a0b80001327510000009a436215ec
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:1 sdc 8:32
  \_ 3:0:0:1 sdg 8:96
create: 3600a0b80001327d800000070436216b3
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:2 sdd 8:48
  \_ 3:0:0:2 sdh 8:112
create: 3600a0b80001327510000009b4362163e
[size=12 GB][features="0"][hwhandler="0"]
\_ round-robin 0
  \_ 2:0:0:3 sde 8:64
  \_ 3:0:0:3 sdi 8:128
Adding Devices to the
Multipathing Database
By default, DM-Multipath includes support for
the most common storage arrays that support DM-Multipath. The default
configuration values, including supported devices, can be found in the
multipath.conf.defaults file.
If you need to add a storage device that is
not supported by default as a known multipath device, edit the
/etc/multipath.conf file and insert the appropriate device information.
For example, to add information about the HP
Open-V series, the entry looks like this:
devices {
      device {
            vendor "HP"
            product "OPEN-V."
            getuid_callout "/sbin/scsi_id -g -u -p0x80 -s /block/%n"
      }
}
Configuration File Overview
The multipath configuration file is divided
into the following sections:
blacklist
Listing of specific devices that will not be
considered for multipath. By default all devices are blacklisted. Usually the
default blacklist section is commented out.
blacklist_exceptions
Listing of multipath candidates that would
otherwise be blacklisted according to the parameters of the blacklist section.
defaults
General default settings for DM-Multipath.
multipaths
Settings for the characteristics of individual
multipath devices. These values overwrite what is specified in the defaults and
devices sections of the configuration file.
devices
Settings for the individual storage
controllers. These values overwrite what is specified in the defaults section
of the configuration file. If you are using a storage array that is not
supported by default, you may need to create a devices subsection for your
array.
When the system determines the attributes of a
multipath device, it first checks the settings in the multipaths section, then the
per-device settings in the devices section, then the system defaults.
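For example, the following sketch sets a failover default and then overrides it for a single device; the WWID is reused from the sample output in this document:
defaults {
        path_grouping_policy failover
}
multipaths {
        multipath {
                wwid 3600d0230003228bc000339414edb8101
                path_grouping_policy multibus
        }
}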
Troubleshooting with the multipathd Interactive Console
The multipathd -k command brings up an interactive console for the multipathd
daemon. From the console prompt you can enter help to see a list of available
commands, or CTRL-D to quit.
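For example (a minimal sketch; the exact prompt and output vary by version):
# multipathd -k
multipathd> help
multipathd> show paths
multipathd> CTRL-D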
Resizing an Online Multipath
Device
If you need to resize an online multipath
device, use the following procedure.
1. Resize your physical device on the storage array.
2. Determine the paths of the multipath device:
# multipath -l
3. Rescan each of those paths, replacing device_name with each path device in turn:
# echo 1 > /sys/block/device_name/device/rescan
4. Resize the multipath map:
# multipathd -k'resize map mpath0'
5. Resize the file system (assuming the device is used directly, without LVM or partitions):
# resize2fs /dev/mapper/mpath0
Multipath Command Output
When you create, modify, or list a multipath
device, you get a printout of the current device setup. The format is as
follows.
For each multipath device:
action_if_any: alias (wwid_if_different_from_alias) [size][features][hardware_handler]
For each path group:
\_ scheduling_policy [path_group_priority_if_known] [path_group_status_if_known]
For each path:
\_ host:channel:id:lun devnode major:minor [path_status] [dm_status_if_known]
For example, the output of a multipath command
might appear as follows:
mpath1 (3600d0230003228bc000339414edb8101)
[size=10 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [prio=1][active]
  \_ 2:0:0:6 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][enabled]
  \_ 3:0:0:6 sdc 8:64 [active][ready]
If the path is up and ready for I/O, the
status of the path is ready or active. If the path is down, the status is
faulty or failed. The path status is updated periodically by the multipathd
daemon based on the polling interval defined in the /etc/multipath.conf file.
The dm status is similar to the path status,
but from the kernel's point of view. The dm status has two states: failed,
which is analogous to faulty, and active, which covers all other path states.
Occasionally, the path state and the dm state of a device will temporarily not
agree.
Note
When a multipath device is being created or
modified, the path group status and the dm status are not known. Also, the
features are not always correct. When a multipath device is being listed, the
path group priority is not known.
Multipath Queries with multipath Command
You can use the -l and -ll options of the
multipath command to display the current multipath configuration. The -l option
displays multipath topology gathered from information in sysfs and the device
mapper. The -ll option displays the information the -l displays in addition to
all other available components of the system.
When displaying the multipath configuration,
there are three verbosity levels you can specify with the -v option of the
multipath command. Specifying -v0 yields no output. Specifying -v1 outputs the
created or updated multipath names only, which you can then feed to other tools
such as kpartx. Specifying -v2 prints all detected paths, multipaths, and
device maps.
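For example, a sketch of feeding the -v1 output to kpartx to create partition mappings (the map name matches the example below):
# multipath -v1
mpath1
# kpartx -a /dev/mapper/mpath1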
The following example shows the output of a
multipath -l command.
# multipath -l
mpath1 (3600d0230003228bc000339414edb8101)
[size=10 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [prio=1][active]
  \_ 2:0:0:6 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][enabled]
  \_ 3:0:0:6 sdc 8:64 [active][ready]
Multipath Command Options
Table 5.1. Useful multipath Command Options

Option      Description
-l          Display the current multipath configuration gathered from sysfs and the device mapper.
-ll         Display the current multipath configuration gathered from sysfs, the device mapper, and all other available components on the system.
-f device   Remove the named multipath device.
-F          Remove all unused multipath devices.
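For example, to remove a single map and then all unused maps (the map name is hypothetical):
# multipath -f mpath1
# multipath -F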
Determining Device Mapper
Entries with the dmsetup Command
You can use the dmsetup command to find out
which device mapper entries match the multipathed devices.
The following command displays all the device
mapper devices and their major and minor numbers. The minor numbers determine
the name of the dm device. For example, a minor number of 3 corresponds to the
multipathed device /dev/dm-3.
# dmsetup ls
mpath2 (253, 4)
mpath4p1 (253, 12)
mpath5p1 (253, 11)
mpath1 (253, 3)
mpath6p1 (253, 14)
mpath7p1 (253, 13)
mpath0 (253, 2)
mpath7 (253, 9)
mpath6 (253, 8)
VolGroup00-LogVol01 (253, 1)
mpath5 (253, 7)
VolGroup00-LogVol00 (253, 0)
mpath4 (253, 6)
mpath1p1 (253, 10)
mpath3 (253, 5)
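To confirm the details of a given entry, you can inspect it with dmsetup info; a sketch with abbreviated output:
# dmsetup info mpath1
Name:              mpath1
State:             ACTIVE
...
Major, minor:      253, 3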