iSCSI Usage and Basic Configuration

iSCSI (Internet Small Computer System Interface) is an implementation of a SAN (Storage Area Network).

Environment:

OS: CentOS 6.7
node1: 10.11.8.187 (target)
node2: 10.11.8.186 (initiator)
node3: 10.11.8.200 (initiator)

Installation:

target (the server side, which provides the storage): scsi-target-utils

initiator (the client side, which uses the storage): iscsi-initiator-utils

On the target side:

Service script: /etc/init.d/tgtd

Management command: tgtadm

tgtadm is a mode-based command:
-m, --mode [target|logicalunit|account]

  • target --op [new|delete|show|update|bind|unbind]

  • logicalunit --op [new|delete]

  • account --op [new|delete|bind|unbind]

-L, --lld <driver>
-t, --tid <id>
-l, --lun <lun>
-b, --backing-store <path>
-I, --initiator-address <address>
-T, --targetname <targetname>
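The `account` mode listed above manages CHAP accounts on the target side, which the walkthrough below does not demonstrate. A minimal sketch, assuming a hypothetical user `shiina` and password `secret`; `TGTADM` defaults to an `echo` stub so the commands are only printed, and should be set to the real `tgtadm` on an actual target:

```shell
# CHAP account on the target side (sketch; user/password are placeholders).
# TGTADM is an echo stub here; set TGTADM=tgtadm on a real target.
TGTADM="${TGTADM:-echo tgtadm}"
$TGTADM --lld iscsi --mode account --op new --user shiina --password secret
$TGTADM --lld iscsi --mode account --op bind --tid 1 --user shiina
```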

On the initiator side:

Service script: /etc/init.d/iscsid

Management command: iscsiadm

iscsiadm is also a mode-based command:
-m {discovery|node|session|iface}

  • discovery: find out whether a server exports any targets, and which ones;

  • node: manage the association with a given target;

  • session: session management

  • iface: interface management

iscsiadm -m discovery [ -d debug_level ] [ -P printlevel ] [ -I iface -t type -p ip:port [ -l ] ]

  • -d: 0-8

  • -I: Network interface

  • -t type: SendTargets(st), SLP, and iSNS

  • -p: IP:port

To operate on all node records:
iscsiadm -m node [ -d debug_level ] [ -L all,manual,automatic ] | [ -U all,manual,automatic ]
To operate on a single node record:
iscsiadm -m node [ -d debug_level ] [ [ -T targetname -p ip:port -I ifaceN ] [ -l | -u ] ] [ [ -o operation ] [ -n name ] [ -v value ] ]
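The `-o operation`, `-n name`, and `-v value` options edit fields of a stored node record; a common use is switching a node to automatic login at boot. A sketch, using the target name from later in this article; `ISCSIADM` defaults to an `echo` stub so it only prints the command and can be run anywhere:

```shell
# Switch a node record to automatic login (sketch).
# ISCSIADM is an echo stub here; set ISCSIADM=iscsiadm on a real initiator.
ISCSIADM="${ISCSIADM:-echo iscsiadm}"
$ISCSIADM -m node -T iqn.2016.com.shiina:storage.disk1 -p 10.11.8.187 \
    -o update -n node.startup -v automatic
```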

Notes on iscsi-initiator-utils:
it does not support discovery authentication;
if user-based (CHAP) authentication is used, IP-based access must be granted first.
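On the initiator side, CHAP credentials are configured in /etc/iscsi/iscsid.conf before discovery and login. A fragment of the relevant settings (the username and password are placeholders and must match the account created with `tgtadm --mode account` on the target):

```
# /etc/iscsi/iscsid.conf -- CHAP settings (placeholder credentials)
node.session.auth.authmethod = CHAP
node.session.auth.username = shiina
node.session.auth.password = secret
```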

Configuring the target service on node1:

1. Create a target (the controller)

[root@node1 ~]# tgtadm --lld iscsi --mode target --op new --targetname iqn.2016.com.shiina:storage.disk1 --tid 1
[root@node1 ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2016.com.shiina:storage.disk1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
    Account information:
    ACL information:

2. Add a storage device (LUN)

[root@node1 ~]# tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/sdb
[root@node1 ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2016.com.shiina:storage.disk1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 8590 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
    ACL information:

3. Add access control

[root@node1 ~]# tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address 10.11.0.0/16
[root@node1 ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2016.com.shiina:storage.disk1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 8590 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
    ACL information:
        10.11.0.0/16

All changes made with tgtadm are kept only in the kernel, so they are lost after the server reboots. To have the configuration applied automatically, write it into /etc/tgt/targets.conf; the simple setup above can be defined in the file like this:

<target iqn.2016.com.shiina:storage.disk1>
    backing-store /dev/sdb
    initiator-address 10.11.0.0/16
</target>

The configuration file itself contains detailed examples.

Configuring the initiator on node2:

The iscsi-iname command generates an initiator name

-p: specify the name prefix

1. Generate the InitiatorName

[root@node2 ~]# echo "InitiatorName=`iscsi-iname -p iqn.2016.com.shiina`" > /etc/iscsi/initiatorname.iscsi
[root@node2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016.com.shiina:60362cc811e4

2. Discover the targets:

[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p 10.11.8.187
10.11.8.187:3260,1 iqn.2016.com.shiina:storage.disk1
[root@node2 ~]# ll /var/lib/iscsi/send_targets/10.11.8.187,3260/
total 8
lrwxrwxrwx 1 root root  73 May 28 20:44 iqn.2016.com.shiina:storage.disk1,10.11.8.187,3260,1,default -> /var/lib/iscsi/nodes/iqn.2016.com.shiina:storage.disk1/10.11.8.187,3260,1
-rw------- 1 root root 554 May 28 20:44 st_config

3. Log in: this attaches the LUNs behind the remote target to the local host

[root@node2 ~]# iscsiadm -m node -T iqn.2016.com.shiina:storage.disk1 -p 10.11.8.187 -l
Logging in to [iface: default, target: iqn.2016.com.shiina:storage.disk1, portal: 10.11.8.187,3260] (multiple)
Login to [iface: default, target: iqn.2016.com.shiina:storage.disk1, portal: 10.11.8.187,3260] successful.
[root@node2 ~]# fdisk -l
 
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000bfe1
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          26      204800   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              26         157     1048576   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             157        2611    19717120   8e  Linux LVM
 
Disk /dev/mapper/vol0-root: 5242 MB, 5242880000 bytes
255 heads, 63 sectors/track, 637 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
 
 
Disk /dev/mapper/vol0-usr: 14.9 GB, 14944305152 bytes
255 heads, 63 sectors/track, 1816 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
 
 
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfa870733
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1044     8385898+  83  Linux
PS: sdb1 is a partition that already existed on node1's disk but has not yet been formatted

4. Format the partition and write some data

[root@node2 ~]# mke2fs -j /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
524288 inodes, 2096474 blocks
104823 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
 
Writing inode tables: done                          
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
 
This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@node2 ~]# mount /dev/sdb1 /mnt
[root@node2 ~]# cp /etc/fstab /mnt/

Configuring the initiator on node3 and logging in:

[root@node3 ~]# iscsiadm -m discovery -t sendtargets -p 10.11.8.187
Starting iscsid:                                           [  OK  ]
10.11.8.187:3260,1 iqn.2016.com.shiina:storage.disk1
[root@node3 ~]# iscsiadm -m node -T iqn.2016.com.shiina:storage.disk1 -p 10.11.8.187 -l
Logging in to [iface: default, target: iqn.2016.com.shiina:storage.disk1, portal: 10.11.8.187,3260] (multiple)
Login to [iface: default, target: iqn.2016.com.shiina:storage.disk1, portal: 10.11.8.187,3260] successful.
[root@node3 ~]# mount /dev/sdb1 /mnt
[root@node3 ~]# ls /mnt/
fstab  lost+found # the fstab file is present at this point

With node2 and node3 both mounting the device, write data from node3:

[root@node3 ~]# cp /etc/inittab /mnt/
[root@node3 ~]# ls /mnt/
fstab  inittab  lost+found
[root@node2 ~]# ls /mnt/
fstab  lost+found

Checking node2, however, shows no inittab file: the data written on node3 is still sitting in memory and has not been synced to disk, and node2's cached view of the filesystem is stale.

Mounting the same filesystem on multiple hosts like this can easily corrupt it, so only one host should mount it at a time; if simultaneous mounting is required, a cluster filesystem (such as GFS2 or OCFS2) must be used.
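To detach an initiator cleanly, unmount the filesystem first, then log out of the session and optionally delete the stored node record. A sketch using the target from this article; `ISCSIADM` defaults to an `echo` stub so the commands are only printed:

```shell
# Initiator-side teardown (sketch). ISCSIADM is an echo stub here;
# set ISCSIADM=iscsiadm on a real initiator, and run `umount /mnt` first.
ISCSIADM="${ISCSIADM:-echo iscsiadm}"
TARGET=iqn.2016.com.shiina:storage.disk1
PORTAL=10.11.8.187
$ISCSIADM -m node -T "$TARGET" -p "$PORTAL" -u         # log out of the session
$ISCSIADM -m node -T "$TARGET" -p "$PORTAL" -o delete  # forget the node record
```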

    Original author: shiina
    Original article: https://segmentfault.com/a/1190000005668342