The actual environment: an iSCSI target server built on OpenSolaris,
then accessed from RHEL to see whether the client can reach it.
Few operating systems support ZFS broadly, but Linux can use it (via FUSE), so Solaris ZFS is the main filesystem here.
Server IP : 172.16.43.144 (OpenSolaris, SunOS 5.10, on VMware)
Client IP : 172.16.43.99 (RHEL 5.4 as dom0; this version is recommended because its kernel happens to have FUSE support)
With the server powered off, first add a 20 GB HDD to it. After adding it, the command below
shows a new entry (the new HDD, c1t1d0).
1. Find the new HDD's device name
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@0,0/pci15ad,1976@10/sd@0,0
1. c1t1d0
/pci@0,0/pci15ad,1976@10/sd@1,0
Specify disk (enter its number):
2. Create a ZFS storage pool
The spirit of ZFS is to build a storage pool first, then carve the shared LUNs out of that pool. (Note: nothing is formatted anywhere in this walkthrough.)
# zpool create iscsi-1 c1t1d0
Confirm the result:
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
iscsi-1 19.9G 194K 19.9G 0% ONLINE -
Create a ZFS dataset:
# zfs create iscsi-1/public
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
iscsi-1 12.0G 7.56G 25.5K /iscsi-1
iscsi-1/public 12.0G 7.56G 24.5K /iscsi-1/public
3. Build the target server
Share the ZFS dataset created above over iSCSI.
(1) Share at the dataset level: with shareiscsi set on the dataset, every LUN created under it is automatically shared over iSCSI.
# zfs set shareiscsi=on iscsi-1/public
Create the first LUN:
# zfs create -V 2g iscsi-1/public/sean
Create the second LUN:
# zfs create -V 10g iscsi-1/public/share
Check the result:
# iscsitadm list target
Target: iscsi-1/public/sean
iSCSI Name: iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c
Connections: 0
Target: iscsi-1/public/share
iSCSI Name: iqn.1986-03.com.sun:02:15a4b01a-9fdf-4a42-ef74-e82a9cc81397
Connections: 0
(2) Share per LUN: with this method, each LUN has to be shared individually after it is created.
# zfs create -V 2g iscsi-1/public/sean
# zfs set shareiscsi=on iscsi-1/public/sean
With sharing configured, the next step is to set up the iSCSI portal that is offered to clients:
# iscsitadm create tpgt 1
Here, 1 is the target portal group tag (TPGT).
Bind an IP to target portal group 1 (all incoming connections go through 172.16.43.144, the IP of this NIC):
# iscsitadm modify tpgt -i 172.16.43.144 1
For multiple NICs / multiple IPs (multipathing), just repeat the command above once per IP.
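For example, a minimal sketch of binding a second portal IP to the same TPGT (172.16.43.145 is a hypothetical address on a second NIC):
# iscsitadm modify tpgt -i 172.16.43.145 1
With two IPs bound, the "IP Address count" shown below should read 2 instead of 1.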
Confirm the configuration:
# iscsitadm list tpgt
TPGT: 1
IP Address count: 1
4. Verify the result and confirm RHEL can reach it
1) On OpenSolaris, check the target server's share status:
# iscsitadm list target -v
Target: iscsi-1/public/sean
iSCSI Name: iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c
Alias: iscsi-1/public/sean
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 0x0
VID: SUN
PID: SOLARIS
Type: disk
Size: 2.0G
Backing store: /dev/zvol/rdsk/iscsi-1/public/sean
Status: online
Target: iscsi-1/public/share
iSCSI Name: iqn.1986-03.com.sun:02:15a4b01a-9fdf-4a42-ef74-e82a9cc81397
Alias: iscsi-1/public/share
Connections: 0
ACL list:
TPGT list:
LUN information:
LUN: 0
GUID: 0x0
VID: SUN
PID: SOLARIS
Type: disk
Size: 10G
Backing store: /dev/zvol/rdsk/iscsi-1/public/share
Status: online
Confirm the target server is listening on port 3260:
# netstat -an | grep 3260
*.3260 *.* 0 0 49152 0 LISTEN
*.3260 *.* 0 0 49152 0 LISTEN
2) On RHEL, verify with the following steps.
Install the iSCSI initiator package:
[root@localhost ~]# yum install iscsi-initiator-utils.i386 -y
Start the iSCSI client service:
[root@localhost vlc-1.0.0-rc4]# service iscsid start
[root@localhost vlc-1.0.0-rc4]# chkconfig iscsid on
Discover whether 172.16.43.144 really provides iSCSI targets:
[root@localhost vlc-1.0.0-rc4]# iscsiadm --mode discovery --type sendtargets --portal 172.16.43.144
172.16.43.144:3260,1 iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c
172.16.43.144:3260,1 iqn.1986-03.com.sun:02:15a4b01a-9fdf-4a42-ef74-e82a9cc81397
5. Mount ZFS on RHEL
[root@localhost zfs]# ll
total 1468
-rwxrwxrwx 1 root root 1502751 Jun 18 16:27 zfs-fuse-0.6.0-6.el5.i386.rpm
[root@localhost zfs]# rpm -vih zfs-fuse-0.6.0-6.el5.i386.rpm
warning: zfs-fuse-0.6.0-6.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 217521f6
Preparing... ########################################### [100%]
1:zfs-fuse ########################################### [100%]
[root@localhost zfs]#
Note: the "fuse" in zfs-fuse stands for Filesystem in Userspace, meaning the filesystem code runs in user space rather than in the kernel.
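As an optional sanity check (not part of the original steps), confirm the fuse kernel module is available before starting the service:
[root@localhost zfs]# modprobe fuse
[root@localhost zfs]# lsmod | grep fuse
If lsmod prints a fuse line, the kernel side of FUSE is in place.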
Start the zfs-fuse service:
[root@localhost zfs]# service zfs-fuse status
zfs-fuse is stopped
[root@localhost zfs]# service zfs-fuse start
Starting zfs-fuse: [ OK ]
Immunizing zfs-fuse against OOM kills [ OK ]
Mounting zfs partitions: [ OK ]
[root@localhost zfs]# chkconfig zfs-fuse on
[root@localhost zfs]# service zfs-fuse status
zfs-fuse (pid 2387) is running...
no pools available
[root@localhost zfs]#
Log in to the target server (the target IQN was discovered earlier):
[root@localhost zfs]# iscsiadm --mode node --targetname iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c --portal 172.16.43.144 --login
Logging in to [iface: default, target: iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c, portal: 172.16.43.144,3260]
Login to [iface: default, target: iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c, portal: 172.16.43.144,3260]: successful
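As a quick check (optional, not in the original procedure), the active session can be listed with open-iscsi:
[root@localhost zfs]# iscsiadm --mode session
The target IQN and the 172.16.43.144:3260 portal logged in above should appear in the output.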
fdisk will show a disk whose partition table cannot be read:
[root@localhost zfs]# fdisk -l
Disk /dev/sda: 500.1 GB, 500106780160 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 102400 7 HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2 13 13068 104857600 7 HPFS/NTFS
Partition 2 does not end on cylinder boundary.
/dev/sda3 13068 48053 281022464 7 HPFS/NTFS
Partition 3 does not end on cylinder boundary.
/dev/sda4 48054 60801 102392640 5 Extended
Partition 4 does not end on cylinder boundary.
/dev/sda5 48054 48067 105808+ 83 Linux
/dev/sda6 48067 60801 102286768+ 8e Linux LVM
Disk /dev/sdb: 2147 MB, 2147483648 bytes
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Disk /dev/sdb doesn't contain a valid partition table
/dev/sdb shows up but its partition table cannot be read; that is fine, just carry on:
[root@localhost zfs]# zpool status
no pools available
[root@localhost zfs]# zpool list
no pools available
[root@localhost zfs]# zpool create iscsi-clt /dev/sdb
[root@localhost zfs]# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
iscsi-clt 1.98G 75K 1.98G 0% ONLINE -
[root@localhost zfs]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
iscsi-clt 70.5K 1.95G 21K /iscsi-clt
Mount it and create a file to try it out:
[root@localhost zfs]# zfs set mountpoint=/iscsi-mnt iscsi-clt
[root@localhost iscsi-mnt]# zfs mount iscsi-clt
[root@localhost iscsi-mnt]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
94082868 8201004 81025648 10% /
/dev/sda5 102454 11983 85181 13% /boot
tmpfs 2068836 0 2068836 0% /dev/shm
/dev/sda3 281022460 201541340 79481120 72% /mnt/ntfs-280g
iscsi-clt 2047940 23 2047918 1% /iscsi-mnt
If you ever need to unmount it, run:
[root@localhost iscsi-mnt]# zfs umount iscsi-clt
[root@localhost zfs]# cd /iscsi-mnt/
[root@localhost iscsi-mnt]# ls -l
total 0
[root@localhost iscsi-mnt]# touch 1234
[root@localhost iscsi-mnt]# ls -l
-rw-r--r-- 1 root root 0 Jun 18 16:49 1234
[root@localhost iscsi-mnt]# vi 1234
[root@localhost iscsi-mnt]# cat 1234
fdklasjflasjdfkajsdfl;ajsdf
alsdfja;lsdkjf;aklsdf
askdf;jaskdf;jaskldf
The following lines can be added to /etc/rc.local:
iscsiadm --mode node --targetname iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c --portal 172.16.43.144 --login
zpool create iscsi-clt /dev/sdb
# The next line is just for safety; once set, the mountpoint persists, so normally only zfs mount is needed on later boots
zfs set mountpoint=/iscsi-mnt iscsi-clt
zfs mount iscsi-clt
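Note that zpool create only makes sense the very first time; on later boots the pool already exists on the LUN and re-creating it would fail (or, with -f, destroy it). A hedged alternative for /etc/rc.local, assuming the pool iscsi-clt was already created once, is to import it instead:
iscsiadm --mode node --targetname iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c --portal 172.16.43.144 --login
# import the pool that already lives on the LUN instead of re-creating it
zpool import iscsi-clt
zfs mount iscsi-clt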
For a one-writer / many-readers setup, make the readers read-only (set on the iSCSI clients):
zfs set readonly=on [pool/zfs]
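For example, with the client pool created earlier (assuming the pool name iscsi-clt from above):
zfs set readonly=on iscsi-clt
zfs get readonly iscsi-clt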
After a reboot, zpool status shows the state of iscsi-clt; if it is UNAVAIL, the iSCSI client connection probably has a problem.
Log in to the target again and re-check zpool status; it should return to normal.
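A minimal recovery sketch, reusing the same target IQN and pool name as above:
iscsiadm --mode node --targetname iqn.1986-03.com.sun:02:cf6f7152-7468-4f02-e214-ff7d6784c92c --portal 172.16.43.144 --login
zpool status iscsi-clt
zfs mount iscsi-clt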
Packages needed to use ZFS on Linux (zfs-fuse):
-rwxrwxrwx 1 root root 84548 May 6 04:38 fuse-2.7.4-8_12.el5.i386.rpm
-rwxrwxrwx 1 root root 27287 May 6 04:38 fuse-devel-2.7.4-8_12.el5.i386.rpm
-rwxrwxrwx 1 root root 28498 May 6 04:35 fuse-kmdl-2.6.18-164.el5-2.7.4-8_12.el5.i686.rpm
-rwxrwxrwx 1 root root 28560 May 6 04:36 fuse-kmdl-2.6.18-164.el5PAE-2.7.4-8_12.el5.i686.rpm
-rwxrwxrwx 1 root root 28545 May 6 04:36 fuse-kmdl-2.6.18-164.el5xen-2.7.4-8_12.el5.i686.rpm
-rwxrwxrwx 1 root root 28531 May 6 04:36 fuse-kmdl-2.6.18-194.el5-2.7.4-8_12.el5.i686.rpm
-rwxrwxrwx 1 root root 28562 May 6 04:36 fuse-kmdl-2.6.18-194.el5PAE-2.7.4-8_12.el5.i686.rpm
-rwxrwxrwx 1 root root 28570 May 6 04:36 fuse-kmdl-2.6.18-194.el5xen-2.7.4-8_12.el5.i686.rpm
-rwxrwxrwx 1 root root 72616 May 6 04:37 fuse-libs-2.7.4-8_12.el5.i386.rpm
-rwxrwxrwx 1 root root 1502751 Jun 18 16:27 zfs-fuse-0.6.0-6.el5.i386.rpm
If mounting ZFS gives the following error:
[root@HTS171 ~]# zfs mount iscsi-clt
cannot mount 'iscsi-clt': Input/output error.
Make sure the FUSE module is loaded.
it means the fuse-kmdl-xxx package is not installed (zfs-fuse needs fuse.ko).
Make sure the fuse-kmdl-xxx package matches your kernel version (check with uname -a).
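A quick way to pick the matching kmdl package, assuming the running kernel is one of the versions in the package list above (e.g. 2.6.18-164.el5):
uname -r
rpm -ivh fuse-2.7.4-8_12.el5.i386.rpm fuse-libs-2.7.4-8_12.el5.i386.rpm fuse-kmdl-2.6.18-164.el5-2.7.4-8_12.el5.i686.rpm
The kernel release string printed by uname -r must match the string embedded in the fuse-kmdl package name.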
Solaris FileSystem Choice
[cluster filesystem]
GFS
Lustre
UFS (global mount option)
[non-cluster filesystem]
ZFS
NTFS
EXT3