Manage zpool and zfs in Solaris 10
What is ZFS:
ZFS is a new kind of file system that provides simple administration, transactional semantics, end-to-end data integrity, and immense scalability. ZFS is not an incremental improvement to existing technology; it is a fundamentally new approach to data management.
ZFS is managed by two commands, zpool and zfs:
zpool – manages ZFS storage pools and the devices within them
zfs – manages ZFS filesystems
In this example I will use four files, named disk1 through disk4, as virtual devices.
1. Create four 128 MB files.
# mkfile 128m /export/home/disk1
# mkfile 128m /export/home/disk2
# mkfile 128m /export/home/disk3
# mkfile 128m /export/home/disk4
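mkfile is Solaris-specific. If you are following along on a system without it, dd can create an equivalent backing file; a minimal sketch (the /tmp path is just an illustration):

```shell
# Create a 128 MB file of zeros; roughly equivalent to "mkfile 128m".
# The /tmp/disk1 path is illustrative; any writable location works.
dd if=/dev/zero of=/tmp/disk1 bs=1024k count=128 2>/dev/null

# Verify the size: 128 * 1024 * 1024 = 134217728 bytes.
wc -c < /tmp/disk1
```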
Pools:
Before creating a new pool, check for existing pools:
# zpool list
no pools available
Create a pool:
# zpool create data /export/home/disk1
# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
data   123M    75K   123M   0%  ONLINE  -
# df -ah
Note that no volume management, newfs, or mount step is required with ZFS. You now have a working pool with a mounted ZFS filesystem.
A pool with a single disk offers no redundancy, so we will mirror it.
# zpool destroy data
# zpool list
no pools available
Mirror the pool:
# zpool create data mirror /export/home/disk1 /export/home/disk2
# zpool list
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
data   123M   76.5K   123M   0%  ONLINE  -
Check the status of the pool:
# zpool status data
pool: data
state: ONLINE
scrub: none requested
config:
NAME                    STATE   READ WRITE CKSUM
data                    ONLINE     0     0     0
  mirror                ONLINE     0     0     0
    /export/home/disk1  ONLINE     0     0     0
    /export/home/disk2  ONLINE     0     0     0
errors: No known data errors
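In scripts, the pool state can be pulled out of this output with awk. A minimal sketch, using the sample output above as stand-in data (on a live system you would capture the real output with status=$(zpool status data) instead):

```shell
# Stand-in for: status=$(zpool status data)
status=' pool: data
state: ONLINE
scrub: none requested'

# Extract the second field of the "state:" line; a healthy pool is ONLINE.
state=$(printf '%s\n' "$status" | awk '/state:/ {print $2}')
echo "$state"
```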
Now we can see that the pool contains a mirror of two disks. Next, create a file to see how usage changes.
# mkfile 32m /data/foo
# zpool list
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ONLINE
data   123M  28.7M   94.3M  23%  ONLINE  -
We can detach a device from the mirror:
# zpool detach data /export/home/disk1
# zpool status
pool: data
state: ONLINE
scrub: none requested
config:
NAME                  STATE   READ WRITE CKSUM
data                  ONLINE     0     0     0
  /export/home/disk2  ONLINE     0     0     0
errors: No known data errors
To attach another disk (or the same disk) back to the mirror:
# zpool attach data /export/home/disk2 /export/home/disk1
Adding to a mirrored pool:
You can add disks to a pool without taking it offline:
# zpool list
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
data   123M  32.2M   90.8M  26%  ONLINE  -
# zpool add data mirror /export/home/disk3 /export/home/disk4
# zpool list
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
data   246M  32.2M    214M  13%  ONLINE  -
# zpool status data
pool: data
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 31 13:57:01 2010
config:
NAME                    STATE   READ WRITE CKSUM
data                    ONLINE     0     0     0
  mirror                ONLINE     0     0     0
    /export/home/disk2  ONLINE     0     0     0
    /export/home/disk1  ONLINE     0     0     0
  mirror                ONLINE     0     0     0
    /export/home/disk3  ONLINE     0     0     0
    /export/home/disk4  ONLINE     0     0     0
errors: No known data errors
We can also see how much data is currently written to each device:
# zpool iostat -v data
                          capacity     operations    bandwidth
pool                    used   avail   read  write   read  write
----------------------  -----  -----  -----  -----  -----  -----
data                    32.2M   214M      0      0  26.4K    758
  mirror                32.2M  90.8M      0      1  78.4K  2.13K
    /export/home/disk2      -      -      0      2  54.3K  33.7K
    /export/home/disk1      -      -      0      3    665  86.3K
  mirror                  16K   123M      0      0      0    111
    /export/home/disk3      -      -      0      2    522  9.02K
    /export/home/disk4      -      -      0      2     92  9.02K
----------------------  -----  -----  -----  -----  -----  -----
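The per-pool figures are easy to script against as well. A sketch that extracts the pool's used capacity, fed with the sample data line above (on a live system you would pipe the output of zpool iostat -v data into the awk instead):

```shell
# Stand-in for the pool summary line of: zpool iostat -v data
line='data                    32.2M   214M      0      0  26.4K    758'

# The second whitespace-separated field is the used capacity.
used=$(printf '%s\n' "$line" | awk '{print $2}')
echo "$used"
```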
Now we will create ZFS filesystems within the pool:
# zpool list
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
data   246M  32.2M    214M  13%  ONLINE  -
# zfs list
NAME   USED   AVAIL  REFER  MOUNTPOINT
data  32.2M    182M  32.0M  /data
# zfs create data/data1
# zfs create data/data2
# zfs create data/data3
# zfs list
NAME         USED   AVAIL  REFER  MOUNTPOINT
data        32.2M    182M  32.0M  /data
data/data1    21K    182M    21K  /data/data1
data/data2    21K    182M    21K  /data/data2
data/data3    21K    182M    21K  /data/data3
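Creating several filesystems like this is easy to loop over. A dry-run sketch that only prints the zfs create commands (drop the echo to run them for real, as root on a system where the pool exists):

```shell
# Dry-run: print the zfs create command for each filesystem
# under the pool "data". Remove "echo" to actually execute them.
for fs in data1 data2 data3; do
    echo zfs create "data/$fs"
done
```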
Run df -ah and you will see the ZFS filesystem mount points:
# df -ah
We can remove a ZFS filesystem with the zfs destroy command:
# zfs destroy data/data3
