ZFS
In mid-November, Sun released Nevada build 27, which contains the source code of the much-anticipated ZFS file system. ZFS is, in my opinion, a radically new and revolutionary filesystem: it completely eliminates the concept of volumes and the associated problems of partitions. All operations are copy-on-write transactions, so the on-disk state is always valid, and there is never a need to fsck a ZFS filesystem. Every block is checksummed to prevent silent data corruption, and data is self-healing in replicated (mirrored or RAID-Z) configurations, which is kinda neat.
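If you want to see that checksumming at work once you have a pool, a scrub walks every block, verifies it against its checksum, and repairs anything broken from a replica where possible. Roughly like this (mypool stands for whatever pool you've created; output omitted):

root@harad ~$ # walk every block in the pool and verify its checksum
root@harad ~$ zpool scrub mypool
root@harad ~$ # shows scrub progress and any checksum errors found
root@harad ~$ zpool status mypool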
So I decided to test drive the new x86 build. Unfortunately, the Solaris installer is not for the faint of heart: it barely handles upgrades and has no ZFS support, which is really sad, so your filesystems are created as UFS. I believe it is still impossible to put your root partition on ZFS, too, so I guess we're still stuck with UFS there.
I installed the build in a VMware container, which makes the installer friggin' slow (it took over 7 hours to install), and I had to scrape together all available memory on my 256 MB notebook: if I gave the VMware guest too much memory, it got terminated by the Linux OOM killer. Giving it 200 MB went fine, but then you're stuck with the text-based console installer. In short: use a machine with at least 512 MB if you're planning to install this in VMware. I'll be downloading the SPARC build in the near future to see how it behaves when installed onto my Enterprise 3000 server.
Luckily, you don't need a JBOD or a million-dollar RAID5 storage system to play around with ZFS: ZFS has the ability to use files as virtual devices! Instead of using a real disk, you can create files of 128 MB or larger and use them just like a disk. This allows for debugging, testing, and experimentation with complex pool setups without requiring immense resources. Obviously this is gonna be slow; you've got ZFS on top of UFS, so don't expect it to be speedy. But the point here isn't performance, it's about being able to experiment, play, and learn with ZFS configurations that would otherwise be impractical if not impossible. As an example:
root@harad ~$ mkdir /vdev
root@harad ~$ mkfile 128m /vdev/disk1
root@harad ~$ mkfile 128m /vdev/disk2
root@harad ~$ mkfile 128m /vdev/disk3
root@harad ~$ zpool status
no pools available
root@harad ~$ zpool create oasis raidz /vdev/disk1 /vdev/disk2 /vdev/disk3
root@harad ~$ zpool status
  pool: oasis
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        oasis            ONLINE       0     0     0
          raidz          ONLINE       0     0     0
            /vdev/disk1  ONLINE       0     0     0
            /vdev/disk2  ONLINE       0     0     0
            /vdev/disk3  ONLINE       0     0     0
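From there you can treat the pool like a real one: create filesystems in it, fill them up, and throw the whole thing away when you're done playing. Roughly like this (the test dataset name is just an example; output omitted):

root@harad ~$ # create a filesystem in the pool; it's mounted automatically under /oasis/test
root@harad ~$ zfs create oasis/test
root@harad ~$ # list the datasets and their space usage
root@harad ~$ zfs list
root@harad ~$ # destroy the pool once you're done experimenting
root@harad ~$ zpool destroy oasis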
Unfortunately, I had to discover the hard way that these builds really are *test* builds: I bumped into a bug which refuses to boot the kernel:
Reading beyond end of ramdisk
start=0x2000 size=0x2000
failed to read superblock
panic : can't mount boot archive