
Multi-terabyte filesystems on Solaris

At our shop, UFS is still the standard filesystem on Solaris. I was puzzled when a colleague asked if he could use ZFS on his Solaris 10 server: he needed a multi-terabyte filesystem, and UFS only supports this in an awkward way. Now, personally, I don't favor filesystems this big, because the backups alone need special care, but sometimes an application forces you into weird things. Still, I had a hard time believing that Solaris 10 couldn't handle it. It turns out my colleague is right:


In Solaris 10, you need to add the -T option to newfs when formatting a filesystem larger than 1 TB:

newfs -T /dev/rdsk/c0t1d0s1

The -T option also forces a fixed inode density: nbpi is reset to 1048576, meaning one inode per 1 MB of filesystem space. In other words, a 1 TB UFS filesystem will only hold roughly 1 million files, which is *low*. The reason for this limit is that a higher inode density on multi-terabyte filesystems could push fsck times up to days, so a corrupt filesystem effectively becomes a reason to invoke full disaster recovery.
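
If you want to check what you actually ended up with, Solaris can report the parameters used to build the filesystem and the resulting inode count. A quick sketch; the device and mount point below are just placeholders for your own setup:

# show the mkfs parameters (including nbpi) used to create the filesystem
mkfs -F ufs -m /dev/rdsk/c0t1d0s1

# after mounting, report used and free inodes on the UFS filesystem
df -F ufs -o i /export/bigfs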


AFAIK, the only way to change the inode settings on a UFS filesystem this big is to take the mkfs source code from OpenSolaris, change the nbpi limit and compile it yourself.
Or switch to ZFS or the Veritas filesystem.
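
For comparison, creating a multi-terabyte filesystem with ZFS sidesteps the nbpi issue entirely, since ZFS allocates its inodes dynamically. A minimal sketch, assuming a pool name and disk of my own choosing:

# create a pool on the disk and a filesystem in it; no inode density to tune
zpool create datapool c0t1d0
zfs create datapool/bigfs
zfs list datapool/bigfs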