The Solaris universe

Stoyan Angelov has built up an impressive Solaris documentation site with links to OpenSolaris, ZFS, SMF, you name it; all the Solaris goodies are included.

FireEngine

One of the features of Solaris 10 that is no less important than DTrace, Zones, and SMF is the rewrite and major speed-up of the network stack. Internally the project is called FireEngine, and BigAdmin carries an interesting round-up of the new stack.

BrandZ

OpenSolaris BrandZ is a framework that extends the Solaris Zones infrastructure to create Branded Zones: zones that contain non-native operating environments. Nils Nieuwejaar has a blog post where he installs a Debian zone with BrandZ.
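
For the curious, configuring an lx-branded zone went roughly like this at the time. This is a sketch from memory, not Nils' exact procedure: the zone name, zonepath, and image path are made up, and the SUNWlx template name may vary between builds, so follow his post for the Debian specifics.

# zonecfg -z debian
debian: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:debian> create -t SUNWlx
zonecfg:debian> set zonepath=/zones/debian
zonecfg:debian> commit
zonecfg:debian> exit
# zoneadm -z debian install -d /tmp/debian_image.tar
# zoneadm -z debian boot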

Patch Check Advanced

Sun has offered various tools in the past to analyze Sun/Solaris systems for installed or missing patches, e.g. PatchDiag, PatchCheck, PatchPro, and smpatch (see the Sun Patch Portal for details). Some of them are not actively maintained, some are huge and opaque, and some don't run on older Solaris releases or stripped-down machines.

Patch Check Advanced (PCA) is a Perl script that generates lists of installed and missing patches for Sun/Solaris systems and optionally downloads and installs them.
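
A typical session looks something like the following; this is from memory of PCA's documentation, so double-check the options against the version you actually download:

# pca missing
# pca -d missing
# pca -i missing

The first command lists the patches your system is missing, the second downloads them from Sun, and the third downloads and installs them in one go.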

Uptime

# w
2:51pm up 852 day(s), 18:35, 2 users, load average: 0.77, 0.61, 0.33

# init 0

I guess this is a record. I was afraid the server would reboot with fsck errors, but it came back online without a glitch. This is a Netra server running Solaris 7. My heart bleeds every time I have to do this, but patching is a necessary evil...

ZFS

In mid-November Sun released Nevada build 27, which contains the source code of the long-anticipated ZFS file system. ZFS is IMO a radically new and revolutionary filesystem: it completely eliminates the concept of volumes and the associated problems of partitions. All operations are copy-on-write transactions, so the on-disk state is always valid, and there is never a need to fsck a ZFS filesystem. Every block is checksummed to prevent silent data corruption, and the data is self-healing in replicated (mirrored or RAID) configurations, which is kinda neat.

So I decided to test drive the new x86 build. Unfortunately, the Solaris installer is not for the faint of heart: it barely handles upgrades and contains no ZFS support, which is really sad, so your filesystems are created as UFS. I believe it is still impossible to put your root partition on ZFS, too, so I guess we're stuck with UFS for now.

I installed the build in a VMware container, which makes the installer friggin' slow (it took over 7 hours to install), and I had to scrape together every last bit of my notebook's 256 MB of RAM: if I gave the VMware guest too much memory, it got terminated by the Linux OOM killer. Giving it 200 MB went fine, but then you're stuck with the text-based console installer. In short: use a machine with a minimum of 512 MB if you're planning to install this in VMware. I'll be downloading the SPARC build in the near future to see how it behaves when installed onto my Enterprise 3000 server.
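
On that note: if you drive VMware by editing the guest's .vmx file directly, the amount of guest RAM is, as far as I know, controlled by the memsize parameter, in megabytes:

memsize = "512"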

Luckily, you don't need a JBOD or a million-dollar RAID5 storage system to play around with ZFS: ZFS has the ability to use files as virtual devices! Instead of using real disks, you can create files of 128 MB or larger and use them just like disks. This allows for debugging, testing, and experimentation with complex pool setups without requiring immense resources. Obviously this is gonna be slow: you've got ZFS on top of UFS, so don't expect it to be speedy. But the point here isn't performance, it's about being able to experiment, play, and learn with ZFS configurations that would otherwise be impractical if not impossible. As an example:

root@harad ~$ mkdir /vdev
root@harad ~$ mkfile 128m /vdev/disk1
root@harad ~$ mkfile 128m /vdev/disk2
root@harad ~$ mkfile 128m /vdev/disk3

root@harad ~$ zpool status
no pools available
root@harad ~$ zpool create oasis raidz /vdev/disk1 /vdev/disk2 /vdev/disk3
root@harad ~$ zpool status
  pool: oasis
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        oasis            ONLINE       0     0     0
          raidz          ONLINE       0     0     0
            /vdev/disk1  ONLINE       0     0     0
            /vdev/disk2  ONLINE       0     0     0
            /vdev/disk3  ONLINE       0     0     0

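A nice follow-up experiment is verifying the self-healing claim: put some data in the pool, deliberately trash part of one backing file, then scrub and watch the CKSUM column in zpool status. A sketch, assuming the raidz pool built above (the dataset name oasis/data is my own invention, ZFS mounts it under /oasis/data by default, and the dd offset is arbitrary as long as it stays clear of the vdev labels at the start of the file):

root@harad ~$ zfs create oasis/data
root@harad ~$ cp -r /etc /oasis/data/
root@harad ~$ dd if=/dev/urandom of=/vdev/disk1 bs=1024 seek=1024 count=2048 conv=notrunc
root@harad ~$ zpool scrub oasis
root@harad ~$ zpool status oasis

If the scrub reports no errors, the damaged region simply held no data yet; copy more files into the pool and try again.
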
That these builds are *test* builds is something I unfortunately had to discover for myself: I bumped into this bug, which refuses to boot the kernel:

Reading beyond end of ramdisk
start=0x2000 size=0x2000
failed to read superblock
panic : can't mount boot archive
