Configuring OpenSolaris 2008.05

Posted on November 12, 2008. Filed under: OpenSolaris

This is the first post on this blog, and it’s not about Groovy, Grails, or Java. This one is about installing OpenSolaris 2008.05 and some of my experiences while doing so.

First a small disclaimer: my current OpenSolaris knowledge is limited; I’ve only been playing with it for around 20 hours or so.

My goal is to build a media server, which should serve my Squeezebox. For that, I’ve bought some hardware, including:

  • 4 x 750 GB Samsung HDDs
  • ASUS stuff
  • 2 GB RAM
  • etc.

This should result in a network-attached storage box with about 2 TB of usable disk space, accomplished by using RAID-Z on a ZFS filesystem, but more about that later. I thought about using Ubuntu + LVM + RAID 1, but I heard that setup is more prone to data corruption, since it doesn’t do checksums like ZFS does, so I skipped that idea and went straight to OpenSolaris. Well, almost straight anyway: I first tried Solaris 10, but since that one didn’t boot (“Image doesn’t fit memory” error or something) I decided to go for OpenSolaris. I’d never worked with it before, so I figured it would be a nice learning experience.
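As a quick sanity check on that 2 TB figure (assuming all four 750 GB disks go into a single raidz vdev): raidz sacrifices roughly one disk’s worth of space for parity, so usable capacity works out to about 3 x 750 GB = 2250 GB, which after filesystem overhead and the GB-versus-GiB difference lands in the neighbourhood of 2 TB.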

ZFS
What I first wanted to do was format all the disks, partition them, mount them, and put them into some kind of RAID configuration. Well, it turned out I was quite wrong here.
1) Partitioning is apparently not something you normally do in Solaris
2) Formatting IDE disks is something from the past?
3) Mounting them… well, I don’t think so!
As it turns out, ZFS takes care of all of that itself once you hand it whole disks.

What I had to do was type format, which resulted in this dialog:

-bash-3.2# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c4d0
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
       1. c4d1
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
       2. c5d0
          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
       3. c5d1
          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
       4. c6d0
          /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0

and pressed CTRL+C after that. I only needed format for a listing of all the available disks; each disk name is what follows the number and the dot. In my case c4d0, c4d1, c5d0, c5d1 and c6d0 are my disks, where the last one is my boot disk.
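A small aside, in case you don’t like the interactive prompt: feeding format an empty stdin should make it print the disk list and exit on its own, so the CTRL+C isn’t needed:

format < /dev/null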

To turn the four data disks into a RAID-Z ‘cluster’, I only had to type this:

zpool create tank raidz c4d0 c4d1 c5d0 c5d1

(Thanks rskelton!!)
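Before putting anything on it, it doesn’t hurt to check that the pool actually came up as one raidz vdev over the four disks. zpool status shows the layout and zpool list shows the size:

zpool status tank
zpool list tank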

After that was done, I needed some quotas. So I created some filesystems:

zfs create tank/media
zfs create tank/applications

And the filesystems were created, easy as that! Next, a quota could be set, since I don’t want my media to crowd out my running applications. This can be done with the following command:

zfs set quota=1.95T tank/media

and can easily be checked with the following command:

-bash-3.2# zfs get quota tank/media
NAME        PROPERTY  VALUE  SOURCE
tank/media  quota     1.95T  local
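One thing that surprised me, coming from the mount-everything-by-hand world: ZFS mounts these filesystems automatically under /tank, so there is no vfstab editing involved. zfs list shows the mountpoints together with the usage:

zfs list -r tank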

The next thing I have to do is install some applications on it, but first I want to disable the GDM/X graphical login screen. This can be done like this:

svcadm disable gdm
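To check whether it worked (and to turn the graphical login back on later), svcs and svcadm can be used like this; svcs should report the service as disabled:

svcs gdm
svcadm enable gdm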