On Wed, Nov 19, 2008 at 02:37:05PM -0500, John Almberg wrote:
>>> This machine has an Intel motherboard and a hardware raid controller.
>>> From what I can tell, there is some Intel software installed on the
>>> machine that makes hardware faults visible to snmp.
>> That would require Net-SNMP to be linked to that software (or library)
>> directly. Two things can't just "magically talk" to one another. :-)
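For what it's worth, one common way to bridge that gap without linking anything is Net-SNMP's "extend" mechanism, which runs an external script and publishes its output over SNMP. A minimal sketch (the extend directive is standard Net-SNMP; the script name and path are hypothetical):

```
# snmpd.conf fragment (sketch): check_raid.sh is a hypothetical script
# that prints the array status; its output appears under
# NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.
extend raid-status /usr/local/sbin/check_raid.sh
```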
> As I said, I really have no idea.
> Now that I'm reading more deeply in the notes... the monitoring was
> supposed to be with IPMI. No idea what that is, either, but I thought
> I'd toss it into the mix.
Ah, IPMI... it's another one of those technologies which is a great
idea, but often horribly implemented. The most common use is for remote
management (serial-over-IP, or even KVM-over-IP), access to hardware
sensors (fans, temps, voltages), and for some other monitoring-related
things. It's very useful -- when it works. :-)
On Intel boards (native Intel IPMI) it might be great. There have been
a lot of problem reports with Supermicro's IPMI, though, and most of
them are IPMI card-related.
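If you want to poke at IPMI from the OS side, ipmitool is the usual
tool. A rough sketch (real tool and subcommands, but only useful on
hardware with a BMC; assumes the kernel's IPMI driver -- ipmi(4) on
FreeBSD -- is loaded, and guarded so it exits cleanly elsewhere):

```shell
#!/bin/sh
# Sketch: query IPMI sensors and the event log with ipmitool.
if command -v ipmitool >/dev/null 2>&1; then
    ipmitool sensor list   # fans, temps, voltages in one table
    ipmitool sel list      # System Event Log: logged hardware faults
    status="queried"
else
    status="no-ipmitool"   # tool not installed on this host
fi
echo "ipmi check: $status"
```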
>> I just hope the card is an actual RAID card and not BIOS-level RAID
>> like Intel MatrixRAID. If it is MatrixRAID, I highly recommend you
>> back the entire machine up and reinstall without MatrixRAID; otherwise,
>> when you lose a disk or need to rebuild your array, you'll find your
>> array broken/gone, be completely unable to rebuild it, or get kernel
>> panics. Note that all of this stuff works just fine on Linux; the
>> issues listed are with FreeBSD.
>> Generally speaking, we (the open-source world) have gotten to the point
>> with OS-based software RAID (e.g. Linux LVM, FreeBSD ccd/gvinum/ZFS,
>> OpenSolaris ZFS) where it offers significant advantages over hardware
>> RAID. There are good reasons to use hardware RAID, but in those
>> scenarios admins should be looking at buying an actual filer, e.g.
>> Network Appliance. Otherwise, for "simple" systems (even stuff like
>> 2U or 3U boxes with many disks, e.g. a "low-cost filer"), stick with
>> some form of OS-based software RAID if possible.
> That's good to know. I was told just the opposite by the guy selling the
> $650 RAID cards. Who'd have thunk?
Well, hardware RAID has a specific purpose. I like them for the fact
that they add a layer of abstraction in front of the OS; that is to say,
some of them are bootable even with RAID-5. FreeBSD's bootloader has a
lot of difficulty booting off of different things, so adding a layer of
abstraction in front is useful.
For example, consider that you can't get kernel panic dumps (to disk)
using gmirror without a bunch of rigmarole. I forget which GEOM method
it is, but there's one you can't boot off of easily -- gvinum? geli? I
can't remember. There are one or two that the bootstraps don't work
with. Hardware RAID can help solve that.
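The usual workaround for the dump problem, as I understand it, is to
point dumpon at one of the raw component devices rather than at the
mirror itself, since the dump code writes below GEOM. A sketch (device
names are examples; adjust for your disks):

```
# /etc/rc.conf fragment (sketch): crash dumps go to a raw swap slice,
# not /dev/mirror/gm0.
dumpdev="/dev/ad4s1b"    # example component device; adjust to taste
dumpdir="/var/crash"
```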
> The card in the box is a
> Intel 18E PCI-Express x8 SAS/SATA2 Hardware ROMB RAID with 128MB Memory
> Module and 72 Hour Battery Backup Cache
> $625 as shown on the packing list, so I hope it's a good one.
Ah, I think it's hardware RAID, and PCIe to boot. Yes, I would
recommend keeping that! What does it show up as under FreeBSD? I'm
curious what driver it uses, and what your disks show up as (daX or
adX).
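If you want to check that yourself, pciconf and camcontrol will show
which driver attached and how the disks appear. A quick sketch
(read-only base-system commands, FreeBSD-specific, guarded so it exits
cleanly on other hosts):

```shell
#!/bin/sh
# Sketch: identify the RAID controller's driver on FreeBSD.
if command -v pciconf >/dev/null 2>&1; then
    pciconf -lv | grep -B3 -i raid    # driver name (e.g. mfi0) appears above the match
    command -v camcontrol >/dev/null 2>&1 && camcontrol devlist
    result="freebsd"
else
    result="not-freebsd"              # pciconf only exists on FreeBSD
fi
echo "probe: $result"
```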
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, USA |
| Making life hard for others since 1977. PGP: 4BD6C0CB |