No matter how you go about it, getting these drives set up to be reliable isn’t going to be cheap. If you want to run without an enclosure, then at a minimum (and assuming you are running Linux) you are going to want something like LSI SAS cards with external ports, preferably a four-port card (around $50–$100; each port can run four drives) that you can flash into IT mode. You will need matching splitter cables (three at roughly $25 each). And most importantly you need a VERY solid power supply, preferably something with redundancy (probably $100 or more). These prices assume used hardware from eBay, except for the cables, and you’ll have to do considerable research to learn which SAS cards can be flashed and how to flash them.
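The "research" step usually boils down to identifying the card's chipset and running the vendor flashing tool. A rough sketch, assuming a 6Gb/s SAS2008-family LSI card and LSI's `sas2flash` utility (the firmware file names below are examples only; use the ones released for your exact card):

```shell
# Identify the HBA. LSI/Broadcom SAS2008-family cards (e.g. the 9211-8i,
# or externally-ported boards like the 9201-16e) are the usual targets.
lspci | grep -i 'sas' || true

# sas2flash is LSI's flashing tool for the SAS2 generation
# (SAS3 cards use sas3flash instead). List controllers and firmware;
# "IT" in the firmware product name means initiator-target mode.
command -v sas2flash >/dev/null && sas2flash -listall || echo "sas2flash not installed"

# Typical reflash sequence -- DESTRUCTIVE, it erases the card's firmware.
# File names are placeholders; get the IT firmware for your specific card.
#   sas2flash -o -e 6                          # erase existing flash
#   sas2flash -o -f 2118it.bin -b mptsas2.rom  # write IT firmware + boot ROM
```

IT (initiator-target) firmware makes the card pass every disk straight through to the OS instead of hiding them behind a hardware RAID layer, which is what you want when the OS is managing the disks itself.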
Of course this is very bare-bones: you won’t have a case to mount the drives in, and splitter cables off the power supply can be finicky, but with time and experience it can be made to work very well. My current NAS can handle up to 32 external and 8 internal drives, and I’m using 3D-printed drive cages with some cheap SATA II backplanes to finally get a rock-solid setup. It takes a lot of work and experience to do things cheaply.
Why the immediate jump to IT mode? Sure, ZFS is great, but running ZFS takes a decent chunk of RAM for its cache.
What do you consider a fair amount? My current server has 64GB of RAM, but arc_summary says ZFS is only using 6.35GB on a system with three ZFS pools totaling over 105TB of storage under pretty much constant use.
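For reference, those arc_summary numbers come straight from the kstats OpenZFS exposes at `/proc/spl/kstat/zfs/arcstats` (`size` is the current ARC footprint, `c_max` the ceiling). A minimal sketch of reading them yourself; the `arc_usage` helper and the sample kstat text are mine, made up for illustration:

```python
def arc_usage(arcstats_text):
    """Return (arc_size_bytes, arc_max_bytes) parsed from arcstats text."""
    stats = {}
    for line in arcstats_text.splitlines():
        parts = line.split()
        # Data lines look like: "size  4  6817587200" (name, type, value)
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    return stats["size"], stats["c_max"]

# Fabricated sample; on a real system, read /proc/spl/kstat/zfs/arcstats.
sample = """\
name                            type data
size                            4    6817587200
c_max                           4    33554432000
"""
size, c_max = arc_usage(sample)
print(f"ARC: {size / 2**30:.2f} GiB used of {c_max / 2**30:.2f} GiB max")
```

If the ARC is grabbing more than you'd like, it can be capped with the `zfs_arc_max` module parameter (e.g. `options zfs zfs_arc_max=8589934592` in `/etc/modprobe.d/zfs.conf` for an 8 GiB ceiling); by default it just shrinks under memory pressure.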