Solaris iSCSI SAN



Our focus is on constructing an iSCSI SAN that can fulfill both virtualization and storage requirements. At a previous job, I used OpenSolaris (back when it was still Sun) to build an iSCSI SAN, and used snapshot send/recv to ship data to a disaster recovery site. That setup ran efficiently and needed very little maintenance.
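For reference, the kind of snapshot shipping described above can be sketched roughly as follows. The pool/dataset names and DR hostname are made up, and a real setup would also need key-based ssh auth and error handling:

```shell
# Sketch of ZFS snapshot replication to a DR site.
# Pool/dataset names (tank/iscsi) and the host "dr-site" are hypothetical.

# Take a timestamped snapshot of the dataset backing the iSCSI LUNs.
SNAP="tank/iscsi@$(date +%Y%m%d-%H%M)"
zfs snapshot "$SNAP"

# Full send for the first replication run...
zfs send "$SNAP" | ssh dr-site zfs recv -F tank/iscsi

# ...then incremental sends between the last two snapshots thereafter:
# zfs send -i tank/iscsi@prev "$SNAP" | ssh dr-site zfs recv tank/iscsi
```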

As the company grows, we have hit our capacity limit and need to decide between buying a SAN or building one ourselves. We currently need three separate nodes: one for our internal site and two for our DR locations.

After looking at the NetApp, EqualLogic, and HP MSA lines, we have narrowed the field to two options; the cost of the commercial arrays does not seem reasonable for the features they offer.

The first option is to buy Oracle hardware (such as a 7410) for each location and use the bundled replication software (possibly AVS) to copy data between sites. The hardware offers 12TB raw, but after laying out raidz across the volumes we might end up with only 4TB usable. That could put us right back where we started on disk space, even with the added redundancy. We could add more shelves to grow capacity, but that would lock us into Oracle's hardware.
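The raw-vs-usable gap depends heavily on vdev layout. As a hedged back-of-the-envelope (assuming 12 x 1TB disks for round numbers; the real 7410 figures will differ):

```shell
# Rough usable-capacity arithmetic for 12 x 1TB disks under common layouts.
disks=12; size_tb=1

# One wide raidz2 vdev: 12 disks, 2 lost to parity.
raidz2_wide=$(( (disks - 2) * size_tb ))    # 10 TB

# Two 6-disk raidz2 vdevs: 2 parity disks per vdev.
raidz2_split=$(( 2 * (6 - 2) * size_tb ))   # 8 TB

# Six 2-way mirrors: half the raw capacity.
mirrors=$(( disks / 2 * size_tb ))          # 6 TB

echo "$raidz2_wide $raidz2_split $mirrors"
```

Getting down to roughly 4TB usable out of 12TB raw, as estimated above, implies something beyond single-vdev parity: triple redundancy, hot spares, or generous reserves for snapshots.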

The second option is Dell hardware: R710s with Dell MD1200 arrays at each location. Solaris would have to be licensed through Oracle at $1,000 per processor, roughly $3k in total. The concern is replication: our experience with Sun AVS was good back when it was open source, and snapshot send/recv is the other option, though it was inefficient for this use. The availability and cost of AVS remain uncertain; it can be downloaded from Oracle's website, but it is apparently charged per TB transmitted. Does anyone know what this product costs?

Can anyone give this plan a sanity check? I would value hearing from anyone in the community who has gone this route before, especially about pitfalls when implementing at a larger scale.

Despite extensive online research, I have been unable to find any information on the replication side. Any help is appreciated.

Solution 1:

It seems that we frequently address similar inquiries on Server Fault.

NexentaStor’s commercial offering lets you do all of this via either asynchronous replication (ZFS send/receive over ssh/netcat, or rsync) or synchronous replication (using another commercial plugin).
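The asynchronous path is plain ZFS send/receive. An incremental transfer over netcat (often faster than ssh on a trusted private link, since there is no encryption overhead) might look like this, with all host and dataset names invented:

```shell
# Incremental ZFS replication over netcat (hypothetical hosts/datasets).
# On the receiving (DR) side, listen first:
#   nc -l 9999 | zfs recv tank/vols

# On the sending side, ship only the delta between two snapshots:
zfs send -i tank/vols@monday tank/vols@tuesday | nc dr-site 9999
```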

Vendor-packaged and certified custom hardware is available in various configurations; one example is the Storage Director series from Pogo Linux.

I used to run OpenSolaris on my own Sun x4540, which now runs Nexenta Enterprise. The Sun hardware became unreliable after only two years, though, so I find more value in building my own storage nodes. I currently build single storage nodes on HP hardware, with the option to expand to external storage using LSI enclosures; others have documented SuperMicro-based builds as well. In the HP systems I replace the Smart Array controllers with LSI 6G SAS controllers: a 9211-8i for internal disks and a 9200 for external. You would need to do the equivalent on the Dell side.

Solution 2:

Among the proprietary options, the Oracle ZFS appliances impose the least lock-in: if you need to, you can drop to the Solaris 5.11 command line and run send/recv yourself. I am not sure about the 12TB raw figure, but a 7410/7420 has a head node (two, if clustered) and takes 24-disk shelves; ours currently run 7 shelves, and I have heard they can scale to 12-14. It does not use AVS, but that should not be a concern: iSCSI support is excellent, and scheduled and/or continuous remote replication is built in.
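For comparison, on plain Solaris/OpenSolaris, exposing a ZFS volume over iSCSI with COMSTAR takes only a few commands. This is a sketch with invented pool/volume names, not the appliance workflow:

```shell
# Sketch: export a ZFS zvol as an iSCSI LUN via COMSTAR.
# Pool and volume names (tank/vm-lun0) are made up.

svcadm enable stmf iscsi/target          # enable the COMSTAR services

zfs create -V 100G tank/vm-lun0          # block volume to export
stmfadm create-lu /dev/zvol/rdsk/tank/vm-lun0
stmfadm add-view 600144F0...             # LU GUID printed by create-lu

itadm create-target                      # create an iSCSI target
```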

Consider Oracle Sun Cluster Geographic Edition, since AVS is now defunct, though it may not offer everything AVS did. Oracle's primary business is selling storage hardware, so they have little interest in letting people build their own equivalents of the 7000-series appliances out of Solaris. It is unlikely they will repeat Sun's mistake of undermining their own hardware business.

We have a few 7410s with remote replication backing our Exchange, VMware, Xen, and Linux servers over NFS and iSCSI, and they have proven dependable. Be careful with OpenSolaris code from before the COMSTAR enhancements, though: in our testing, IOPS were dramatically low.
