
Thoughts on hardware for Cloud

Posted by ewitte, 09-17-2010, 09:32 AM
Still changing specs and have at least 6 months, but this is the bulk of what I have in my head now. Just from a hardware standpoint, not the logistics.

OnApp
Chassis: SBE-714D-R42 - Supermicro SuperBlade with 4 power supplies
Hypervisor servers (undecided quantity): SBI-7126T-T1E, 1 E5620 each (can upgrade to two each) / 16-24GB RAM each to begin with
Datastore 1: Supermicro 216 - 12 * SSD RAID6 ADG (hopefully eMLC)
Datastore 2: Supermicro 216 - 12 * SSD RAID6 ADG (hopefully eMLC)
High availability between DS1 and DS2 - will add more space as funds become available
Controllers on datastores: LSI 9261-8i with the battery
Mellanox 40Gbps connection to hypervisors (picked that specific blade because it comes with 40Gbps)
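A quick sketch of the usable-capacity math behind that datastore layout; the 200 GB drive size below is a hypothetical for illustration, not a figure from the post.

# Back-of-envelope usable space for a 12-drive RAID6 datastore mirrored to a
# second datastore for HA. Drive size is an assumption, not stated above.
def raid6_usable(drives, drive_gb):
    # RAID6 reserves two drives' worth of parity, so usable space is (n - 2) drives
    return (drives - 2) * drive_gb

DRIVE_GB = 200              # hypothetical eMLC SSD size
DRIVES_PER_DATASTORE = 12

per_datastore = raid6_usable(DRIVES_PER_DATASTORE, DRIVE_GB)
print(f"Usable per datastore: {per_datastore} GB")

# With high availability between DS1 and DS2, the second array mirrors the
# first, so the space you can actually sell stays at one datastore's worth.
print(f"Sellable capacity with DS1/DS2 replication: {per_datastore} GB")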

Posted by HostColor, 09-17-2010, 09:49 AM
We use Supermicro for our Cloud infrastructure and we are quite happy with their hardware. It is stable, and the SuperBlade technology provides redundancy on a hardware level (cluster interconnect, etc.).

Posted by Winky, 09-17-2010, 08:56 PM
SSDs are gonna be blazing fast! You will not be disappointed.

Posted by brentpresley, 09-19-2010, 09:48 PM
Every report and test we have seen with SSDs in a server environment has shown them not to be reliable. Even the best of the breed, the Intel X25-E, simply doesn't have enough read-write cycles to stand up to datacenter use for longer than 3-6 months.

Posted by vselvara, 09-20-2010, 12:16 AM
eMLC SSDs are supposed to be much more reliable. It is very new so we'll know for sure in a year.

Posted by lostmind, 09-20-2010, 12:23 AM
Plenty of us have been using SSDs in heavy write scenarios for much longer than 3-6 months with no problems.

Posted by brentpresley, 09-20-2010, 08:53 AM
Please link me to some reviews of SSDs in this environment. We discussed this with a couple of cloud software development companies, and their testing showed very high failure rates.

Posted by kris1351, 09-20-2010, 12:08 PM
I have read reports of heat being the number one issue with SSDs.

Posted by JasonD10, 09-20-2010, 05:07 PM
Will have these data points soon. We're doing a nice install with a lot of these...

Posted by brentpresley, 09-20-2010, 05:26 PM
They can't possibly be cost-effective, even if they are reliable. For every 16 cores and 16-32GB RAM we sell, we need 1-1.5TB of redundant space (2-3TB total). That would cost us a crap-ton, pardon the expression, with SSDs. Although I could see a few high-dollar customers paying for upgrades like that.
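A rough cost comparison for the 2-3TB of raw space mentioned in that post; the per-GB prices below are assumed, period-typical figures for illustration only, not numbers from the thread.

# Storage cost for ~3 TB of raw space on different media. Per-GB prices are
# illustrative assumptions, not figures quoted in the thread.
RAW_GB_NEEDED = 3000        # ~3 TB total, including redundancy

price_per_gb = {
    "enterprise SATA (RE-class)": 0.15,   # assumed $/GB
    "eMLC SSD": 3.00,                     # assumed $/GB
    "SLC SSD": 10.00,                     # assumed $/GB
}

for media, dollars in price_per_gb.items():
    print(f"{media:28s} ~${RAW_GB_NEEDED * dollars:>9,.0f} for {RAW_GB_NEEDED} GB")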

Posted by JasonD10, 09-20-2010, 05:45 PM
For high-end DBs, they aren't all that bad.

Posted by eming, 09-20-2010, 07:43 PM
We've actually stopped suggesting them for SAN usage, and if anything you should not go with any sort of MLC (MLC: 5,000 write cycles; eMLC: 30,000 write cycles; SLC: 100,000 write cycles; eSLC: 300,000 write cycles) - stick with SLCs. They are good for non-critical RAIDed SAN acceleration though, like MaxIQ etc. We've had great experience with FusionIO though - they are fantastic.
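A sketch of what those write-cycle figures mean for drive lifetime; the drive capacity and daily write volume below are assumptions, and write amplification is ignored, so treat the result as a rough upper bound.

# Lifetime estimate from the write-cycle figures quoted above. Drive size and
# daily write volume are assumptions; write amplification and wear-levelling
# overhead are ignored.
CYCLES = {"MLC": 5_000, "eMLC": 30_000, "SLC": 100_000, "eSLC": 300_000}

DRIVE_GB = 100          # hypothetical drive capacity
DAILY_WRITE_GB = 500    # hypothetical sustained writes hitting one drive

for flash, cycles in CYCLES.items():
    total_write_gb = DRIVE_GB * cycles              # total data the cells can absorb
    years = total_write_gb / DAILY_WRITE_GB / 365
    print(f"{flash:5s} ~{years:5.1f} years at {DAILY_WRITE_GB} GB/day")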

Posted by brentpresley, 09-20-2010, 07:56 PM
They'd better be. At the price they ask, they are worth more per oz than solid gold.

Posted by MikeTrike, 09-20-2010, 08:20 PM
Except gold tends to keep its value longer than technology.

Posted by brentpresley, 09-20-2010, 08:22 PM
Exactly.

Posted by gone-afk, 09-20-2010, 08:28 PM
Your cost per GB with SSDs is simply going to be too high for general VPS pricing. You most likely want to build your storage servers with a decent RAID 10 controller, and a mass of 1TB RE3 drives (smaller or bigger depending on your storage needs).
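For comparison, a quick sketch of cost per usable GB for the RAID 10 mechanical build suggested here versus an SSD RAID 6 array; the drive prices and SSD capacity are assumptions for illustration.

# Cost per usable GB: RAID 10 over 1TB RE3 drives versus RAID 6 over SSDs.
# Drive prices and the SSD capacity are assumed, not quoted in the thread.
def usable_gb(drives, drive_gb, level):
    if level == "raid10":
        return drives // 2 * drive_gb       # half the spindles mirror the other half
    if level == "raid6":
        return (drives - 2) * drive_gb      # two drives' worth of parity
    raise ValueError(level)

builds = [
    ("12 x 1TB RE3, RAID 10", 12, 1000, "raid10", 130),   # ~$130/drive assumed
    ("12 x 200GB SSD, RAID 6", 12, 200, "raid6", 600),    # ~$600/drive assumed
]

for name, n, size, level, price in builds:
    gb = usable_gb(n, size, level)
    print(f"{name:24s} {gb:>6} GB usable  ~${n * price / gb:.2f} per usable GB")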

Posted by ewitte, 09-23-2010, 09:07 PM
Will have to read up on heavy usage on SSDs. Cost is fine except for SLC, which is another reason I was hoping eMLC would change my numbers greatly. Using RAID6 versus 10 helps greatly, and even with the hit it's still MUCH faster than mechanical drives. Certainly the specs show it *should* work much better than normal drives. I'm just worried about supporting a high customer base off slow-IOPS SAS drives. Although with a good controller these results were actually off of 3 15k SAS drives in RAID0 (just testing): http://www.vmcloudhost.net/wp-conten...9/ibtests2.jpg - 4k IOPS was over 10k (because of the controller cache), so maybe 12 of these drives wouldn't be all that bad for a 100-user base. Last edited by ewitte; 09-23-2010 at 09:14 PM.
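A rough IOPS-per-customer estimate for the "12 drives for ~100 users" idea; the per-drive figure, read/write mix, and RAID6 write penalty below are common rules of thumb, not numbers from ewitte's benchmark (which was cache-assisted RAID0).

# IOPS budget for ~12 x 15k SAS drives shared by ~100 customers. All inputs
# here are rule-of-thumb assumptions, not measurements from the thread.
PER_DRIVE_IOPS = 180        # typical random IOPS for one 15k RPM SAS drive
DRIVES = 12
USERS = 100
READ_RATIO = 0.7            # assume a 70/30 read/write mix
RAID6_WRITE_PENALTY = 6     # each random write costs ~6 back-end ops on RAID6

raw_iops = PER_DRIVE_IOPS * DRIVES
effective_iops = raw_iops / (READ_RATIO + (1 - READ_RATIO) * RAID6_WRITE_PENALTY)

print(f"Raw back-end IOPS:        {raw_iops}")
print(f"Effective front-end IOPS: {effective_iops:.0f}")
print(f"Per customer:             {effective_iops / USERS:.1f} IOPS")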

Posted by nix101, 09-23-2010, 09:25 PM
Go for SAS. SSDs aren't cost-effective and are overkill unless you have a client whose needs are that demanding.

Posted by ewitte, 09-23-2010, 09:26 PM
More results off the SAS drives. After I get SRP working I might change my mind about using SSDs. This is still only 3 drives in RAID0 with a caching controller: http://www.vmcloudhost.net/wp-conten...9/ibtests3.jpg

Posted by nix101, 09-24-2010, 04:06 AM
What controller are you using for these?

Posted by eming, 09-24-2010, 04:18 AM
Did you guys ever look at RAM-SAN? D

Posted by ewitte, 09-24-2010, 06:43 AM
RAM SAN is way too expensive. That's just off the 256MB P400 on the DL320s. I may or may not use it in the end. I would prefer two new identical boxes for the cluster.

Posted by NoSupportLinuxHostin, 09-24-2010, 01:32 PM
I have worked with Supermicro SuperBlade solutions with Mellanox InfiniBand hardware installed in them to deploy cloud solutions. I really like the Supermicro SuperBlade solutions overall. My one complaint is that the InfiniBand solutions are very poorly supported (if at all). Basically you can expect both Supermicro and Mellanox to tell you to figure it out yourself when you run into any issues with InfiniBand.

Posted by ewitte, 09-24-2010, 03:02 PM
It certainly has been fun (j/k) figuring it out so far. Lost like 10 hours Sunday in the blink of an eye. Really though, it's not that much different from other platforms. Probably 9/10 times I call for support I stump them anyway.


