Knowledgebase

Provider with best Read/Write setup

Posted by tumble, 10-14-2010, 06:36 PM
At this point I run and operate three servers: two for customers and one for support/billing and backups. What's important to me is read/write performance. I host some very busy forums and I cannot degrade the performance for these customers. From my looking around, though, a few providers have a strong belief that a well-built cloud will have no issues with read/write. I am looking into consolidating all my servers into some sort of cloud computing, and I believe that sometime in the future I would like to start offering VPS as a service as well.

My current specs, viewed as a single operating system:
- 16 cores
- 24 GB RAM
- RAID 10, 8 SAS drives, 500 GB usable storage
- 1 SATA drive, 500 GB usable storage
- 2 TB bandwidth (tier 1 only)
- Cost per month: 532 USD

Providers I have looked through and talked with:
- Storm on Demand -> Bare Metal Servers? (not even sure why I was directed toward that option)
- Rackspace -> Cloud -> VMware (just finished talking with them via phone)
- CloudWeb -> Cloud -> OnApp (waiting for a call back)
- SoftLayer -> Cloud or Bare Metal

According to the SoftLayer chat agent: "Both public and private use a backend SAN solution for storage. This is what gives it the built-in redundancy and automated failover. Because of this, read and write speeds are not as fast because the storage is not local."

So is the cloud really any good for read/write operations? And if so, what is the best provider for read/write-heavy applications? Thanks for your thoughts and input.

Posted by Winky, 10-14-2010, 07:45 PM
The fastest read/writes can be achieved using a Fibre Channel SAN with SSD drives in a RAID10 configuration. I know a provider that can deliver this for you. Would you mind if I PM'd you the name of the provider?

Posted by JasonD10, 10-14-2010, 07:55 PM
Hello. To put it simply, cloud is a software implementation of existing technologies. The reason I say that is to show you that the cloud would not be at fault for poor I/O in a given application; it would simply be the provider's own choice of implementation and offering of their cloud services. A cloud can be built using any type or grade of hardware, which will give a varying degree of I/O.

So, if any system (cloud or not) is using remote SANs, you're going to have latency as well as other potential limitations or issues. Some SANs will do just fine, but it's always important to note how that particular provider is built. In your case, you have stated that I/O is a primary concern for your needs. Cost is also going to come into play: when a provider is using SANs (and they will need to be redundant, highly available SANs), it's more than likely going to be more costly for them to serve your data than with a cloud infrastructure built on local storage. Not always, but usually.

So ask your potential providers about their storage infrastructure. If they're using SANs, it's also important to know how many servers/clients access them. Performance could be ultra fast one day and then keep degrading: they might start with just two servers connected, and after a few months there are 20 using the same set of SANs, or other high-I/O customers on them.

From what I've seen, in both neutral reviews from outside sources and my own research, a cloud built using local storage (local in the sense of drives in each server in the cloud, not external SANs, but instead creating an IP SAN) has given the best results. But again, this also depends on the hardware used. You can build that out of ATA, SATA, SAS, SSD, etc., just as you can a SAN, so local isn't always better than a SAN either. It's all about the details.

PS: We don't use OnApp, but we do use AppLogic as one of our primary software platforms.
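To make the "ask and then verify" advice above concrete, here is a minimal latency probe you could run on a trial VM from each provider before committing. It is only a sketch: the file name, working-set size, block size, and sample count are placeholder assumptions, and the random-read numbers will be flattered by the page cache unless the file is much larger than the VM's RAM.

import os
import random
import time

TEST_FILE = "io_probe.dat"        # hypothetical scratch file on the volume under test
FILE_SIZE = 256 * 1024 * 1024     # 256 MB working set (assumption)
BLOCK = 4096                      # 4 KB blocks, a typical database page size
SAMPLES = 200                     # number of timed operations per test

def timed_sync_writes(path):
    """Time small random writes, each followed by fsync(), approximating commit latency."""
    latencies = []
    fd = os.open(path, os.O_WRONLY)
    try:
        for _ in range(SAMPLES):
            os.lseek(fd, random.randrange(0, FILE_SIZE - BLOCK), os.SEEK_SET)
            start = time.perf_counter()
            os.write(fd, b"\0" * BLOCK)
            os.fsync(fd)              # force the write down to stable storage
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return latencies

def timed_random_reads(path):
    """Time random 4 KB reads across the working set."""
    latencies = []
    fd = os.open(path, os.O_RDONLY)
    try:
        for _ in range(SAMPLES):
            os.lseek(fd, random.randrange(0, FILE_SIZE - BLOCK), os.SEEK_SET)
            start = time.perf_counter()
            os.read(fd, BLOCK)
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return latencies

if __name__ == "__main__":
    # Fill the working set with real data first so the reads have blocks to hit.
    with open(TEST_FILE, "wb") as f:
        chunk = os.urandom(1024 * 1024)
        for _ in range(FILE_SIZE // len(chunk)):
            f.write(chunk)
    writes = timed_sync_writes(TEST_FILE)
    reads = timed_random_reads(TEST_FILE)
    print("avg fsync'd write latency: %.2f ms" % (1000 * sum(writes) / len(writes)))
    print("avg random read latency:   %.2f ms" % (1000 * sum(reads) / len(reads)))

Run the same probe on a local-storage node and a SAN-backed one, on a quiet day and a busy one, and the difference (or lack of it) will tell you more than the sales call.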

Posted by tumble, 10-14-2010, 09:36 PM
I stand corrected, CloudWeb does use AppLogic. And it really makes sense about the cloud being built with local storage vs. outside storage. As stated, SoftLayer directed me away from cloud toward a dedicated server. Rackspace felt that their setup of external SANs would have no issues whatsoever with read/writes. Storm on Demand directed me to bare metal servers, and I really don't have the faintest idea how those correspond to a cloud. @Winky, sure, shoot me a link. @CloudWeb (who uses AppLogic, LOL), you are pretty high on my list. Frank

Posted by brentpresley, 10-14-2010, 09:51 PM
AppLogic is one of the oldest and most robust cloud infrastructure platforms out there. Other companies, not to step on any toes, are still playing catch-up to what it has already implemented. I agree with CloudWeb 100% that your disk I/O is limited only by your SAN implementation, not by the cloud software itself.

Posted by SolarVPS, 10-15-2010, 12:00 AM
tumble,

You received bad information about a SAN being slower than local storage. A properly built SAN is going to blow local storage away in terms of disk I/O. In addition, it's not just how the SAN is built but how the entire network infrastructure was designed. Some questions you should ask are:

1. Do you use multipathing on your front-end hypervisors?
2. Do you use SSD caching on your SAN? If so, do you have both read and write SSD caching?
3. Do you use SAS or SATA drives in your SAN?
4. How fast is the network pipe between your hypervisors and your SAN?

Of course there are a lot more questions that you can and should ask; however, these are a few that all bear directly on how fast the disk I/O will be in your virtual machine. The answers should be: 1) yes; 2) yes and yes; 3) SAS; and 4) at least 4 Gbps. Good luck!

-Ross
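As a rough sanity check on questions 1 and 4 above, here is a small sketch of what a multipathed hypervisor-to-SAN link works out to. The path count, link speed, and per-drive streaming rate are illustrative assumptions, not figures given in this thread:

# Back-of-envelope check on the hypervisor-to-SAN pipe (all figures assumed).
paths = 4                   # iSCSI paths per hypervisor with multipath I/O enabled
link_gbps = 1               # gigabit Ethernet per path

aggregate_gbps = paths * link_gbps            # multipathing multiplies the pipe: 4 Gbps
ceiling_mb_s = aggregate_gbps * 1000 / 8      # ~500 MB/s ceiling between VM host and SAN

# Roughly how many SAS spindles it takes to fill that pipe with sequential reads,
# assuming ~150 MB/s streaming per drive.
SEQ_MB_S_PER_DRIVE = 150
drives_to_saturate = ceiling_mb_s / SEQ_MB_S_PER_DRIVE

print(f"{paths} x {link_gbps} Gbps paths -> ~{ceiling_mb_s:.0f} MB/s to the SAN")
print(f"about {drives_to_saturate:.1f} SAS drives streaming sequentially would fill it")

In other words, a SAN with plenty of spindles and SSD cache can still look slow from inside a VM if the hypervisors only have a single gigabit path to it, which is why the multipathing and pipe-size questions matter as much as the drive type.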

Posted by nix101, 10-15-2010, 02:22 AM
I completely agree with SolarVPS. SSD caching especially will surely improve your SAN performance, and it will also lower your overall SAN cost.

Posted by ewitte, 10-15-2010, 07:08 AM
Interesting, I need to look into caching with SSDs. I've been trying to do the whole array in SSD and it has been getting crazy expensive, although not even 1/10th the cost of the out-of-the-box solutions! It's going to be hard for budget solutions because it's either extremely costly or takes an extreme amount of technical knowledge. I have been in the business for 15 years and am the senior network engineer for one of Houston's top consulting firms, and it is still a lot of work for me to figure out.

Posted by nix101, 10-15-2010, 07:57 AM
http://www.adaptec.com/en-US/product...e-Performance/ This should give you an idea about SSD caching. Even with SATA HDDs you can get great performance.

Posted by ewitte, 10-15-2010, 09:17 AM
Do you think 20 500 GB Constellation drives in RAID 10, together with this setup and 4 32 GB SLC drives, would give good performance for up to 100 users?

Posted by cristibighea, 10-15-2010, 09:24 AM
I really doubt he needs that much space, and unless he's actually using a good chunk of the space provided by those 20 drives, he won't be seeing any performance benefit. Perhaps some optimization on the software side of things might do wonders.

Posted by KansasHosting, 10-15-2010, 09:58 AM
I am giving InfiniBand/ZFS a try for a SAN. I was planning it out and found a guy doing almost exactly the same thing with great success - zfsbuild.com

Posted by FHDave, 10-15-2010, 10:01 AM
Why would Fibre Channel give any advantage over copper iSCSI? There is now 10 Gbps copper iSCSI per port, whereas Fibre Channel is limited to 4 Gbps. Not to mention, Fibre Channel is much harder and more expensive to implement. It seems like technology of the past.

Posted by KansasHosting, 10-15-2010, 10:36 AM
Latency, Protocol Overhead

Posted by ewitte, 10-15-2010, 10:51 AM
I was looking at InfiniBand, but I see something interesting from LSI: the SAS6160 SAS storage switch. "24Gb/s SAS connections, aggregate bandwidth of 384Gb/s."

Posted by SolarVPS, 10-15-2010, 11:26 AM
No, that's not really the preferred way to build it. Check out Nexenta and ZFS. A good number of spindles is nine, so look into 1 TB SAS drives. SLC drives are for write caching, and it's perfectly fine to use MLC for read caching; no point in wasting your money. If you are using ZFS, you don't need to waste your money on expensive RAID cards. A good HBA is perfectly fine. -Ross

Posted by SolarVPS, 10-15-2010, 11:28 AM
InfiniBand is expensive and difficult to set up and manage. You really don't need it. A good stackable gigabit switch architecture and multipath I/O will give you excellent performance. -Ross

Posted by ewitte, 10-15-2010, 11:42 AM
10Gb/20Gb InfiniBand is fairly cheap, especially since the servers I was looking at included it. Yes, I've had a lot of configuration issues, but I have 75% of it nailed in the test configuration and 6 more months left. The LSI switch does look interesting, though. Preferred and optimal are two completely different things. I thought about doing something like NetApp over cheap 1 GbE, but while more servers increase the aggregate bandwidth, they all still connect back to the hypervisors at 1 Gb. Last edited by ewitte; 10-15-2010 at 11:47 AM.

Posted by jayglate, 10-17-2010, 01:07 AM
InfiniBand is great, wonderful actually; however, 20 500 GB Constellation ES drives in a RAID 10 will only give you a rough maximum of around 1,500 IOPS. You can easily cap the IOPS out at about 300 MB/s of read throughput, which is about 3 Gbps, a far cry from a 10/20 Gb InfiniBand maximum. Really think hard about what you need and how much traffic you expect.
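A rough sketch of that spindle math, assuming around 75 random IOPS per 7,200 RPM drive (the per-drive figure and the conversion below are assumptions for illustration, not numbers from the post):

# Back-of-envelope RAID 10 spindle estimate (per-drive IOPS is an assumption).
DRIVES = 20                  # 20 x 500 GB Constellation ES, per the build being discussed
IOPS_PER_DRIVE = 75          # assumed random IOPS for a 7,200 RPM drive

# RAID 10: random reads can hit every spindle; each write lands on two mirrors.
read_iops = DRIVES * IOPS_PER_DRIVE           # ~1,500, matching the estimate above
write_iops = read_iops // 2                   # ~750 for a purely random write load

# Converting the ~300 MB/s read-throughput cap into network terms.
read_mb_s = 300
read_gbps = read_mb_s * 8 / 1000.0            # ~2.4 Gbps on the wire

print(f"random read IOPS : ~{read_iops:,}")
print(f"random write IOPS: ~{write_iops:,}")
print(f"{read_mb_s} MB/s is ~{read_gbps:.1f} Gbps, well under a 10/20 Gb InfiniBand link")

So in that configuration the array, not the interconnect, is the bottleneck, which is exactly the point being made here.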


