Good write up: Super scale out for flash

Posted: Wed Apr 22, 2015 3:38 pm
by Sabre
The Reg
Scale-out flash arrays sound excessive but they are really not. After all, we can understand scale-out filers, adding node after node to store rapidly growing file populations.

Use cheap and deep disk for the data, with flash stashes used to hold the metadata and locate files fast. When the files are large then sequential access to and from disk is pretty fast as well.

But won't scale-out flash filers be monstrously expensive? Overkill, surely? Let’s have a look.

Scale out is less expensive than scale up. Instead of having a single multi-controller head and a complex backbone network fabric, as in monolithic DS8000/VSP/VMAX-style arrays, a scale-out design generally employs multiple independent nodes organised into a cluster that operates as a single system.

It means that when you get the array you don’t have to estimate how big it is going to become and buy all that capacity up front. For example, you might need 100TB now, another 120TB next year, 150TB more the year after, 180TB after that and 200TB in year five – a total of 750TB.

Buy all that in year one and you have lots of capacity sitting idle. With scale out you can buy chunks of storage that are better tuned to how much you need when you need it.
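The saving is easy to put numbers on. Here's a minimal sketch of that purchasing comparison; the yearly capacity chunks come from the example above, while the flat $/TB price is an illustrative assumption (real pricing usually falls over time, which makes deferring purchases look even better):

```python
# Compare buying all capacity up front vs. buying it as needed.
# Yearly chunks (TB) are from the worked example; the flat price
# per TB is a made-up figure purely for illustration.

yearly_chunk_tb = [100, 120, 150, 180, 200]  # capacity added each year
price_per_tb = 500                            # assumed flat price, dollars

total_tb = sum(yearly_chunk_tb)               # 750 TB over five years
upfront_cost = total_tb * price_per_tb        # scale-up: pay it all in year one

# Under the up-front plan, anything beyond the cumulative need sits idle.
installed = 0
idle_tb_years = 0
for chunk in yearly_chunk_tb:
    installed += chunk
    idle_tb_years += total_tb - installed     # idle capacity during that year

print(f"Five-year total: {total_tb} TB")
print(f"Up-front cost at ${price_per_tb}/TB: ${upfront_cost:,}")
print(f"Idle TB-years avoided by buying incrementally: {idle_tb_years}")
```

With those numbers the up-front buyer pays for 750TB in year one and carries 1,760 idle TB-years of shelfware over the five years; the scale-out buyer pays for each chunk only when it is needed.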

...
Good stuffs :)

Re: Good write up: Super scale out for flash

Posted: Mon May 11, 2015 2:07 pm
by complacent
that is a good write up. there's a lot of innovation surrounding these issues now. i caught a presentation not too long ago from these guys - coho data - and was impressed with their take on scale.

like you said, good stuffs. :)

Re: Good write up: Super scale out for flash

Posted: Tue May 12, 2015 5:36 pm
by Sabre
That's pretty interesting stuff! Their servers look like rebranded SuperMicros, not that that's a bad thing. I've looked at OpenFlow before, but have yet to play with anything that implements it.