Trevor
Commander
  • 350 Views

Disk RAID Array Usage

Anyone out here still using RAID array disk configurations?

Trevor "Red Hat Evangelist" Chandler
10 Replies
TM
Starfighter
  • 311 Views

Hi @Trevor 

It has been a very long time since I used RAID (with the mdadm command).

Nowadays the RAID controllers on servers do a better job, offloading some duty from the main processors, enhancing I/O by caching, and even providing some recovery capability with dedicated batteries.

Tshimanga

Trevor
Commander
  • 303 Views

Thanks TM for your response!

Trevor "Red Hat Evangelist" Chandler
Chetan_Tiwary_
Community Manager
  • 260 Views

@Trevor I worked on it a long while back for a client - do you have any particular concern or query?

Trevor
Commander
  • 256 Views

Chetan, I'm just wondering if I should begin to phase it out as a storage solution, and instead place my focus on 21st-century stuff like software-defined storage?  I know once upon a time, RAID was the "golden child".
It hasn't been put out to pasture just yet; however, I'm just one of those persons who wants to ensure that I'm engaged more with the future than the past.

Trevor "Red Hat Evangelist" Chandler
Travis
Moderator
  • 251 Views

@Trevor -

It really depends on your use case. I used RAID extensively at previous positions with high-performance hardware RAID adapters and multiple disks. Leveraging those, I specifically tuned the filesystem to the RAID devices based on disk size, number of devices, bus, etc. to get the best stripe sizes and the best read/write speeds, including putting the XFS log on a separate logging disk, with multiple iterations of testing.
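
To give a flavor of that kind of tuning, here is a rough sketch of matching XFS to the array geometry and using an external log. The 256k stripe unit, the 8 data disks, and the device names are assumptions for illustration, not the exact values I used:

mkfs.xfs -d su=256k,sw=8 -l logdev=/dev/sdc1,size=128m /dev/sdb   # stripe unit/width match the RAID layout; log lives on a separate disk
mount -o logdev=/dev/sdc1 /dev/sdb /data                          # the external log device must also be given at mount time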

Personally, I'm currently using software RAID at home on my NAS devices to get speed, performance, and redundancy.

Software-defined storage can be a bit of a complicated concept, as technically mdadm and software-based RAID implementations could be considered software-defined storage. I'm assuming you are referring more to network-based software-defined storage like Ceph and maybe the older GlusterFS. Possibly you are thinking of just object storage, where things are stored as objects.

So things to think about ... how is the storage connected and what will the software-defined storage be used for ...

  • do you need a bunch of storage?
  • do you have a high-speed network to support the speeds?
  • how much storage do you need, and can you recover from failures?
  • what type of redundancy is needed?
  • what will the storage be used for?

So personally, I have a software RAID6 on my one NAS device, as I need fast disks but also the reliability to recover from failures. I have a multi-gig bonded network going to the device so I can have higher-speed network fileshares. This gives me a device with a lot of storage. I'm able to run containers and VMs directly on the NAS, so I benefit largely from the use of local disks without needing to worry about the network. However, I also have a setup with network bandwidth that can almost match the native speed of the RAID on the NAS, so my various other devices in the home can leverage the large amount of storage.
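
For anyone following along, a minimal sketch of what a software RAID6 like that looks like with mdadm (the device names here are placeholders, not my actual layout):

mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]   # 6-disk RAID6: two disks' worth of parity
mkfs.xfs /dev/md0                                                 # put a filesystem on the new array
mdadm --detail --scan >> /etc/mdadm.conf                          # persist the array definition
cat /proc/mdstat                                                  # watch the initial resync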

I've also got some RAID0 for my "junk" or temporary storage for building. That is for sheer speed, so I get the highest read/write rates out of those devices. So it really boils down to your use case and how/why you want the other storage solutions.

I have also run both Ceph and Gluster, as I used to maintain a 3-system RHV/oVirt hyperconverged cluster with Gluster. I can tell you I wasn't able to get the speed gains fully, as I was limited by network connectivity. Both Ceph and Gluster will depend on disk speeds, disk configurations, and network connectivity. Bonded NICs and higher-speed networks are better for this type of storage, as is configuration of things like jumbo frames.
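
As a rough illustration of the jumbo-frames piece, assuming a NetworkManager connection profile named "storage-net" and a placeholder storage host:

nmcli connection modify storage-net 802-3-ethernet.mtu 9000   # enable jumbo frames (MTU 9000) on the storage link
nmcli connection up storage-net                               # re-activate the profile so the new MTU applies
ping -M do -s 8972 <storage-host>                             # verify the path really carries 9000-byte frames (8972 payload + headers)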

I will say, now that I've upgraded my network a bit at home and have a few systems with 10Gb NICs, I may look at doing a Ceph implementation once again, as this will allow easier "cloud" storage locally for things like OpenShift, and with Ceph I can have multiple storage backends supported.

I will say the main storage solution I no longer rely on as much is iSCSI, as I've pretty well moved on to other solutions.

Travis Michette, RHCA XIII
https://rhtapps.redhat.com/verify?certId=111-134-086
SENIOR TECHNICAL INSTRUCTOR / CERTIFIED INSTRUCTOR AND EXAMINER
Red Hat Certification + Training
Trevor
Commander
  • 245 Views

Yeah Travis, I recall when RAID burst onto the storage scene - it was all the rage.  I mean it moved right to the head of the class in terms of storage, due to all of its prowess - fault tolerance, speed, and capacity.  When I was doing my Sys Admin thing for SunOS systems, if some variation of RAID wasn't implemented, I was going to be cited for negligence!

In the classroom, when I got to the chapters on storage, you better believe that I was going to serve up a heavy dose of RAID - 0, 1, 5, 6, 10.  There was no escape.  This might explain why, even today, I've got a pile of drives in my garage.

Well, I haven't been on the ground (i.e. in the trenches, in the practical world) for a couple of years, and I'm not as close to the day-to-day as I used to be.  So, I have to reach out to the cavalry (i.e. Travis Michette, Chetan Tiwary, et al.) for an update on things like this.

Thanks for coming to the rescue one more time!!!!

 

 

Trevor "Red Hat Evangelist" Chandler
Chetan_Tiwary_
Community Manager
  • 251 Views

@Trevor I wouldn't say RAID is dead. Also, RAID is fundamental, and learning it will enrich you with concepts like redundancy, mirroring, disk parity, performance scenarios/tuning, and disk-failure handling.  Hence it is definitely not a waste.
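
For the disk-failure-handling piece, a quick sketch of the kind of exercise that teaches it, with hypothetical device names:

mdadm /dev/md0 --fail /dev/sdc1     # mark a member disk as failed
mdadm /dev/md0 --remove /dev/sdc1   # pull it out of the array
mdadm /dev/md0 --add /dev/sdd1      # add a replacement disk
watch cat /proc/mdstat              # follow the rebuild progress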

You are right about investing more time in SDS - Ceph, ODF and K8s storage. 

https://www.redhat.com/en/topics/data-storage/software-defined-storage 

https://www.redhat.com/en/topics/data-storage/why-choose-red-hat-storage 

Trevor
Commander
  • 245 Views

Chetan, I'm right there with you when you say that RAID isn't dead.  With the benefits it offers, I have no doubt that it has many years of life remaining.  Not seeing it out front in learning resources as much as I once did is what prompted my inquiry.

Well, as always, you've provided a response that will allow me to put a bow on this query!

Thanks Chetan!!!!

Trevor "Red Hat Evangelist" Chandler
Travis
Moderator
  • 230 Views

@Trevor -

The reason you don't see it out front in "learning resources" is that RAID is old, boring, and no longer "sexy". Most of the learning resources are dedicated to the new, shiny apps that have "hype" around them, because that is what the majority of people look for. Unfortunately, this is the way most things go: even though there are really important concepts for things like storage - understanding filesystems, RAID, etc. - they get ignored because most people are searching for the "newest" and "coolest" things.

I don't know that I've seen much about hdparm for disk analysis and setting various parameters, or about filesystem tuning, but all of these things are important. What's more, even with software-defined storage you are still leveraging a physical disk, so knowing things at a lower level can help and can significantly improve the performance of your storage array no matter how it is implemented.
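
As a quick illustration of the hdparm piece (the device name is a placeholder):

hdparm -I /dev/sda    # identify the drive: model, cache size, supported features
hdparm -tT /dev/sda   # rough benchmark of cached vs. buffered read speeds
hdparm -W /dev/sda    # check whether the drive's write cache is enabled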

Take Ceph, for example: if you provide block devices, you will still need to format that Ceph-provided block device (RBD). So while it is a high-performance network block device provided by Ceph, it is still getting partitioned/formatted for the OS to use it. What filesystem do you use? What block size do you use? Are you providing jumbo frames to the system to eliminate network bottlenecks? Are you using high-speed bonded connections?

How is Ceph configured? Thin provisioned? Ceph object size changed?
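
To make that concrete, a hedged sketch of creating an RBD image with an explicit object size (the pool and image names are made up):

rbd create mypool/block01 --size 100G --object-size 4M   # RBD images are thin provisioned by default
rbd map mypool/block01                                   # exposes the image as /dev/rbdX on the client
rbd info mypool/block01                                  # confirm the object size the filesystem should match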

So you might think about XFS and matching the Ceph object size:

mkfs.xfs -d su=4k,sw=1024 /dev/rbd0  (su=4k x sw=1024 gives a 4MB stripe, matching Ceph's 4MB object size, so the filesystem on the OS will line up with Ceph.)

Think about older Windows systems with scandisk and DEFRAG. 

What about /etc/fstab and mount options?

/dev/rbd0 /mnt/ceph-rbd xfs defaults,noatime,nodiscard,logbsize=256k 0 0

What about things like TRIM? Most people think about this only for SSDs - it isn't discussed in our learning anymore - but think about it for Ceph ...

An XFS file gets deleted from a thin-provisioned Ceph image. The only way Ceph knows about it is if a discard signal is sent, but we've optimized with "nodiscard" because online discard is slow (it is a synchronous operation in Ceph), so now we are stuck ... or are we?

On the client side you can use fstrim.timer, a systemd timer that can run once a week or so and trim the filesystem so Ceph can reclaim the space. Now you aren't doing the TRIM each time a file is deleted.
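
A minimal sketch of that setup (the mount point matches the fstab example above):

systemctl enable --now fstrim.timer   # periodic fstrim on mounted filesystems that support discard
systemctl list-timers fstrim.timer    # confirm when the next run is scheduled
fstrim -v /mnt/ceph-rbd               # or trim the RBD-backed mount manually and see how much space was reclaimed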

 

Travis Michette, RHCA XIII
https://rhtapps.redhat.com/verify?certId=111-134-086
SENIOR TECHNICAL INSTRUCTOR / CERTIFIED INSTRUCTOR AND EXAMINER
Red Hat Certification + Training