Distributed File System Choices: Red Hat Storage, GFS2, & pNFS

Red Hat has several options for storage needs — GFS2, CIFS, (p)NFS, Gluster.  It’s all about the right tool for the job.

http://www.redhat.com/summit/sessions/index.html#103

RHEL Resilient Storage – GFS2

Shared storage, scales up to 100TB per instance, supports 2-16 nodes in the cluster, x86_64 only.

Performance is directly related to server and storage class, and to access sets — best case is where each node generally accesses its own filesystem area, worst case is where nodes are fighting for locks on the same files/directories.
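
For reference, creating a GFS2 filesystem looks roughly like the sketch below; the cluster name, device path, and journal count (one journal per mounting node) are placeholders, not from the session:

    # -p lock_dlm selects cluster-wide locking; -t is <clustername>:<fsname>;
    # -j sets the journal count, one journal per node that will mount it
    mkfs.gfs2 -p lock_dlm -t mycluster:gfs2vol -j 4 /dev/clustervg/gfs2lv

    # with the cluster infrastructure running, mount like any other filesystem
    mount -t gfs2 /dev/clustervg/gfs2lv /mnt/shared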

RHEL NFS Client

It’s an NFS client — what else needs to be said?

pNFS

The metadata server tells you where to go to get your data, and can even reference data across multiple tiers — NFS or directly to the SAN.
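
On the client side, pNFS rides along with NFSv4.1.  A minimal sketch of a mount, with a hypothetical server and export:

    # mounting with NFSv4.1 enables the pNFS client when the server supports it
    # (newer kernels also accept -o vers=4.1)
    mount -t nfs4 -o minorversion=1 mds.example.com:/export /mnt/pnfs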

Performance can be enhanced with large write caches and internally tiered storage.  pNFS is suitable for most workloads, even transactional DBs (using O_DIRECT), but not so good for highly random read workloads.
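
A quick way to exercise O_DIRECT from the shell is dd’s direct flag, which bypasses the page cache; the path and sizes here are arbitrary:

    # oflag=direct opens the output file with O_DIRECT, skipping the page cache
    dd if=/dev/zero of=/mnt/pnfs/testfile bs=1M count=100 oflag=direct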

Red Hat Storage (Gluster + utilities, support, and blueprint)

RHS is a standalone product.  RHS servers in a supported configuration use RAID6 local storage, XFS filesystems, and dedicated RHEL 6.x plus Gluster software, all deployed on commodity hardware (reducing cost).

Clients use Gluster for scalability; NFS or CIFS for simplicity.

Performance is improved by scaling horizontally across servers.  But there is no write cache, and Gluster is a user-space filesystem with some overhead from context switching.  Likely not suitable for big I/O (databases, mail servers), but great for big unstructured data.

Scales to 6PB (or 3PB mirrored), and can add capacity dynamically.
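
To give a feel for the Gluster side, here’s a rough sketch of building and growing a replicated volume; server names, brick paths, and the volume name are all hypothetical:

    # from server1: add a second server to the trusted pool
    gluster peer probe server2

    # create a 2-way replicated volume across bricks on both servers, then start it
    gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start myvol

    # grow the volume later by adding bricks in multiples of the replica count
    gluster volume add-brick myvol server3:/bricks/b1 server4:/bricks/b1

    # clients mount with the native Gluster client (or NFS/CIFS for simplicity)
    mount -t glusterfs server1:/myvol /mnt/gluster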

SELinux for Immortals

SELinux provides *very* strong sandboxing capabilities.  In very simplistic terms — access control can now be applied not just at the filesystem, but also to network access and X access, with enforced and automatic chroots that are cleaned up when the process ends, all with fine-grained audit logging.

But — this ain’t your grandma’s chmod.  There is some significant complexity.  But, depending on your risk model, it may very well be worth it.

Note — check out the sandbox util (policycoreutils-python) with the X capabilities (policycoreutils-sandbox).  It provides a great tool for running individual processes under extremely tight lock-down.
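
Typical usage looks something like this (package names as above; the evince example is just an illustration):

    # run a command in a tightly confined SELinux sandbox: no network, and
    # no filesystem access beyond private temp directories
    sandbox id

    # with policycoreutils-sandbox installed, -X runs a GUI app inside its own
    # X server, with throwaway home and tmp directories cleaned up on exit
    sandbox -X evince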

Application High Availability in Virtual Environments

http://www.redhat.com/summit/sessions/index.html#394

Great discussion around Red Hat’s solutions for clustering, fencing, etc., in virtualized environments.

Fencing is /very/ important for shared resources, especially disk.  In a virtualized world (RHEV, VMWare, etc.), fencing tools can reach right into the hypervisor to kill a failed node in a cluster.  Similarly, ILO, RSA, DRAC, etc. can be used to kill power to physical servers.  Either way, before another node in a cluster takes over the shared resource, it is *critical* that the failed node is killed.  But obviously — this is an easy way to shoot yourself in the foot.  As the presenters just said – “test, test, and test some more” to make sure your fencing parameters align with your deployment.
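
As a concrete sketch of that testing, most fence agents can be run by hand before being wired into the cluster config; the agent choice, IP, and credentials below are hypothetical:

    # verify the fence device responds before trusting it in cluster.conf
    fence_ipmilan -a 10.0.0.10 -l admin -p secret -o status

    # forcibly power the node off, exactly what the cluster will do on failure
    fence_ipmilan -a 10.0.0.10 -l admin -p secret -o off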

Simplified VDI with Red Hat Enterprise Virtualization for Desktops

The Red Hat VDI solution has come a long way … client-side rendering and media off-load, built right on top of the RHEV stack (no separate infrastructure!), and the user portal is part of the package (no additional purchase!).

Comparisons between VMWare View and XenDesktop show rough feature parity, but Red Hat VDI appears *much* less expensive, and can provide both Windows and Linux virtual desktops.

http://www.redhat.com/summit/sessions/index.html#5

And check out the Red Hat TCO/ROI comparison tool:

https://roianalyst.alinean.com/ent_02/AutoLogin.do?d=482903301639024770

However – a critical feature is still missing.  While Red Hat VDI looks like a great replacement for desktops, there is no iPad client yet.  For many, this may be the killer.  It is on the near-future roadmap though!

Adopt the cloud, kill your IT career

http://www.infoworld.com/print/195144

…You might be eager to relinquish responsibility of a cranky infrastructure component and push the headaches to a cloud vendor, but in reality you aren’t doing that at all. Instead, you’re adding another avenue for the blame to follow. The end result of a catastrophic failure or data loss event is exactly the same whether you own the service or contract it out. The difference is you can’t do anything about it directly. You jump out of the plane and hope that whoever packed your parachute knew what he or she was doing….

Using the UITS SSH Gateway

Adding the following to your ~/.ssh/config will cause all SSH access to servers named *.uits.uconn.edu to hop through ssh.uits.uconn.edu, authenticating as your NetID.  Note that if you have kinit’d as <NETID>/admin, or if you have copied your public SSH key to ssh.uits.uconn.edu and are using ssh-agent, this will be transparent.

Host ssh.uits.uconn.edu
    ProxyCommand none

Host *.uits.uconn.edu
    ProxyCommand ssh -A <NETID>@ssh.uits.uconn.edu exec nc %h %p
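
To set up the transparent (key-based) path, copy your key to the gateway and load it into an agent first.  A quick sketch:

    # one-time: install your public key on the gateway
    ssh-copy-id <NETID>@ssh.uits.uconn.edu

    # per-session: start an agent and load your key so -A forwarding works
    eval $(ssh-agent)
    ssh-add

    # now this hops through the gateway automatically
    ssh <NETID>@somehost.uits.uconn.edu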

Cross-posting …

Took me 15 minutes of searching and trying, but I eventually figured out how to get all posts tagged “linux” from my personal blog to be cross-posted here.

Check out the FeedWordPress plugin for WordPress — simply supply the remote RSS URL, and FeedWordPress will import posts directly into your WordPress site.