What a year!!

Two years at Red Hat have flown by at such a rapid pace; I’m even a couple months late in blogging about it! To top off another year full of great projects, awesome technology, and incredible people, I’ve had a couple exciting things happen that I wanted to share.

Last month I was chosen to be a “Red Hat Chairman’s Award” recipient, and last week (photo below) I was awarded “North American Account Solutions Architect of the Year”!

image

I am truly humbled by both of these awards, and was speechless when I heard my name called (those who know me well understand how rare that is). I am looking forward to seeing what this next year will bring!

More time in Raleigh, more snow in Connecticut!

So yet again, this blog post comes as I sit in the RDU airport, leaving behind sunny blue skies and ~50 degree temperatures to arrive just before midnight in cold, wet snow — still, it is good to get home!  I’ve been in Raleigh again for a couple of days, this time for Red Hat IdM (Identity Management) training and Value Selling training.  Both were great sessions — of course I enjoy good technical sessions, but I learned quite a bit from the Sales class too.  It really opened my eyes to a few bad habits, which I can now work to address.

Red Hat’s IdM is pretty cool — I wish I had known more about it sooner.  It comes with Red Hat Enterprise Linux, and provides a unified LDAP, Kerberos, DNS, and CA solution.  Plus, when used in combination with SSSD, you get a nicely managed user/group/policy solution consistent across your RHEL servers and desktops (and to a lesser extent, other *nix systems that support Kerberos and LDAP).  I have been a longtime proponent of OpenLDAP and MIT Kerberos, and still believe they are each very powerful solutions — but Red Hat’s IdM is a very cohesive suite that ties multiple functions and technologies together very well.
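
For anyone curious, standing up a lab instance is roughly this simple (a minimal sketch from memory; the hostnames are made up, and the exact prompts and flags vary a bit by RHEL release):

yum install ipa-server                                              # on the IdM server
ipa-server-install                                                  # interactive setup of the LDAP, Kerberos, DNS, and CA pieces
yum install ipa-client                                              # on each RHEL client
ipa-client-install --domain=example.com --server=idm.example.com    # enrolls the host and configures SSSD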

Now to really get down to studying for the upcoming RHCE exam — wish me luck!

New York City


I don’t get into New York City too often, but I had the opportunity this week to check out Red Hat’s office.  I took the Amtrak in and out of Penn Station, and a quick subway hop to the office.  Just a stone’s throw from Wall St, Red Hat is waaaay up on the 24th floor.  I got to meet most of the SA team for the NE region, and we headed over to Suspenders for dinner, followed by drinks at the Pegu Club, and just had a real good time talking for several hours.

What a smart, active bunch — it is awesome (but still a bit intimidating) to be part of this!

Amtrak Rocks!

I’m taking the train from New London into Penn Station.  I haven’t been on a train in years, but this is great!  Free WiFi, free power, smooth ride — I’m getting a ton of work done.  I’ll definitely be doing this more!

Done with New Hire Orientation

Time to update the WordPress headshot — I now have my Red Fedora!

I’m sitting in the RDU airport, wrapping up three days in Raleigh – two for New Hire Orientation, and one for training on the sales tool.  Ran into David Huff today in the new Red Hat Tower, who echoed a common theme that I’m hearing from Red Hatters, triggering this blog post.  Here are a few of the things I’ve heard:

  • “I love it here!”
  • “This is an awesome place!”
  • “I’m where I want to be!”

There is an excitement here: people working towards common goals, within a common set of philosophies, understanding how they each contribute to the success of the company.  The work is interesting, challenging, even enjoyable, and the goals are admirable.

Every session at orientation ended with the presenter saying “Welcome to Red Hat!” — and I believe they mean it!

I am incredibly excited to be on board – what a great place to be!

UPDATED WITH SLIGHTLY IMPROVED PHOTO OF MY NEW FEDORA — THANKS JOSH ;-p  

Build a PaaS using Open Source Software

Discussion about OpenShift.  OpenShift has been fully open-sourced, available on GitHub for local deployment, or directly usable as a hosted solution.

Rule #1: IaaS != PaaS

The mapping of virtual machines to applications is not necessarily 1:1.

Rule #2: PaaS is not a silver bullet

Great for self-service deployment of applications and varied, volatile workloads (development, testing, scale-up/out), with tightly constrained application rules — which implies standardized deployments from templates.

Rule #3: PaaS is about developers — AND OPERATIONS!!!!

Operations becomes about capacity planning, not ticket-driven activities.

Rule #4: Be ready to learn

Developers want language variety, scaling models, and integration models — and they want it all automagically.

Operations want multi-tenancy, familiar installation, and sane configurations — all reproducible.

What is an application?

Runtime (OpenShift cartridges)

Code (One Git repository per application)

Creating an App

The rhc tools are used to create a namespace (domain), then an application (with a name and cartridge type), and finally to push the code.
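
From the demo, the workflow looks roughly like this (a sketch of the rhc client usage; the domain, app name, and cartridge are hypothetical, and the exact flags varied between rhc releases):

rhc domain create mydomain         # one-time namespace setup
rhc app create myapp php-5.3       # creates the app and clones its Git repository locally
cd myapp
git push                           # pushing committed code triggers the build and deploy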

What do you get from public OpenShift?

A slice of the server, a private Git repository, deployment access.

The PaaS service is composed of a Broker (the RESTful front-end that directs everything) and Nodes.  Each node has multiple “gears” (containers secured with SELinux, constrained with cgroups, and isolated with kernel namespaces and bind mounts).

Extending OpenShift

Custom DNS plugins, auth plugins, security policies, and community cartridges.  Quick-start frameworks can be offered to the community too.

LXC and SELinux are the future for isolating and securing OpenShift…

… but right now, there are many moving parts being used to provide isolation and security.

PaaS demands a new security model

DAC just won’t cut it; it is too complicated for PaaS.  MAC (SELinux!) is necessary.

Step 1 – Unlearn this (and embrace SELinux)!

setenforce 0    # switches SELinux to permissive mode (the habit to unlearn)

Step 2 – Learn the ‘Z’ (to see SELinux contexts)

ls -lZ     # show the SELinux context on files
ps -efZ    # show the SELinux context on processes

(Review of SELinux contexts and syntax provided)
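
A context breaks down as user:role:type:level; for example, output along these lines (illustrative):

ls -lZ /var/www/html/index.html
-rw-r--r--. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/index.html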

http://fedoraproject.org/wiki/SELinux

Demo – deployment of WordPress to OpenShift, in a VirtualBox LiveCD

The OpenShift QuickStart is available here: https://github.com/openshift/wordpress-example

Campground: CloudForms + Splunk

Great co-hosted Red Hat & Splunk discussion about CloudForms-Splunk integration!

Goal: Measure CloudForms utilization by date/time, by user, by cloud provider, and totals.

A simple rsyslog config sends the right data over to Splunk; then just add the “Splunk for Red Hat CloudForms” app — the metrics stated in the above goal are there, right out of the box.  It really is (or at least seems to be) that easy!
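
The rsyslog side is just a forwarding rule on the CloudForms host, along these lines (a sketch; the Splunk hostname and port are made up, and you would normally scope it to the relevant facility rather than forwarding everything):

*.*  @@splunk.example.com:514    # @@ forwards over TCP; a single @ would use UDP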

And yes — the Splunk guys know Steve Maresca (and UConn) *very* well.

Distributed File System Choices: Red Hat Storage, GFS2, & pNFS

Red Hat has several options for storage needs — GFS2, CIFS, (p)NFS, Gluster.  It’s all about the right tool for the job.

http://www.redhat.com/summit/sessions/index.html#103

RHEL Resilient Storage – GFS2

Shared storage, scales up to 100TB per instance, supports 2-16 nodes in the cluster, x86_64 only.

Performance is directly related to server and storage class, and to access sets — best case is where each node generally accesses its own filesystem area, worst case is where nodes are fighting for locks on the same files/directories.
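
For reference, carving out a GFS2 filesystem looks roughly like this (a sketch; the cluster name, volume, and journal count are hypothetical, and you need one journal per cluster node):

mkfs.gfs2 -p lock_dlm -t mycluster:data -j 4 /dev/vg_shared/lv_data
mount -t gfs2 /dev/vg_shared/lv_data /mnt/data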

RHEL NFS Client

It’s an NFS client — what else needs to be said?

pNFS

The metadata server tells you where to go to get your data, and can even reference data across multiple tiers — NFS or directly to the SAN.

Performance can be enhanced with large write caches and internally tiered storage.  It is suitable for most workloads, even transactional DBs (using O_DIRECT), but not so good for highly random read workloads.
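
On the client side, pNFS is just an NFS v4.1 mount, something like this (the server and export path are made up, and the exact option spelling varies by RHEL release):

mount -t nfs -o v4.1 nfs-server:/export /mnt/data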

Red Hat Storage (Gluster + utilities, support, and blueprint)

RHS is a standalone product.  RHS servers in a supported config use local RAID6 storage, XFS filesystems, and dedicated RHEL 6.x plus Gluster software, all deployed on commodity hardware (reducing cost).

Clients use the native Gluster client for scalability; NFS or CIFS for simplicity.
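
Mounting looks about the same either way (the hostname and volume name are hypothetical):

mount -t glusterfs rhs1:/vol01 /mnt/vol01        # native client, spreads I/O across the cluster
mount -t nfs -o vers=3 rhs1:/vol01 /mnt/vol01    # NFS, simpler, but all traffic flows through one server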

Performance is improved by scaling horizontally across servers.  But there is no write cache, and Gluster is a user-space filesystem with some overhead from context switching.  Likely not suitable for big I/O (databases, mail servers), but great for big unstructured data.

Scales to 6PB (or 3PB mirrored), and can add capacity dynamically.

SELinux for Immortals

SELinux provides *very* strong sandboxing capabilities.  In very simplistic terms — access control can now be applied not just at the filesystem, but also to network access and X access, with enforced, automatic chroots that are cleaned up when the process ends, all with fine-grained audit logging.

But — this ain’t your grandma’s chmod.  There is some significant complexity.  But, depending on your risk model, it may very well be worth it.

Note — check out the sandbox utility (policycoreutils-python) with the X capabilities (policycoreutils-sandbox).  It provides a great tool for running individual processes under extremely tight lock-down.
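
A couple of illustrative invocations (the type shown is one of the stock sandbox types; adjust to taste):

sandbox cut -d: -f1 /etc/passwd        # run a one-off command in the default, tightly confined sandbox domain
sandbox -X -t sandbox_web_t firefox    # X sandbox with its own temporary home/tmp, confined to web-browsing access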