Build a PaaS using Open Source Software

Discussion about OpenShift.  OpenShift has been fully open-sourced, available on GitHub for local deployment, or directly usable as a hosted solution.

Rule #1: IaaS != PaaS

Virtual machines to applications is not necessarily a 1:1 mapping.

Rule #2: PaaS is not a silver bullet

Great for self-service deployment of applications and for varied, volatile workloads (development, testing, scale-up/out) with tightly constrained application rules, which implies standardized deployments from templates.

Rule #3: PaaS is about developers — AND OPERATIONS!!!!

Operations becomes about capacity planning, not ticket-driven activities.

Rule #4: Be ready to learn

Developers want language variety, scaling models, and integration models, and they want it all automagically.

Operations want multi-tenancy, familiar installation, and sane configurations — all reproducible.

What is an application?

Runtime (OpenShift cartridges)

Code (One Git repository per application)

Creating an App

The rhc tools are used to create a namespace (domain), then an application space (a name plus a cartridge type), and finally to push the code.
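
A minimal sketch of that flow, with placeholder domain/app names (exact command names and flags vary by rhc version):

rhc domain create -n mydomain
rhc app create -a myapp -t php-5.3
cd myapp
git push

(rhc app create clones the app’s Git repository; hack on the code, commit, and each git push deploys.)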

What do you get from public OpenShift?

A slice of the server, a private Git repository, deployment access.

The PaaS service consists of a Broker (a RESTful directing front-end) and Nodes.  Each node hosts multiple “gears”: containers secured with SELinux, constrained with cgroups, and isolated with kernel namespaces and bind mounts.
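
On a node, that gear isolation is visible with the SELinux-aware tools covered below.  A rough sketch, assuming an Origin-style layout (the /var/lib/openshift path and the domain type names vary by OpenShift release):

ps -efZ
ls -dZ /var/lib/openshift/*

Each gear’s processes and files carry a distinct MCS category pair in the level field of the context, which is what keeps one tenant’s gear from touching another’s.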

Extending OpenShift

Custom DNS plugins, auth plugins, security policies, and community cartridges.  Quick-start frameworks can be offered to the community, too.

LXC and SELinux are the future for isolating and securing OpenShift…

… but right now, there are many moving parts being used to provide isolation and security.

PaaS demands a new security model

DAC just won’t cut it; it is too complicated for PaaS.  MAC (SELinux!) is necessary.

Step 1 – Unlearn this (and embrace SELinux)!

setenforce 0

Step 2 – Learn the ‘Z’ (to see SELinux contexts)

ls -lZ
ps -efZ

(Review of SELinux contexts and syntax provided)
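
For example, contexts take the form user:role:type:level, as in:

system_u:object_r:httpd_sys_content_t:s0

The type field (httpd_sys_content_t here) is what most policy rules key on; the level field carries the MLS/MCS categories used to separate tenants like OpenShift gears.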

http://fedoraproject.org/wiki/SELinux

Demo – deployment of WordPress to OpenShift, in a VirtualBox LiveCD

The OpenShift QuickStart is available here: https://github.com/openshift/wordpress-example

Migrating Workloads to Red Hat Enterprise Virtualization – a Customer Perspective

Presentation by Qualcomm on their experience migrating from Xen on RHEL 5 to KVM on RHEL 6 under RHEV.

Straightforward advice — plan, plan, plan, then do.

Qualcomm reduced hardware deployment significantly and simplified management with the RHEV tool suite — significant operational savings.

Qualcomm made extensive use of the virt-v2v tool, but had to modify it (yay Open Source!) to make it cluster (RHCS) aware.  The modifications are shipping with RHEL 6.3.

KVM Technology Review and Roadmap Update

  • KVM is a relatively small piece of code, leveraging Linux for much functionality.  This makes KVM easy to secure and very flexible in meeting future needs.
  • Leveraging Linux means that KVM automatically gains the power of Linux’s hardware support, memory management, network utilities, cgroups, SELinux, etc.
  • Features: RHEL 6.3 KVM has all the features of modern hypervisors, without needing third-party tools: live snapshots, virtualized disk drivers (VIRTIO), live migration, live block migration, USB passthrough, guest power management, etc.
  • Performance: RHEL 6.3 + KVM holds the top 7 SPECvirt spots on HP and IBM hardware, with metrics showing ~20%+ better performance than VMware.
  • Single Guest Scalability: Now supports 160 vCPUs and 2TB RAM per guest (with no additional licensing costs!)
  • RHEL 7.0 will include virtual PCI bridges and will have a new Virtio-SCSI block device, enabling thousands of devices per virtual machine.
  • RHEV scales up to 200 host nodes per cluster.
  • Compare the above numbers with VMware’s.
  • KVM has achieved a world-record 1,402,720 IOPS on an IBM x3850 X5 for 8 KB requests using 7 SCSI pass-through devices.  For 1 KB requests, it can achieve 1.65M IOPS.
  • RHEV 7 will support Windows power virtualization
  • RHEL 6.3 brings vCPU and memory hotplug to guests (a quick sketch follows this list).
  • KVM achieved CC-EAL4+ certification with RHEL 5, and certification of RHEL 6 is in process, with sVirt (SELinux wrapped around guests).
  • Decommissioned guest storage can be scrubbed, meeting PCI-DSS standards.
  • Open Virtualization Alliance promotes open source virtualization and KVM ecosystem.
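
A quick sketch of the hotplug item above, using libvirt’s virsh against a hypothetical guest named “myguest” (the guest must be defined with a high enough maximum vCPU count and memory ceiling; setmem takes KiB, so 4194304 = 4 GiB):

virsh setvcpus myguest 4 --live
virsh setmem myguest 4194304 --live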

Campground: CloudForms + Splunk

Great co-hosted Red Hat & Splunk discussion about CloudForms-Splunk integration!

Goal: Measure CloudForms utilization by date/time, by user, by cloud provider, and totals.

Simple rsyslog config to send the right data over into Splunk, then just add the “Splunk for Red Hat CloudForms” app — the metrics stated in the above goal are there, right out of the box.  It really is (or at least seems to be) that easy!
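
For the curious, the rsyslog side can be as small as a single forwarding rule.  A sketch, assuming a Splunk receiver listening for syslog on TCP port 514 at a placeholder hostname (the app’s docs spell out exactly what to send):

*.* @@splunk.example.com:514

(@@ forwards over TCP; a single @ would be UDP.)  Drop that into /etc/rsyslog.conf and restart rsyslog.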

And yes — the Splunk guys know Steve Maresca (and UConn) *very* well.

Distributed File System Choices: Red Hat Storage, GFS2, & pNFS

Red Hat has several options for storage needs: GFS2, CIFS, (p)NFS, Gluster.  It’s all about the right tool for the job.

http://www.redhat.com/summit/sessions/index.html#103

RHEL Resilient Storage – GFS2

Shared storage, scales up to 100TB per instance, supports 2-16 nodes in the cluster, x86_64 only.

Performance is directly related to server and storage class, and to access patterns: the best case is where each node mostly accesses its own filesystem area; the worst case is where nodes fight over locks on the same files/directories.
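
For flavor, creating and mounting a GFS2 filesystem looks roughly like this (all names are placeholders; the -t value must be clustername:fsname matching your cluster config, and -j allocates one journal per node):

mkfs.gfs2 -p lock_dlm -t mycluster:myfs -j 4 /dev/myvg/mylv
mount -t gfs2 /dev/myvg/mylv /mnt/shared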

RHEL NFS Client

It’s an NFS client — what else needs to be said?

pNFS

The metadata server tells you where to go to get your data, and can even reference data across multiple tiers: NFS or directly to the SAN.

Performance can be enhanced with large write caches and internally tiered storage.  pNFS is suitable for most workloads, even transactional DBs (using O_DIRECT), but not so good for highly random read workloads.
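
On RHEL 6, a pNFS-capable mount is just an NFSv4.1 mount.  A sketch with placeholder names (the server must also speak NFSv4.1/pNFS):

mount -t nfs -o vers=4,minorversion=1 server:/export /mnt/pnfs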

Red Hat Storage (Gluster + utilities, support, and blueprint)

RHS is a standalone product.  A supported RHS server config is local RAID6 storage, XFS filesystems, and dedicated RHEL 6.x plus Gluster software, all deployed on commodity hardware (reducing cost).

Clients use Gluster for scalability; NFS or CIFS for simplicity.
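
Both access paths are just mounts.  A sketch with placeholder server and volume names (Gluster’s built-in NFS server speaks NFSv3):

mount -t glusterfs rhs-server1:/myvol /mnt/myvol
mount -t nfs -o vers=3 rhs-server1:/myvol /mnt/myvol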

Performance is improved by scaling horizontally across servers.  But there is no write cache, and Gluster is a user-space filesystem with some overhead from context switching.  Likely not suitable for big I/O (databases, mail servers), but great for big unstructured data.

Scales to 6PB (or 3PB mirrored), and can add capacity dynamically.
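
Adding capacity is a matter of adding bricks and rebalancing.  A sketch with placeholder names (replicated volumes need bricks added in multiples of the replica count):

gluster volume add-brick myvol rhs-server3:/bricks/myvol
gluster volume rebalance myvol start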

SELinux for Immortals

SELinux provides *very* strong sandboxing capabilities.  In very simplistic terms: access control can now be applied not just at the filesystem, but also to network access and X access, with enforced and automatic chroots that are cleaned up when the process ends, all with fine-grained audit logging.

But this ain’t your grandma’s chmod.  There is some significant complexity.  Depending on your risk model, though, it may very well be worth it.

Note: check out the sandbox util (policycoreutils-python) with the X capabilities (policycoreutils-sandbox).  It provides a great tool for running individual processes under extremely tight lock-down.
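
For example, to run a browser in a throwaway, X-enabled sandbox that only allows web access (sandbox_web_t is one of the stock sandbox types):

sandbox -X -t sandbox_web_t firefox

The temporary home directory and /tmp it creates are discarded when firefox exits.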

Application High Availability in Virtual Environments

http://www.redhat.com/summit/sessions/index.html#394

Great discussion around Red Hat’s solutions for clustering, fencing, etc, in virtualized environments.

Fencing is /very/ important for shared resources, especially disk.  In a virtualized world (RHEV, VMware, etc.), fencing tools can reach right into the hypervisor to kill a failed node in a cluster.  Similarly, iLO, RSA, DRAC, etc. can be used to kill power to physical servers.  Either way, before another node in a cluster takes over the shared resource, it is *critical* that the failed node is killed.  But obviously, this is an easy way to shoot yourself in the foot.  As the presenters just said: “test, test, and test some more” to make sure your fencing parameters align with your deployment.
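
One easy way to follow that advice: fence agents can be run standalone from a shell before you trust them in the cluster config.  A sketch with placeholder address and credentials, using the IPMI agent:

fence_ipmilan -a 192.0.2.10 -l admin -p secret -o status

Swap in the agent that matches your environment (fence_rhevm, fence_vmware, iLO/DRAC agents, etc.) and test the same way.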

Simplified VDI with Red Hat Enterprise Virtualization for Desktops

The Red Hat VDI solution has come a long way … client-side rendering and media off-load, built right on top of the RHEV stack (no separate infrastructure!), user portal is part of the package (no additional purchase!).

Comparisons with VMware View and XenDesktop show roughly feature/functionality parity, but Red Hat VDI appears *much* less expensive, and can provide both Windows and Linux virtual desktops.

http://www.redhat.com/summit/sessions/index.html#5

And check out the Red Hat TCO/ROI comparison tool:

https://roianalyst.alinean.com/ent_02/AutoLogin.do?d=482903301639024770

However – a critical feature is still missing.  While Red Hat VDI looks like a great replacement for desktops, there is no iPad client yet.  For many, this may be the killer.  It is on the near-future roadmap though!