Cutting in the Middleman, with Comments

I blogged somewhat recently about my interest in, and inaction around, static site blogging, where you write blog posts, use an app to turn them into plain HTML, and then drop them somewhere on the web, with no shadow of potentially/eventually vulnerable PHP and MySQL cranking away to deliver dynamically what needn’t be dynamic.

I hadn’t yet pulled the trigger on ditching WordPress, preferring instead to satisfy my desire for writing posts in plain AsciiDoc-formatted text by copying and pasting rendered AsciiDoc into WordPress, or by using this AsciiDoc-to-WordPress script to pump in posts through the WordPress API.

Mainly, what I was missing was for one of my bad ass colleagues to take the crazy box of lego pieces that get dumped out in front of your feet when you ask Google about static site blogging, make some smart choices, and build something that I could come along and tinker with. I mentioned before that I messed around with Awestruct and found it way too raw for me. After their own more able-minded examination, my colleagues agreed, and came forward with Middleman.

Middleman It Is, But…

After poking a bit through Middleman, I felt comfy enough to adapt it for my own, extremely simple blog. I got a basic layout in place, and set about converting my WordPress posts into something workable for Middleman. My plan was to use AsciiDoc for my new writing, but most conversion scripts target the more popular Markdown. I found a script — I’ll look for the link — that did an OK job converting, but I had to delete some of the "front matter" bits that I didn’t need, and a few of my URLs rendered wrong. I’ve tried a few different tools for WordPress-to-SomethingStatic conversion, and they’ve all needed some hand-tweaking. So, low-frequency blogging FTW! I didn’t have too many posts to hand-tweak.
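For anyone curious what a Middleman post looks like on disk, it’s just a text file with a little YAML front matter up top. Here’s a minimal sketch (the file name, date, and tags are made up, and the exact extension depends on how you’ve wired up AsciiDoc support):

source/2013-10-05-hello-middleman.html.adoc:

---
title: Hello, Middleman
date: 2013-10-05
tags: blogging, asciidoc
---

The post body goes here, written in AsciiDoc.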

Now on to a REAL problem — comments. One arguably important dynamic chore tackled by WordPress is accepting and managing blog comments. Most static blogs either do away with comments altogether (easy to steel yourself for this decision after reading comments at YouTube or your local newspaper’s web site for five minutes) or go with the hosted Disqus comments service.

I’ve bounced between Disqus and WordPress comments in the past, and have been happy with Disqus. They take the load off your site, and allow your page (with the help of something like wp super cache) to be mostly static, since all the dynamism happens, in javascript, in your reader’s browser. Also, I like the way that Disqus knits siloed discussions from all over the web into something a bit more unified. You have posts and comment threads spread everywhere, and Disqus sort of pulls them together, and, through easy options for tweeting out a link to your comment, offers a way to pull in others.

Switching from WordPress comments to Disqus comments means switching from a possibly self-hosted system to a definitely not self-hosted system, and that’s a concern for many, particularly given the greater chances for privacy chicanery at sites out of your control. However, Disqus does a really good job importing from and exporting to WordPress, so even though I’ve swapped back and forth a few times, I’ve never had trouble getting my mitts back on my data, and that’s my number one concern with using a hosted service.

BUT, there’s still another important issue. WordPress is open source software, and Disqus is not. I’m big on open source software — I’m not opposed to using anything proprietary, not sure how I’d use my oven with a no-proprietary-ever stance, but I’m keen to see open source spread, so swapping something that’s already open for something that is not is a concern.

Enter Juvia, and OpenShift (natch)

As usual, I approached the oracle of Google and, in fairly short order, was directed to Juvia, "a commenting server similar to Disqus and IntenseDebate." It sounded perfect, and not completely abandoned, although the demo site wasn’t working, and its discussion forum (served from the terrible terrible why-does-anyone-use-this Google Groups) appears to have been wiped from the earth. Why not more activity around what appears to be a much-needed project?

It may be because Juvia is a Ruby on Rails app, and while mysql/php hosting is handed down from the sky at little or no cost, ruby hosting is not. I saw one discussion of Juvia v. Disqus in my travels that boiled down to: "You could use Juvia, but hosting costs, so, use Disqus, which is free."

But, that gentleman mustn’t have been aware of OpenShift, where you can host all sorts of different apps in the service’s free tier. I turned again to Google and found a few Juvia on OpenShift quickstarts. I used this one, although this one seems more official, if a bit less up-to-date.
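If you want to follow the same path, the quickstart README is the authority; the rough shape of it (the cartridge names here are from memory and may have drifted, and the quickstart URL is a placeholder) is to create a Ruby-plus-MySQL app and pull the quickstart code into it:

rhc app create juvia ruby-1.9 mysql-5.1
cd juvia
git remote add upstream <quickstart git URL>
git pull -s recursive -X theirs upstream master
git push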

I spun up Juvia in one of my OpenShift gears, spun up another just to host my static blog files, and poked at my layout HAML until I got them working together. I used Juvia’s WordPress comments importer to import my WordPress comments (which took some work), and here I am.

Now, I am going to write all this up into a how-to, but I need to do a bit more polishing — you don’t want to follow the steps I followed; you want to follow the steps I would have followed, had future me paid me a visit first.

Till then, though, this is my first new, non-stub post in the new blog. With open source, self-hosted comments.


Foo! A Foo Post, Sucka!

All right, y’all, this here is a stub post.

Not bad so far. Now, I need to, hmm, actually…

Might as well make note of some things I’d like to change here.

  • maybe change some of the styles and fonts; I’d kind of like to swap in the asciidoctor styles

  • pick a CC license

  • hook up some deploy goodness

  • change up the way the date shows in the post title, kind of lame-looking now


AsciiDokken

It’s been a long time since I’ve blogged. My last oVirt 3.2 howto has been holding down the front page of this site for a lot of months, and now oVirt 3.3 is just around the corner.


Top “haven’t blogged” excuses:

  • Such are blogs: they go unupdated, and blog posts often start with “it’s been a long time since I blogged” (see above).

  • I’ve been expending a bit of my blogging chi by robotically filling and tweaking the links queue that feeds @redhatopen.

  • I’ve been gripped somewhat by analysis paralysis over statically generated site blogging and writing in AsciiDoc.

It’s this third excuse I’m blogging about today.

See, I like to write in plain text — I start out writing almost everything in Tomboy or, if I’m feeling extra distracted, PyRoom. The trouble is, plain text isn’t “print” ready (and by print ready, I really mean web ready). Beyond plain text, you need some formatting, at the very least, Web links, a few code blocks, a subhead or two.

Formatting is lame and boring and adds friction to my writing experience. The way I’ve done it, for years, is to do it after the writing’s done, and to undertake a separate formatting pass for every spot I intend to publish — is this for the Web, where on the Web? Mediawiki? WordPress? Other?

I particularly hate writing in word processors: they’re all about formatting, and yet the formatting they produce often isn’t appropriate for most places you’ll end up publishing. For instance, word processors produce famously junky HTML.

Enter AsciiDoc

My colleague Dan Allen has been spreading the gospel of AsciiDoc, a lightweight plain text markup language, and of Asciidoctor, a Ruby processor for converting AsciiDoc source files and strings into HTML 5, DocBook 4.5 and other formats.

With my plain text orientation, annoyance with formatting gunk, and deep dissatisfaction with word processors, AsciiDoc appealed to me. I know that Markdown is teh hotness, sort of, but AsciiDoc’s formatting for my #1 use case, inserting hyperlinks, is simpler than Markdown’s, and AsciiDoc seems better aligned with my needs overall.

As Dan promised, I found it very easy to get rolling with AsciiDoc. You just write, the formatting is simple, and you can do all the sorts of things you need to do, the first time through.

It’s simple to add links and images, and AsciiDoc’s handling of bullets and numbering has made life easier writing posts and howtos.

In fact, after writing in AsciiDoc for the past couple months, I found the other day that I had to look up the syntax for HTML link tags. In AsciiDoc, it’s URL[text] and that’s it.
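For anyone who hasn’t seen AsciiDoc, here’s a quick sample covering the bits I lean on most (the image name is made up); running asciidoctor sample.adoc on a file like this spits out sample.html next to it:

== A Subhead

Here's a link to http://asciidoctor.org[Asciidoctor], and an image:

image::screenshot.png[A screenshot]

* a bulleted item
* another bulleted item

. a numbered step
. another numbered step

----
a literal block, for code
----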

BUT, while you can just start writing in AsciiDoc, you do need some application support to get the full benefit from it. For instance, it’s helpful to get a preview of how your formatted text will render, particularly while learning the syntax. My text editing tools don’t offer this for AsciiDoc, though I’ve been pleased with the setup suggested in this Editing w/ Live Preview howto on the Asciidoctor site.

The biggest issue, however, is publishing. My blog runs on WordPress, as do a few of the blogs I contribute to for work, and WordPress doesn’t know anything about AsciiDoc. There is, however, a family of blogging engines savvy to AsciiDoc: the Static Site Generators.

Jekyll, Hyde, and Friends

I’ve been interested in the concept of “blogging like a hacker” with a static site generator for some time now. Having a speedy, scalable blog that needs no software updates and could be hosted from something like Amazon S3 sounds really cool to me.

Now, I love WordPress. I do. It’s this big old ball of open source goodness, with a community of users, plugin developers, designers, bloggers, etc. Honestly, yay!

But…

WordPress Vulnerability of the Day means a constant sense of low-level discomfort — am I up to date? What about my plugins? Are they up to date? And have the latest updates broken compatibility between plugin and core, somehow?

It’s really easy to get going with a nice, functional blog with WordPress. My blog has always been really simple — I made a child theme based on the WordPress 2012 theme simply to hide the gigantic header image, and I may have made a CSS tweak or two.

But, some of the work-related WordPress sites I’ve been involved with have required more customization, and when you’re trying to understand how all the parts of a WordPress site fit together, to customize or debug something, it feels crazy — everything’s exploded out into a billion different places.

Also, the more I use git (which I really started getting into through OpenShift), the more I want to use it, or have the option of using it, for everything. I want to use git for managing posts and such, and WordPress stores everything in a database.

And returning to the formatting issue, formatting in WordPress can be a pain. It works like a PHP-based word processor in the sky: for the most part, you WYSIWYG your way along, clicking toolbars and such, but I always need to dip into the HTML view and tweak some things, which I don’t love.

My blog isn’t very dynamic, so I don’t need a bunch of PHP code cranking away at every click. I’ve been using Disqus comments, where the dynamic bits happen in the visitor’s browser, so my site could easily be static. In fact, I use wp-super-cache on my site, for performance benefit, so my blog is sort of static anyway.

So, between my interest in AsciiDoc and static site generators, and my itching to make a move from WordPress, I figured I’d soon jump from WordPress, to… something else.

I’ve fiddled with a few different options, including Octopress, Pelican, Hyde, and Awestruct (another project I heard about through Dan Allen).

None of these have been super tough to get up and running, but as with all static site generators, there’s some assembly required, and I have plenty of other bits of software to fiddle with.

Converting my posts from WordPress to Awestruct et al is a thing, too, so I’d have to deal with (re)formatting those posts before I started using AsciiDoc for my workflow, and that means worrying about formatting and other distraction before I can start not worrying about formatting and other distraction.

So there’s the blog/writing/workflow/migration holding pattern for you.

AsciiDokken

I mentioned, though, that I’ve been using AsciiDoc for a couple months now, and this blog and others are running WordPress. I’ve been using a little tool for posting AsciiDoc-formatted texts to WordPress, which has enabled me to start blogging in AsciiDoc without blogging like a hacker. It works pretty well, and handles image uploading, which is nice.

I keep my AsciiDoc-formatted posts in a folder on my notebook, with git version control, and I push posts and post updates to WordPress through its API, using the blogpost tool.

Just the other day, I spun myself a fresh WordPress blog on OpenShift, with this spiffy new 2013 theme (where disabling the giant header image is an out-of-the-box customization option).

So, maybe I’m staying with WordPress for a while.

At least, I shouldn’t let indecision over markup and site generation block the flow of public navel-gazing about indecision over markup and site generation. To that end, I’ve started looking into directing more love toward that AsciiDoc-to-WordPress uploader.


Up and Running with oVirt, 3.2 Edition

I’ve written an updated version of this howto for oVirt 3.3 at the Red Hat Community blog.

The latest version of the open source virtualization platform, oVirt, has arrived, which means it’s time for the third edition of my “running oVirt on a single machine” blog post. I’m delighted to report that this ought to be the shortest (and least-updated, I hope) post of the three so far.

When I wrote my first “Up and Running” post last year, getting oVirt running on a single machine was more of a hack than a supported configuration. Wrangling large groups of virtualization hosts is oVirt’s reason for being. oVirt is designed to run with its manager component, its virtualization hosts, and its shared storage all running on separate pieces of hardware. That’s how you’d want it set up for production, but a project that requires a bunch of hardware just for kicking the tires is going to find its tires un-kicked.

Fortunately, this changed in August’s oVirt 3.1 release, which shipped with an All-in-One installer plugin, but, as a glance at the volume of strikethrough text and UPDATE notices in my post for that release will attest, there were more than a few bumps in the 3.1 road.

In oVirt 3.2, the process has gotten much smoother, and should be as simple as setting up the oVirt repo, installing the right package, and running the install script. Also, there’s now a LiveCD image available that you can burn onto a USB stick, boot a suitable system from, and give oVirt a try without installing anything. The downsides of the LiveCD are its size (2.1GB) and the fact that it doesn’t persist. But, that second bit is one of its virtues, as well. The All in One setup I describe below is one that you can keep around for a while, if that’s what you’re after.

Without further ado, here’s how to get up and running with oVirt on a single machine:

HARDWARE REQUIREMENTS: You need a machine with x86-64 processors with hardware virtualization extensions. This bit is non-negotiable–the KVM hypervisor won’t work without them. Your machine should have at least 4GB of RAM. Virtualization is a RAM-hungry affair, so the more memory, the better. Keep in mind that any VMs you run will need RAM of their own.

It’s possible to run oVirt in a virtual machine–I’ve taken to testing oVirt on oVirt itself most of the time–but your virtualization host has to be set up for nested KVM for this to work. I’ve written a bit about running oVirt in a VM here.

SOFTWARE REQUIREMENTS: oVirt is developed on Fedora, and any given oVirt release tends to track the most recent Fedora release. For oVirt 3.2, this means Fedora 18. I run oVirt on minimal Fedora configurations, installed from the DVD or the netboot images. With oVirt 3.1, a lot of people ran into trouble installing oVirt on the default LiveCD Fedora media, largely due to conflicts with NetworkManager. When I tested 3.2, the installer script disabled NM on its own, but I had to manually enable sshd (sudo service sshd start && sudo chkconfig sshd on).

A lot of oVirt community members run the project on CentOS or Scientific Linux using packages built by Andrey Gordeev, and official packages for these “el6” distributions are in the works from the oVirt project proper, and should be available soon for oVirt 3.2. I’ve run oVirt on CentOS in the past, but right now I’m using Fedora 18 for all of my oVirt machines, in order to get access to new features like the nested KVM I mentioned earlier.

NETWORK REQUIREMENTS: Your test machine must have a host name that resolves properly on your network, whether you’re setting that up in a local dns server, or in the /etc/hosts file of any machine you expect to access your test machine from. If you take the hosts file editing route, the installer script will complain about the hostname–you can safely forge ahead.

CONFIGURE THE REPO: Somewhat confusingly, oVirt 3.1 is already in the Fedora 18 repositories, but due to some packaging issues I’m not fully up-to-speed on, that version of oVirt is missing its web admin console. In any case, we’re installing the latest, 3.2 version of oVirt, and for that we must configure our Fedora 18 system to use the oVirt project’s yum repository.

sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm

SILENCING SELINUX (OPTIONAL): I typically run my systems with SELinux in enforcing mode, but it’s a common source of oVirt issues. Right now, there’s definitely one (now fixed), and maybe two SELinux-related bugs affecting oVirt 3.2. So…

sudo setenforce 0

To make this setting persist across reboots, edit the ‘SELINUX=’ line in /etc/selinux/config to read ‘SELINUX=permissive’.
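If you’d rather not open an editor, a quick one-liner does the same thing (assuming the line currently reads SELINUX=enforcing):

sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config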

INSTALL THE ALL IN ONE PLUGIN: The package below will pull in everything we need to run oVirt Engine (the management server) as well as turn this management server into a virtualization host.

sudo yum install ovirt-engine-setup-plugin-allinone

RUN THE SETUP SCRIPT: Run the script below and answer all the questions. In almost every case, you can stick to the default answers. Since we’re doing an All in One install, I’ve tacked the relevant argument onto the command below. You can run “engine-setup -h” to check out all available arguments.

One of the questions the installer will ask deals with whether and which system firewall to configure. Fedora 18 now defaults to Firewalld rather than the more familiar iptables. In the handful of tests I’ve done with the 3.2 release code, I’ve had both success and failure configuring Firewalld through the installer. On one machine, throwing SELinux into permissive mode allowed the Firewalld config process to complete, and on another, that workaround didn’t work.

If you choose the iptables route, make sure to disable Firewalld and enable iptables before you run the install script (sudo service firewalld stop && sudo chkconfig firewalld off && sudo service iptables start && sudo chkconfig iptables on).

sudo engine-setup --config-allinone=yes

TO THE ADMIN CONSOLE: When the engine-setup script completes, visit the web admin console at the URL for your engine machine. It will be running at port 80 (unless you’ve chosen a different setting in the setup script). Choose “Administrator Portal” and log in with the credentials you entered in the engine-setup script.

From the admin portal, click the “Storage” tab and highlight the iso domain you created during the setup-script. In the pane that appears below, choose the “Data Center” tab, click “Attach,” check the box next to your local data center, and hit “OK.” Once the iso domain is finished attaching, click “Activate” to activate it.

Now you have an oVirt management server that’s configured to double as a virtualization host. You have a local data domain (for storing your VM’s virtual disk images) and an NFS iso domain (for storing iso images from which to install OSes on your VMs).

To get iso images into your iso domain, you can copy an image onto your ovirt-engine machine and, from the command line, run “engine-iso-uploader upload -i iso NAMEOFYOURISO.iso” to load the image. Otherwise (and this is how I do it), you can mount the iso NFS share from wherever you like. Your images don’t go in the root of the NFS share, but in a nested set of folders that oVirt creates automatically, which looks like “/nfsmountpoint/BIGOLEUUID/images/11111111-1111-1111-1111-111111111111/NAMEOFYOURISO.iso”. You can just drop them in there, and after a few seconds, they should register in your iso domain.
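For the mount-it-yourself route, the dance looks roughly like this (the host name, export path, and iso file name are stand-ins for your own, and the UUID directories come from the layout above):

sudo mkdir -p /mnt/iso
sudo mount -t nfs ovirt.example.com:/var/lib/exports/iso /mnt/iso
sudo cp Fedora-18-x86_64-DVD.iso /mnt/iso/BIGOLEUUID/images/11111111-1111-1111-1111-111111111111/
# if the image doesn't show up in the iso domain, check that vdsm (36:36) can read it
sudo chown 36:36 /mnt/iso/BIGOLEUUID/images/11111111-1111-1111-1111-111111111111/Fedora-18-x86_64-DVD.iso
sudo umount /mnt/iso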

Once you’re up and running, you can begin installing VMs. I made the “creating VMs” screencast below for oVirt 3.1, but the process hasn’t changed significantly for 3.2:


Gluster Rocks the Vote

Rock the Vote needed a way to manage the fast growth of the data handled by its Web-based voter registration application. The organization turned to GlusterFS replicated volumes to allow for filesystem size upgrades on its virtualized hosting infrastructure without incurring downtime.

Over its twenty-one year history, Rock the Vote has registered more than five million young people to vote, and has become a trusted source of information about registering to vote and casting a ballot.


Since 2009, Rock the Vote has run a Web-based voter registration application, powered by an open source Rails application stack called Rocky.

I talked to Lance Albertson, Associate Director of Operations at the Oregon State University Open Source Lab and primary technical systems operation lead for the service, about how they’re using Gluster to provide for the service’s growing storage requirements.

“During a non-election season,” Albertson explained, “the filesystem use and growth is minimal, however during a presidential election season, the growth of the filesystem can be exponential. So with Gluster we’re trying to solve the sudden growth problem we have.”

Rock the Vote’s voter registration application is served from a virtual machine instance running Gentoo Hardened, with a pair of physical servers running CentOS 6 with Gluster 3.3.0 to host voter registration form data. The storage nodes host a replicated GlusterFS volume, which the registration front end accesses via Gluster’s NFS mount support.
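For a sense of what that looks like in practice, a two-node replicated volume plus an NFS mount on the front end boils down to a handful of commands (host names, brick paths, and the volume name here are invented):

# on storage1, after both storage nodes are installed and reachable
gluster peer probe storage2.example.com
gluster volume create regdata replica 2 storage1.example.com:/bricks/regdata storage2.example.com:/bricks/regdata
gluster volume start regdata

# on the application front end, using Gluster's built-in NFS server (NFSv3)
mkdir -p /srv/regdata
mount -t nfs -o vers=3 storage1.example.com:/regdata /srv/regdata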

The Gluster-backed iteration of the voter registration application started out in September with a 100GB volume, which the team stepped up incrementally to 350GB as usage grew in the period leading up to the election.

Before implementing Gluster for their storage needs, Rock the Vote’s application hosting team was using local storage within their virtual machines to store the voter form data, which made it difficult to expand storage without bringing their VMs down to do so.

The hosting team shifted storage to an HA NFS cluster, but found the implementation fragile and prone to breakage when adding/removing NFS volumes and shares.

“Gluster allowed us more flexibility in how we manage that storage without downtime,” Albertson continued, “Gluster made it easy to add a volume and grow it as we needed.”

Looking ahead to future election seasons, and forthcoming GlusterFS releases, Albertson told me that the Gluster attribute he’s most interested in is limited-downtime upgrades between version 3.3.0 and future Gluster releases. Albertson is also looking forward to the addition of multi-master support in Gluster’s geo-replication capability, an enhancement planned for the upcoming 3.4 version.


oVirt on oVirt: Nested KVM Fu

I’m a big fan of virtualization – the ability to take a server and slice it up into a bunch of virtual machines makes trying out and writing about software much, much easier than it’d be in a one-instance-per-server world.

Things get tricky, however, when the software you want to try out is itself intended for hosting virtual machines. These days, all of the virtualization work I do centers around the KVM hypervisor, which relies on hardware extensions to do its thing.

Over the past year or so, I’ve dabbled in Nested Virtualization with KVM, in which the KVM hypervisor passes its hardware-assisted prowess on to guest instances to enable those guests to host VMs of their own. When I first dabbled in this, ten or so months ago, my nested virtualization only sort-of worked – my VMs proved unstable, and I shelved further investigation for a while.

Recently, though, nested KVM has been working pretty well for me, both on my notebook and on some of the much larger machines in our lab. In fact, with the help of a new feature slated for oVirt 3.2, I’ve taken to testing whole oVirt installs, complete with live migration between hosts, all within a single oVirt host machine. Pretty sweet, since oVirt forms both my main testing platform and one of the primary projects I look to test.

All my tests with nested KVM have been with Intel hardware, because that’s what I have in my labs, but it’s my understanding that nested KVM works with AMD processors as well, and that the feature is actually more mature on that gear.

To join in on the nested fun, you must first check to see if nested KVM support is enabled on your machine by running:

cat /sys/module/kvm_intel/parameters/nested

If the answer is “N,” you can enable it by running:

echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf

After adding that kvm-intel.conf file, reboot your machine, after which “cat /sys/module/kvm_intel/parameters/nested” should return “Y.”

I’ve used nested KVM with virt-manager, the libvirt front-end that ships with most Linux distributions, including my own distro of choice, Fedora. With virt-manager, I configure the VM I want to use as a hypervisor-within-a-hypervisor by clicking on the “Processor” item in the VM details view, and clicking the “Copy host configuration” button to ensure that my guest instance boots with the same set of CPU features offered by my host processor. For good measure, I expand the “CPU Features” menu list and ensure that the feature “vmx” is set to “require.”
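If you’d rather do the same thing by hand with virsh edit, it amounts to a cpu block along these lines in the guest’s XML (the model name is just an example; match it to your own hardware):

<cpu match='exact'>
  <model>SandyBridge</model>
  <feature policy='require' name='vmx'/>
</cpu>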


Not too taxing, but it turns out that with oVirt, enabling nested virtualization in guests is even easier, thanks to VDSM hooks. VDSM hooks are scripts executed on the host when key events occur. The version of VDSM that will accompany oVirt 3.2 includes a nestedvt hook that does exactly what I described above – it runs a check for nested KVM support, and if that support is found, it adds the require vmx element to your VM’s definition.

I’ve tested this both with oVirt 3.2 alpha, and with the current, oVirt 3.1 version. In the latter case, I simply installed the vdsm-hook-nestedvt package from oVirt’s nightly repository, and it worked fine with the current stable version of vdsm.


I mentioned above that I’ve been able to test oVirt on oVirt in this way, and performance hasn’t been remarkably bad, but I wanted to get a better handle on the performance hit of nesting. I settled, unscientifically, on running mock builds of the ovirt-engine source package, a real life task that involves CPU and I/O work.

I ran the build operation four times on a VM running under oVirt, and four times on a VM running under an oVirt instance which was itself running under oVirt. I outfitted both the nested and the non-nested VM with 4GB of RAM and two virtual cores. I was using the same physical machine for both VMs, but I ran the tests one at a time, rather than in parallel.
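For anyone who wants to run the same sort of comparison, the mock invocation is about as simple as it gets (the chroot config and SRPM name below are examples, and your user needs to be in the mock group):

sudo yum install mock
mock -r fedora-18-x86_64 --rebuild ovirt-engine-3.2.0-1.fc18.src.rpm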

The four builds on the “real” VM averaged out to 14 minutes, 15 seconds, and the build quartet on the nested VM averaged 28 minutes, 18 seconds. So, I recorded a definite performance hit with the nested virtualization, but not a big enough hit to dissuade me from further nested KVM exploration.

Speaking of further exploration, I’m very much looking forward to attending the next oVirt Workshop later this month, which will take place at NetApp’s Sunnyvale campus from Jan 22-24.

If you’re in the Bay Area and you’d like to learn more about oVirt, I’d love to see you there. The event is free of charge (much like oVirt itself) and all the agenda and registration details are available on the oVirt project site at http://www.ovirt.org/NetAppWorkshopJanuary_2013. Registration closes on Jan 15th, so get on it!


Gluster User Story: Fedora Hosted

The Fedora Project’s infrastructure team needed a way to ensure the reliability of its Fedora Hosted service, while making the most of their available hardware resources. The team tapped GlusterFS replicated volumes to convert what had been a two-node, active/passive, eventually consistent hosting configuration into a well-synchronized setup in which both nodes could take on user load.

Hosting Fedora Hosted

The Fedora Infrastructure team develops, deploys, and maintains various services for the Fedora Project. One of these services, Fedora Hosted, provides open source projects with a place to host their code and collaborate online.

I talked to the team’s Infrastructure Lead, Kevin Fenzi, about how they’re using Gluster to ensure availability of these services while making the most of their server resources.

Fedora Hosted is served from a pair of virtual instances hosted at serverbeach.com, which donates these resources to the project. The instances run Red Hat Enterprise Linux 6 and maintain a replicated GlusterFS 3.3.0 volume to keep the 50GB of project data stored at Fedora Hosted in sync. The nodes use Gluster’s NFS mount support, which the team found to deliver better performance with the many small files that Fedora Hosted serves.

“Both servers are in DNS, so it’s round robin which one you hit for any given connection. Since the data on the backend is replicated, both of them are up to date at any given time,” Kevin explained. “This way, not only can we handle more load cpu-wise, but if we wish to reboot one node for an update or the like, we simply adjust DNS and there is no outage seen by our projects.”
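The round-robin bit is plain old DNS: two A records for the same name, something like this in zone-file terms (names and addresses invented for illustration):

hosted.example.org.    300    IN    A    192.0.2.10
hosted.example.org.    300    IN    A    192.0.2.11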

The Road to Gluster

An earlier incarnation of Fedora Hosted was also run on a pair of virtual instances, one actively serving users and the other a standby kept in sync with an hourly rsync job. If the primary node failed, the standby instance could be brought up in short order, but the hourly sync window meant that the service could suffer an hour or two of data loss.

The Fedora Infrastructure team managed to close this sync window by shifting to a new configuration based on the DRBD project. While this solution dealt with the problem of data loss following an outage, the configuration left one node mostly idle.

The team’s first foray into a GlusterFS-backed configuration for Fedora Hosted turned up a couple of issues with the then-current GlusterFS version 3.2, which the Gluster project addressed in their 3.3 release.

“The Gluster folks were very responsive to our issues and were working on the patch very soon after we requested it,” Kevin explained. “Additionally, 3.3 performance seemed to be much better than 3.2 for our use cases.”

Looking ahead, Kevin and the other members of the Infrastructure team have their eyes set on continued performance enhancements. While the Gluster 3.3-backed Fedora Hosted service has handled its community collaboration load quite well, Kevin pointed out that “we could always want better performance.”


openshift and some php app debugging

This morning I was trying to help figure out why a slick new Mediawiki skin was working just fine on an OpenShift-hosted Mediawiki instance, but was totally borked on a second Mediawiki instance, running on a VPS server.

Both the VPS and OpenShift run on the same OS: Red Hat Enterprise Linux. Both were running the same version of Mediawiki, 1.19.2, and both had the same version of PHP: 5.3.3.

I compared the php.ini file from the VPS machine with the php.ini from OpenShift, which you can find at ~/php-5.3/conf/php.ini in your OpenShift gear. (You can ssh into your OpenShift instance using the address from the remote “origin” entry in your APPNAME/.git/config file.)

I found a handful of differences in the ini files, including the promising-looking “short_open_tag” boolean. In my OpenShift app, this was set to “on” and in the VPS, it was set to “off.”

I wanted to fiddle with this setting on OpenShift, to see if I could make the skin break in the same way it was breaking on the VPS, but you can’t modify your app’s php.ini directly in OpenShift. You can, however, change these settings in your .htaccess file.

In my app repo, I created the file “php/.htaccess” including the line ‘php_value short_open_tag “Off”’ to match the VPS server. After pushing this change up to OpenShift, my Mediawiki instance broke in just the same way that it was breaking on the VPS machine. Broken instance FTW!

After swapping the value to “On” and pushing the change again, my test Mediawiki instance was back up and running.


engine-iso-uploader wrinkles

I’ve been installing oVirt 3.1 on some shiny new lab equipment, and I came across a pair of interesting snags with engine-iso-uploader, a tool you can use to upload iso images to your oVirt installation.

I installed the tool on a F17 client machine and festooned the command with the many arguments required to send an iso image off through the network to the iso domain of my oVirt rig. The command failed with the message, “ERROR:root:mount.nfs: Connection timed out.”

I had an idea what might be wrong. The iso domain I set up is hosted by Gluster, and exposed via Gluster’s built-in NFS server, which only supports NFSv3. Fedora 17 is set by default to require NFSv4, and when I changed /etc/nfsmount.conf to make Nfsvers=3, I got around that NFS error – only to hit another, weirder error: “ERROR: A user named vdsm with a UID and GID of 36 must be defined on the system to mount the ISO storage domain on iso1 as Read/Write.”

Vdsm is the daemon that runs oVirt virtualization hosts, so vdsm needs to be able to read and write to the storage domains. I was surprised, though, that the client machine I was using to upload an iso had to have its own vdsm user to do the job. Anyway, I created the vdsm user with the 36.36 IDs, and the command worked.
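For the record, conjuring up that user on the client boiled down to something like the following (oVirt uses UID 36 for vdsm and GID 36 for the kvm group, which is what the uploader checks for; adjust if those IDs are already taken on your machine):

sudo groupadd -g 36 kvm
sudo useradd -u 36 -g 36 -s /sbin/nologin vdsm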

Engine-iso-uploader does its business with NFS by default, but there’s another option to upload via ssh, which, I imagine, would avoid the need for that vdsm user. I gave it a quick try, hit a new error, ERROR: Error message is “unable to test the available space on /iso1”, and shelved further messing around w/ the tool, for now.

My favored method for getting iso images into my iso domain remains mounting the NFS share and dropping them in there. What I’d really like to see is a way to do this straight from the oVirt web admin console.


A Buzzword-Packed Return to Gluster UFO

A little while back, I tested out the Unified File and Object feature in Gluster 3.3, which taps OpenStack’s Swift component to handle the object half of the file and object combo. It took me kind of a long time to get it all running, so I was pleased to find this blog post promising a Quick and Dirty guide to UFO setup, and made a mental note to return to UFO.

When my colleague John Mark asked me about this iOS Swift client from Rackspace, I figured that now would be a good time to revisit UFO, and do it on one of the Google Compute Engine instances available to me while I’m in my free trial period with the newest member of Google’s cloud computing family. (OpenStack, iOS & Cloud: Feel the Search Engine Optimization!)

That Quick and Dirty Guide

The UFO guide, written by Kaleb Keithley, worked just as quickly as advertised: start with Fedora 16, 17 or RHEL 6 (or one of the RHEL 6 rebuilds) and end with a simple Gluster install that abides by the OpenStack Swift API. I installed on CentOS 6 because this, along with Ubuntu, is what’s supported right now in Google Compute Engine.

Kaleb notes at the bottom of his post that you might experience authentication issues with RHEL 6–I didn’t have this problem, but I did have to add in the extra step of starting the memcache service manually (service memcached start) before starting up the swift service (swift-init main start).

The guide directs you to configure a repository that contains the up-to-date Gluster packages needed. I’m familiar with this repository, as it’s the same one I use on my F17 and CentOS 6 oVirt test systems. I also had to configure the EPEL repository on my CentOS 6 instance, as UFO requires some packages not available in the regular CentOS repositories.

I diverged from the guide in one other place. Where the guide asks you to add this line to the  [filter:tempauth] section of /etc/swift/proxy-server.conf:

user_$myvolname_$username=$password .admin

I found that I had to tack on an extra URL to that line to make the iOS client work:

user_$myvolname_$username=$password .admin https://$myhostname:443/v1/AUTH_$myvolname

Without the extra URL, my UFO setup was pointing the iOS client to a 127.0.0.1 address, which, not surprisingly, the iOS device wasn’t able to access.

The iOS Client (and the Android non-client)

Rackspace’s Cloud Mobile application enables users of the company’s Cloud Servers and Cloud Files offerings to access these services from iOS and Android devices. I tried out both platforms, the former on my iPod Touch (recently upgraded to iOS 6) and the latter on my Nexus S 4G smartphone (which runs a nightly build of Cyanogenmod 10).

My subhead above says Android non-client, because, as reviewers in the Google Play store and the developer in this github issue comment both indicate (but the app description and [non-existent] docs do not), the current version of the Android client doesn’t work with the recent, Swift-based incarnation of Rackspace’s Cloud Files service.

What’s more, the Android version of the client does not allow any modification of one’s account settings. When I was trial-and-erroring my way toward figuring out the right account syntax, this got pretty annoying. Also annoying was the absence of any detailed error messages.

Things were better (albeit still undocumented) with the iOS version of the client, which allowed for account details editing, for ignoring invalid ssl certs, and for viewing the error message returned by any failed API operations.

In the parlance of the above Gluster UFO setup guide, here are the correct values for the account creation screen (the one you reach in the iOS client after selecting “Other” on the Provider screen):

  • Username:    $myvolname:$username

  • API Key:    $password

  • Name:   $whateveryouwant

  • API Url:    https://$myhostname:443/auth/v1.0

  • Validate SSL Certificate:   OFF

After getting those account details in place, you’ll be able to view the Swift/Gluster containers accessible to your account, create new containers, and upload/download files to and from those containers. There were no options for managing permissions through the iOS client, so when I wanted to make a container world-readable, I did it from a terminal, using the API.
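For the curious, the world-readable trick goes through the same Swift API the client uses: authenticate to get a token, then set a read ACL on the container. A rough sketch (the $-prefixed values follow the guide’s naming, the container name is made up, and -k skips the SSL certificate check, same as the client’s Validate SSL Certificate OFF):

# authenticate; the response headers include X-Auth-Token and X-Storage-Url
curl -v -k -H "X-Auth-User: $myvolname:$username" -H "X-Auth-Key: $password" https://$myhostname:443/auth/v1.0

# make the container readable by anyone
curl -k -X POST -H "X-Auth-Token: TOKEN_FROM_ABOVE" -H "X-Container-Read: .r:*" https://$myhostname:443/v1/AUTH_$myvolname/mycontainer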

Google Compute Engine

As I mentioned above, I tested this on Google Compute Engine, the Infrastructure-as-a-Service offering that the search giant announced at its last Google I/O conference. I excitedly signed up for the GCE limited preview as soon as it was announced, but for various reasons, I haven’t done as much testing with it as I’d planned.

Here are my bullet-point impressions of GCE:

  • CentOS or Ubuntu – On GCE, for now, you run the instance types they give you, and that’s either CentOS 6 or Ubuntu 10.04. You can create your own images, by modifying one of the stock images and going through a little process to export and save it. This comes in handy, because, for now, on GCE, there are…

  • No persistent instances – It’s like the earlier days of Amazon EC2. Your VMs lose all their changes when they terminate. There is, however…

  • Persistent storage available – You can’t store VMs in persistent images, but you can hook up your VMs to virtual disks that persist, for storing data.

  • No SELinux – The CentOS images come with SELinux disabled. This turned out to be annoying for me, as OpenShift Origin and oVirt both expect to find SELinux enabled. This cut short a pair of my tests. I was able to modify the oVirt Engine startup script not to complain about SELinux, but was then foiled due to…

  • Monolithic kernel (no module loading) – oVirt engine, which I’d planned to test with a Gluster-only cluster (real virt wouldn’t have worked atop the already-virtualized GCE), wanted to load modules, and there’s no module-loading allowed (for now) on GCE. All told, though…

  • GCE is a lot like EC2 – With a bit of familiarity with the ways of EC2, you should feel right at home on GCE. I opened firewall ports for access to port 443 and port 22 using security groups functionality that’s much like what you have on EC2. You launch instances in a similar way, with Web or command line options, and so on.
