There is a critical step when creating OS images or templates for use in image-based provisioning systems, such as those embedded in most virtualization platforms: cleaning up residual instance-specific data from the base, or golden, image. This step is called ‘sysprep’ in Windows administration terminology. Failing to do it can lead to various problems, such as provisioned hosts failing to boot, failing to automatically gain network connectivity, or suffering subtle identification issues when embedded into network-wide distributed systems.
Recent changes in the low-level plumbing of Linux systems, mostly due to the switch from System-V-based to systemd-based system and service management, necessitate some updates to the procedures used to perform golden-image cleanup. This post documents the various steps needed to clean up a RHEL7 golden image. While I haven’t tested them directly on such systems, similar steps should apply to Fedora and CentOS 7 systems as well.
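A minimal sketch of the typical cleanup steps, assuming a standard RHEL7 layout — the exact list depends on your image, and tools like virt-sysprep can automate most of this. The function takes the image’s mounted root as a parameter so it can run against an offline image rather than the live system:

```shell
#!/bin/bash
# Hedged sketch of common RHEL7 golden-image cleanup steps.
clean_image() {
    local root="${1:?usage: clean_image <mounted-image-root>}"

    # Reset the systemd machine ID so each clone generates its own on first boot.
    truncate -s 0 "$root/etc/machine-id"

    # Remove SSH host keys; sshd regenerates them on first boot.
    rm -f "$root"/etc/ssh/ssh_host_*

    # Drop persistent udev network-naming rules, if any were generated.
    rm -f "$root"/etc/udev/rules.d/70-persistent-net.rules

    # Strip hardware-specific fields from interface configuration files.
    local cfg
    for cfg in "$root"/etc/sysconfig/network-scripts/ifcfg-*; do
        [ -e "$cfg" ] && sed -i '/^\(HWADDR\|UUID\)=/d' "$cfg"
    done

    # Clear logs and root's shell history.
    rm -rf "$root"/var/log/* "$root"/root/.bash_history
}
```

Working against a mounted root (e.g. via `guestmount` or a loopback mount) keeps the cleanup from ever touching the build host itself.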
Docker seems to be all the rage these days; everyone seems to be running around integrating it, building things on top of it, and generally giving it great press. It is no surprise, then, that I decided I should look into what this is all about.
The one bit of information I found somewhat less frequently discussed is where everything gets stored.
Storage is important. Disk partitioning is the first task any OS installer puts you through; even before that, an experienced sysadmin pays great attention to what kind of storage devices and channels go into a server. Data storage decisions have a great effect on how your system ends up performing, how robust it is, and how easy it is to back up and repair when it breaks. Bad storage decisions tend to be hard to fix, necessitating large data transfers and long downtimes. Indeed, allowing a sysadmin to fix bad storage decisions is where LVM, Veritas Volume Manager and other storage virtualization tools come from.
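For the record: by default, the Docker daemon keeps everything — images, container filesystems, volumes — under `/var/lib/docker`. If you want that data on a dedicated filesystem, recent Docker releases let you point the daemon elsewhere via `daemon.json` (older versions used the `-g`/`--graph` daemon flag instead); a sketch, assuming the common config location:

```json
{
    "data-root": "/srv/docker"
}
```

This would typically go in `/etc/docker/daemon.json`, with the daemon restarted afterwards and the old data moved over first.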
GlusterFS has been getting a lot of attention recently with Red Hat’s decision to integrate it with Hadoop. While it is one of many similar open source distributed file systems, Red Hat’s strong backing seems to promise a solid future for GlusterFS.
I’m wondering if it may be time for me to play with it a little, and whether I should try to use it to synchronize and back up files on my home computers.
System administrators who deploy tools such as RHEL’s Kickstart are typically concerned with rapidly deploying large numbers of servers, so it is quite unfortunate that Kickstart has only very basic network configuration support. In practice, this means sysadmins have had to resort to manually configuring IP addresses and NIC bonding for each and every installed server.
Cobbler’s Advanced Networking feature seems to suggest a solution to this problem. It seems to me, however, that the approach taken is impractical for large organizations. Cobbler’s approach is to have the sysadmin use the Cobbler command-line tool to feed in the configuration for each and every NIC on the new server, prior to server installation and keyed on NIC MAC addresses.
This approach is impractical because the last thing a sysadmin faced with installing dozens of servers wants to do is boot each and every one of them with one tool or another just to find out what the MAC addresses are; at that point, one might as well configure the servers manually once they are already installed with an operating system…
The approach we’ve taken in my organization was to develop our own internal tool that automatically performs network configuration by detecting which network each NIC is connected to, using pings to well-known IP addresses. This approach has the additional benefit that it can be used to quickly reconfigure a server when faulty NICs or motherboards are replaced (i.e. when the MAC addresses change).
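The probe-based detection above can be sketched as follows — the role names and probe addresses are invented for illustration (the real tool would use addresses known to answer only on each network):

```shell
#!/bin/bash
# Hedged sketch: classify a NIC by pinging a well-known address per network.

probe_nic() {
    # True if $2 answers a single ping sent out through interface $1.
    ping -c 1 -W 1 -I "$1" "$2" >/dev/null 2>&1
}

detect_nic_role() {
    # Prints the network role of interface $1, or "unknown" if nothing answers.
    local nic="$1" role ip
    for role in mgmt storage; do
        case "$role" in
            mgmt)    ip=192.0.2.1 ;;      # example addresses from the
            storage) ip=198.51.100.1 ;;   # RFC 5737 documentation ranges
        esac
        if probe_nic "$nic" "$ip"; then
            echo "$role"
            return 0
        fi
    done
    echo unknown
    return 1
}
```

Because the classification depends only on what answers the probe, the same run works unchanged after a NIC or motherboard swap — no MAC address inventory needed.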