A little over a year ago I set out to build a system that would deliver application patches to my users without me doing a thing.
I have leveraged AutoPkg, the Jamf Pro patch management system, and the Jamf Pro API to build a complete solution where almost all of my applications are automatically patched across my fleet.
I call it PatchBot. This will be a series of four blog posts explaining the system and how to get it working. All the code and tools are published on GitHub.
When a new version of an application is available, a package is built and a patch is sent to a test group. After a set delay the package is moved into production, where everyone with the app installed gets a patch to update it, and our application install policy is updated.
Two LaunchAgents automatically run AutoPkg with some custom processors and scripts to perform all the work.
While it does take some setting up for each application, the process requires manual intervention only to stop a patch package going into production when a problem is found, or to speed it up when you need to deploy a security patch quickly.
Patch levels across the fleet have improved dramatically.
AutoPkg is, to quote its website, "an automation framework for macOS software packaging and distribution, oriented towards the tasks one would normally perform manually to prepare third-party software for mass deployment to managed clients."
At its core it is used to build packages, but people have written add-ons to perform other tasks such as integrating with Munki or uploading to a Jamf repository.
The existing add-on (or processor, in AutoPkg parlance) for integrating with a Jamf repository is jss-importer. I've written a replacement for two reasons. The first is that when I set out to build my first management system, jss-importer could not upload to a cloud repository. The second is that jss-importer was designed and built around a system of policies and smart groups to deliver patches to the users, and Jamf now has patch management to do the job more easily with less reliance on groups. Patch management also includes some nice version tracking across the fleet.
A final note before I delve into details. I am probably doing things in a way that horrifies some people. I’m not going to say that my method is perfect, just that it works for me and I hope you can find my efforts useful in building your own system. I’m also going to spend a great deal of time explaining my code, what it does and why it’s built that way.
Roughly, How Does It All Work
The first thing PatchBot does is build the packages and upload them to Jamf Pro. At the same time it saves the package details in a policy called TEST-<title>. In a previous version this policy delivered the test version to the testers, but now it's just a database record.
PatchBot then runs a script that takes the report plist from AutoPkg and uses it to send messages to a special channel in Teams. That’s so humans can know what’s going on.
Once packages are uploaded it's time to start patch management. This requires a high quality patch definition feed for the Jamf Pro patch management system. I buy Kinobi from Mondada and believe it's easily worth the money. Seriously, I cannot overstate how well a bunch of Aussies do it. Maintaining patch definitions is incredibly finicky and tedious, and throwing not much money at somebody else who does such a good job is incredibly appealing. There is an open source community alternative that I'm sure works fine for some.
The first step in patch management is to find the version definition for our new package, point it at the package, then update a patch policy, Test <title>. This patch policy is scoped to a single group regardless of the application; I call mine Package Testers. The patch policy has a Self Service deadline of two days. PatchBot also tells us the results with another set of messages to Teams.
The second step in patch management is to move a package from test into production. This is done seven days after it moved into test, using a production patch policy called Stable <title>, scoped to all computers, with a Self Service deadline of seven days. Both the delay before moving patches into production and the Self Service deadlines are easily changed.
At this point PatchBot updates the install policy for the application so it uses the new version. I’m sure you’re not surprised it has a third script to send the results to our Teams channel.
It's now done. We have a patch package in production and an updated install policy. At no stage have we had to do a thing. The only human intervention we might need is halting the shift from test to production if our testers discover a broken package. That's as easy as editing the Self Service description for the Test patch policy.
Now for some details. Today I will go over the first step, building and uploading the package.
Building & Uploading Packages
AutoPkg is controlled by recipes, so every package we build needs a recipe, called <title>.pkg.recipe; we either find these online or write them ourselves.
AutoPkg includes a security system for recipes that makes sure nobody can change a recipe without us knowing. It does this by saving a special recipe, called a recipe override, with a hash of the original recipe. Rather than have a separate recipe to run our custom processor, JPCImporter, I have chosen to add an extra block to the override. You can see an example of this block below. This is not really the approved way of handling it; I should use a separate recipe for the custom processor, but the overrides have to be there (security, if nothing else) and it reduces the number of files I handle.
Let’s have a look at an example:
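The original post embedded the full override as a gist. A minimal sketch of the extra block, as it might appear at the end of a Process array added to the override, looks something like this (the processor identifier and argument names here are illustrative guesses; check the PatchBot repository for the real ones):

```xml
<key>Process</key>
<array>
    <dict>
        <key>Processor</key>
        <!-- shared processor referenced via the stub recipe's identifier -->
        <string>com.honestpuck.PatchBotProcessors/JPCImporter</string>
        <key>Arguments</key>
        <dict>
            <key>pkg_path</key>
            <string>%pkg_path%</string>
        </dict>
    </dict>
</array>
```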
You can see the block added for the custom processor call towards the end of the recipe override. Further up in the override is the recipe trust info with the hashes of the parent recipes.
So how does JPCImporter do its work?
Before it starts we have to do some stuff for our automation. Number one is to make sure our package is named according to a standard format, <title>-<version>.pkg, where <title> is the name of the application with no periods or - characters in it. I prefer no spaces but the system works if they're there. It doesn't work with underscores between the application name and version, such as the packages built by Rich Trouton's recipes; for those I have to use a hack, adding a separate PkgCopier step to the recipe override to rename the package.
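PkgCopier is a stock AutoPkg processor that copies a package to a new path, which makes it a handy renamer. A sketch of the step (the paths assume the usual %RECIPE_CACHE_DIR% layout and underscore naming; adjust for the actual recipe):

```xml
<dict>
    <key>Processor</key>
    <string>PkgCopier</string>
    <key>Arguments</key>
    <dict>
        <!-- package as built, with the underscore separator -->
        <key>source_pkg</key>
        <string>%RECIPE_CACHE_DIR%/%NAME%_%version%.pkg</string>
        <!-- copy renamed to the hyphenated format PatchBot expects -->
        <key>pkg_path</key>
        <string>%RECIPE_CACHE_DIR%/%NAME%-%version%.pkg</string>
    </dict>
</dict>
```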
The second thing is to create a test policy called TEST-<title> that is scoped to nobody and not enabled. We are simply using the Jamf Pro policy list as a database; we read the policy later to track the latest version uploaded.
Finally, we need a way for AutoPkg to find our custom processors. The method is detailed on the AutoPkg wiki; basically I have a folder called PatchBotProcessors in my recipe folder containing the processors and a special recipe.
Here’s the special recipe.
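The gist itself is missing here, but the shape is a stub recipe that exists only to give the shared processors an identifier other recipes can reference (the identifier below is a guess; the real one is in the PatchBot repository):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Description</key>
    <string>Stub recipe so other recipes can reference the PatchBot shared processors.</string>
    <key>Identifier</key>
    <string>com.honestpuck.PatchBotProcessors</string>
    <key>Input</key>
    <dict/>
    <key>MinimumVersion</key>
    <string>1.0</string>
    <key>Process</key>
    <array/>
</dict>
</plist>
```

With this in place, an override can call the processor as com.honestpuck.PatchBotProcessors/JPCImporter.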
Let’s have a look at the code.
The first 25 lines are housekeeping before we define our class. Then the class sets up logging and the input and output variables before the first function definition. You will notice I have set up the logs to rotate daily and keep seven; that's because I run my code at a high log level and I want short logs.
Speaking of debugging, you'll notice that when we come to a grinding halt due to some sort of problem I raise a ProcessorError. This is an exception class provided by AutoPkg that handles the error and places the details into the report plist; you just pass it a string and it takes care of the rest.
upload is a single function that does all the work. I start off by calling subprocess to run curl and upload the package file. This uses an unsupported, unofficial hack, and with a lot of testing I've discovered that using curl is much more successful than any Python method I can find. It would be nice if Jamf gave us a way to do this via the API, but don't hold your breath; it's been an open feature request on Jamf Nation since before Noah.
The file upload doesn't save the package details, such as category, so we need a separate API call for that. There can be a long (in programming terms) delay between the file being uploaded and the package record being available for updating; you've probably seen this in the web GUI. Because of this the code tries multiple times, with a 15 second delay between attempts.
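A rough sketch of those two pieces, assuming the commonly used unofficial dbfileupload endpoint and the Classic API packages resource (the endpoint, headers, and retry counts are assumptions, not the exact PatchBot code):

```python
import subprocess
import time


def curl_upload_cmd(jss_url, auth, pkg_path):
    """Build the curl command for the unofficial dbfileupload endpoint.
    The endpoint and the DESTINATION/OBJECT_ID/FILE_TYPE headers are the
    widely shared hack, not a documented Jamf API."""
    name = pkg_path.split("/")[-1]
    return [
        "curl", "-s", "-u", auth,
        "-X", "POST", f"{jss_url}/dbfileupload",
        "-H", "DESTINATION: 0",
        "-H", "OBJECT_ID: -1",
        "-H", "FILE_TYPE: 0",
        "-H", f"FILE_NAME: {name}",
        "--upload-file", pkg_path,
    ]


def update_package_record(jss_url, auth, pkg_name, category,
                          tries=5, delay=15):
    """The package record may not exist immediately after the file lands,
    so retry the Classic API update with a delay between attempts."""
    xml = (f"<package><name>{pkg_name}</name>"
           f"<category>{category}</category></package>")
    for _ in range(tries):
        result = subprocess.run(
            ["curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
             "-u", auth, "-H", "Content-Type: text/xml",
             "-X", "PUT", "-d", xml,
             f"{jss_url}/JSSResource/packages/name/{pkg_name}"],
            capture_output=True, text=True)
        if result.stdout == "201":  # Classic API returns 201 on update
            return True
        time.sleep(delay)
    return False
```

Inside an AutoPkg processor a failed upload would raise ProcessorError instead of returning False; the sketch keeps it plain so it can run standalone.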
The last thing we have to do in upload is point the test policy at our package.
Finally, we have main, which does a sanity check, calls upload, and handles the AutoPkg report details. Oh, and the little stub that allows calling the processor outside AutoPkg for testing purposes.
Let The World Know
Before we can call package uploading complete, PatchBot needs to tell somebody what it has done. For this it runs a script, Teams.py, that uses a webhook to send a message to a channel in Teams.
AutoPkg provides a nice report as an XML plist, so plistlib gives us a good dictionary to parse, with two main sections: one for successful builds and uploads and the other for failures. Most of the script is JSON templates for the messages. The only real complication in the script is handling a totally empty run.
So we can do this on a regular basis we need some nuts and bolts to tie it all together.
Back when I started programming we used cron to schedule tasks, but that's been 'deprecated' on macOS for many years now, replaced by LaunchAgents and LaunchDaemons. So we need a LaunchAgent definition.
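The LaunchAgent plist itself is missing from this copy of the post; a sketch of the shape, assuming the shell script lives in ~/Library/LaunchAgents/bin/ and a daily morning run (the label, path, and schedule are all illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.autopkg</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/me/Library/LaunchAgents/bin/autopkg.sh</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>7</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```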
It gets to the right place like this:

```shell
mkdir -p /Users/"$(whoami)"/Library/LaunchAgents/bin/
cp ./autopkg.plist /Users/"$(whoami)"/Library/LaunchAgents/autopkg.plist
/bin/launchctl load /Users/"$(whoami)"/Library/LaunchAgents/autopkg.plist
```
You can see it runs a shell script so let’s see what that looks like:
```shell
# run the package build
/usr/local/bin/autopkg run --recipe-list=/Users/"$(whoami)"/Documents/autopkg_bits/packages.txt \
    --report-plist=/Users/"$(whoami)"/Documents/autopkg.plist \
    -k FAIL_RECIPES_WITHOUT_TRUST_INFO=yes
# messages to MS Teams
/Users/"$(whoami)"/Documents/autopkg_bits/Teams.py \
    /Users/"$(whoami)"/Documents/autopkg.plist
```
Notice we use AutoPkg's ability to read the recipes to run from a list, so neither the LaunchAgent nor the script ever needs to change, just the recipe list we feed AutoPkg. I really appreciate how well built AutoPkg is.
Next post I will explain the next step, moving our package into testing and the second custom processor.