I just released Scriptorium, a small console program. Here are some notes on how I used argparse to do that.
We need a function to parse our arguments. Parsing takes the line of words typed at the command line and processes them to extract structure and meaning. The term ‘word’ can be fraught with complexity on the shell command line, but a simple definition is any set of characters delimited by spaces or matching quotes.
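That definition of a ‘word’ is exactly what Python’s standard `shlex` module implements, so a quick sketch can show the splitting rule in action (this is just an illustration; argparse itself receives the words already split by the shell):

```python
import shlex

# shlex.split applies shell-style word splitting: whitespace separates
# words unless characters are grouped by matching quotes.
words = shlex.split('scriptorium edit "My Script.sh"')
print(words)  # → ['scriptorium', 'edit', 'My Script.sh']
```

Note that the quoted filename survives as a single word even though it contains a space.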
Scriptorium has a simple structure: scriptorium <command> [<argument>]. Some commands take no arguments, for some the argument is entirely optional, and for others it is required. But let’s start building that parser.
You can also see we are starting to build some help in ‘epilog’ – this is the final line printed when you run scriptorium --help. Then we need code to parse each individual command’s arguments. That is the job of a subparser.
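A minimal sketch of that pattern might look like this. The command names (`edit`, `list`, `push`) and the epilog text are hypothetical stand-ins, not Scriptorium’s actual commands, but the structure — an epilog on the main parser plus one subparser per command, covering the no-argument, optional-argument, and required-argument cases — is the argparse idiom the post describes:

```python
import argparse

def build_parser():
    # epilog text is printed as the final line of `--help` output
    parser = argparse.ArgumentParser(
        prog="scriptorium",
        epilog="See the README for more details.",
    )
    subparsers = parser.add_subparsers(dest="command", required=True)

    # a command with a required argument
    edit = subparsers.add_parser("edit", help="edit a script")
    edit.add_argument("name", help="name of the script to edit")

    # a command with an entirely optional argument
    lst = subparsers.add_parser("list", help="list scripts")
    lst.add_argument("pattern", nargs="?", help="only list matching scripts")

    # a command with no arguments at all
    subparsers.add_parser("push", help="push changes to the remote")
    return parser

args = build_parser().parse_args(["edit", "install_fonts.sh"])
print(args.command, args.name)  # → edit install_fonts.sh
```

Each subparser then maps naturally onto its own handler function, which is how argparse ends up structuring the code for you.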
A little while ago I became extremely annoyed by a bunch of scripts in my Jamf Pro instance that weren’t properly named and didn’t have proper author comments.
What’s an easy way to rename a dozen scripts and edit twenty? There really isn’t one, so instead of doing it by hand I decided I wanted a software system that made it easy. Yes, I know, spending a chunk of your spare time writing 750 lines of Python (it was actually 950 until I did some serious refactoring) rather than a day doing it manually might seem a little, well, silly. I know, I have a problem, I’m working on it.
Scriptorium is the result: a Python script that uses a combination of two directories and two git repositories to provide versioning, tracking, and backup while adding an easier-to-use interface for editing the scripts.
I’d never built a script with an extensive command line interface before. Python’s argparse library makes it incredibly easy. Not only does it let you quickly put together the commands and options, in the process it builds your help system and structures your code, with each command getting its own function.
Since Scriptorium leverages git, adding a git module to zsh and the GitLens extension to Visual Studio Code brings even further benefits.
I’ve tried to make the README file as comprehensive as possible, so go give that a read and grab a copy on GitHub.
You might also find me speaking about it at JNUC2021.
Since the last time I wrote about PatchBot I’ve made a few improvements to the Production processor.
Moved decision logic from Move.py into the processor.
Added the recipe variable delta to specify days between test and production.
Added the recipe variable deadline to set the Self Service deadline.
Added defaults for delta and deadline to the top of Production.py to ease customization.
Both recipe variables are optional. It’s now possible to use the -k option of AutoPkg to quickly move a package from test into production with a short Self Service deadline if you need to:
autopkg run Firefox.prod -k "delta=-1" -k "deadline=2"
The above will immediately move the Firefox package into the production patches and set a deadline of two days. We set a delta of -1 because the system used for command line arguments doesn’t let us tell apart the zero we get when no value was passed and the zero we get when it was explicitly set to zero. A value of -1 has the same effect as zero: if the number of days between the date the package was put into test and today is zero, then “greater than or equal to zero” and “greater than or equal to -1” are both true.
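The delta handling can be sketched in a few lines. This is a simplified model, not Production.py itself: the function name, the record layout, and the default value of 7 days are assumptions made for illustration. The key point is that a recipe variable reading as 0 means “not set, use the default”, while -1 behaves like an explicit zero:

```python
from datetime import date

# Hypothetical module-level default, overridable via -k "delta=..."
DEFAULT_DELTA = 7

def ready_for_production(test_date: date, delta_var: str, today: date) -> bool:
    # An unset recipe variable arrives as 0, which we cannot distinguish
    # from an explicit 0, so 0 falls back to the default and -1 is the
    # way to force an immediate move.
    delta = int(delta_var) if int(delta_var) != 0 else DEFAULT_DELTA
    days_in_test = (today - test_date).days
    return days_in_test >= delta
```

With `delta_var="-1"` the comparison `days_in_test >= -1` is always true for a package already in test, so the move happens on the next run.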
I’ve also added a tool to the #patchbot channel in the MacAdmins Slack that posts to the channel whenever I push new code to the GitHub repository.
So the inevitable happened after I published my blog posts about PatchBot. I found some small bugs, now fixed.
But also inevitable was somebody telling me there was a better way to do something.
It turns out that since JPCImporter only needs the pkg_path variable, it can be used as a post processor when calling the package build recipe. That means you don’t need to alter the .pkg recipe override at all. That’s a whole bunch of recipes we can just forget about. Thanks to Graham Pugh for the tip.
The first AutoPkg call needs to be changed:
# run the package build
/usr/local/bin/autopkg run --recipe-list=/Users/"$(whoami)"/Documents/PatchBotTools/packages.txt \
--post com.honestpuck.PatchBot/JPCImporter \
There were also some bug fixes to Move.py and ProdTeams.py.
In the first post I gave a short summary of how the system works and introduced JPCImporter, the first AutoPkg custom processor.
In the second post I introduced patch management and the second custom processor.
In this post we will look at the Python script that decides when to move a package into production, and the custom processor that does all the work.
Move.py is fairly simple. Strip off the first 50 lines of housekeeping and you’re left with a loop that does all the work. It goes through every patch policy on the server, and if a policy is an enabled test patch policy, it looks for a date more than six days ago in its Self Service description.
It uses that to build a command that gets run in a subprocess call. I keep looking at this and thinking it would be nice to just call the right function in AutoPkg and pass it the list, but that’s probably even more fragile than the current approach.
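The shape of that loop can be sketched as below. The helper names and the record fields (`enabled`, `name`, `self_service_date`, `app`) are hypothetical — the real Move.py reads these from the Jamf API and parses the date out of the Self Service description — but the flow matches the description above: filter to enabled test policies, check the age of the test date, then hand the resulting recipe list to a subprocess call:

```python
import subprocess
from datetime import date

def find_ready_recipes(policies, today=None):
    """Return the .prod recipe names for packages past their test window."""
    today = today or date.today()
    ready = []
    for policy in policies:
        # only enabled test patch policies qualify
        if not policy["enabled"] or not policy["name"].endswith("Test"):
            continue
        # date parsed from the policy's Self Service description
        if (today - policy["self_service_date"]).days > 6:
            ready.append(policy["app"] + ".prod")
    return ready

def run_autopkg(recipes):
    # build the command and hand it to a subprocess call
    subprocess.run(["/usr/local/bin/autopkg", "run"] + recipes, check=True)
```

Keeping the decision (the loop) separate from the action (the AutoPkg run) is what makes the later Production.py refactor possible.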
You will also see that after AutoPkg it calls a final script to send more messages to Teams. This could have been done in Move.py but I liked having the code separate during development and it makes it easier for you to use my code.
Notice that we don’t actually change anything on the server in Move.py; for that we rely on our final custom processor, Production.py.
Last post I detailed the first steps taken by PatchBot, building and uploading a new version of an application package.
This post I will explain the next step, updating the testing patch policy.
The first thing I should explain is why we don’t do this when we build and upload the package. It boils down to the reliability of our patch definition feed. If the patch definition feed were updated at exactly the same moment a new version became available, we could have done it all in JPCImporter. Unfortunately the patch definitions are only updated every 12 hours (I think), and that’s enough of a window for Murphy. Kinobi keep decreasing the window, but no matter how narrow it gets you know Murphy will have his way — so, defensive design and coding.
A little over a year ago I set out to build a system that would deliver application patches to my users without me doing a thing.
I have leveraged AutoPkg, the Jamf patch management system, and its API to build a total solution where almost all of my applications are automatically patched across my fleet without me touching a thing.
I call it PatchBot. This will be a series of four blog posts explaining the system and how to get it working. All the code and tools are published on GitHub.
When a new version of an application is available, a package is built and a patch is sent to a test group. After a set delay the package is moved into production, where everyone with the app installed gets a patch to update, and our application install policy is updated.
Two LaunchAgents automatically run AutoPkg with some custom processors and scripts to perform all the work.
While it does take some setting up for each application, the process requires manual intervention only to stop an application patch package going into production when a problem is found, or to speed it up when you need to deploy a security patch quickly.
Patch levels across the fleet have improved dramatically.