Now We Have bash Completion For Munki

I’m on a roll. I’ve written the bash completions for Munki.

(tl;dr – the completions are on GitHub.)

It’s getting easier to write them. There was one little trick I used that I didn’t
mention in my last post and thought I’d share: how to use find and replace with
regular expressions to generate some of your code.

For this I use Find... in BBEdit. I started with a list of the commands, one on each
line.
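As a hypothetical illustration of the trick (the command names and the `_munki_` handler names here are invented, not the actual Munki code): a grep-style find of `^(.+)$` with a replace of `\1) _munki_\1 ;;` turns each command name into one arm of a bash `case` statement. The same pattern works in BBEdit’s Find dialog with Grep checked, or with sed:

```shell
# A list of subcommand names, one per line (invented examples),
# transformed into case-statement arms by one regex substitution.
# Find: ^(.+)$    Replace: \1) _munki_\1 ;;
printf 'install\nremove\nreport\n' |
    sed -E 's/^(.+)$/\1) _munki_\1 ;;/'
```

Each output line then drops straight into the completion function’s case statement.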


bash completion for autopkg

Over the weekend I was feeling a little bored so I decided to try my hand at writing a shell script to add custom completion for autopkg to bash.

(tl;dr – the script is on GitHub.)

I found an example for the zsh shell that lacked a couple of features, and I spent some time examining the completion script for brew, so I wasn’t totally in the dark.

There are a number of tutorials available for writing them, but none are particularly detailed, so they weren’t much help.
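For orientation, the core of such a script is a function that fills the COMPREPLY array, registered with the `complete` builtin. This is only a minimal sketch – the subcommand list below is abbreviated and illustrative, not the full autopkg set:

```shell
# Minimal bash completion sketch; the subcommand list is illustrative.
_autopkg_complete() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    # Only complete the first word after the command name.
    if [ "$COMP_CWORD" -eq 1 ]; then
        local subcommands="run list-recipes make-override repo-add search"
        COMPREPLY=( $(compgen -W "$subcommands" -- "$cur") )
    fi
}
complete -F _autopkg_complete autopkg
```

The real script on GitHub goes further, completing recipe names and option flags as well.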

Writing Shell Scripts

The first thing I should say is that I find writing shell scripts totally different to writing in any other language. I probably write shell scripts incredibly old school; shell and C were the two languages I was paid to write way back in the 1980s. It feels like coming home.


Containers Rock! Why I’m A Docker Fan

Docker for the Macintosh has recently emerged from beta and I’m ecstatic.

Docker implements a way of walling off a piece of software from the underlying operating system using a tech they call “containers”.

This is an absolute godsend for deploying services. One of the problems in system administration is the cost and complexity of spinning up a new service and then removing it from a computer once it is no longer required.

Software, once installed and run, can spray pieces of itself all over the computer’s file system, and getting it out again is difficult.

Previously we have used virtual machines to isolate this problem. That has its own costs: a virtual machine means you are running (at least) two complete operating systems on the hardware, with a corresponding cost in memory and hard disk space.

Containers lower the cost considerably. They have all the advantages of virtual machines but share the operating system kernel with each other and the underlying OS. This makes them smaller and means they consume considerably fewer resources than virtual machines. It also makes them quicker to download and deploy.

Since Docker is open source there is now a huge community around it. Docker containers are easily available for a huge range of applications; a quick visit to Docker Hub will show you how large.

Docker containers may well be the holy grail of app deployment. They certainly tick all the boxes system administrators require.

Using Docker

So how easy is it to use? Installing it is trivial: just download the install package and copy the Docker application to your Applications folder. You might also want to download Kitematic, which provides a GUI interface to Docker; it also just requires downloading and copying the app to your Applications folder. Docker is just as easily installed on a Linux box.

You can also install bash completion for docker with something like this:

curl -XGET <URL of the docker completion script> > "$(brew --prefix)"/etc/bash_completion.d/docker

I wish I could tell you how easy it is to build a Docker container from scratch, but every time I searched Docker Hub for a container I wanted, someone else had already built it, or built a large chunk of it.

As an example, I wanted a container running Python 3, Jupyter and the add-on for bash notebooks. Sure, I could have built it from scratch, but Continuum, the Anaconda people, already have a Docker container with Python 3 and Jupyter (along with a bunch of other useful Python libraries) installed, so:

docker run -it continuumio/anaconda3 /bin/bash

which will download and run the Python 3 version of Anaconda in a container. Then, when the container runs (the -it makes it an interactive container):

pip install bash_kernel
python -m bash_kernel.install

then exit the container and at the terminal prompt

docker ps -a
docker commit <container_name> tonyw/jupyter

The ps -a lists all the containers so I know which one to commit and the commit saves the changed container with (optionally) a new name. Now we can run the new container.

docker run -d -p 8888:8888 -v /Users:/Users --rm tonyw/jupyter \
    jupyter notebook --ip='*' --port=8888 \
    --notebook-dir /Users/tonyw/dev/Notebooks

This runs the Docker container in ‘daemon’ mode and when the container starts runs the command at the end, in this case Jupyter in notebook mode.

Of course, if I just want to run Python 3.5 instead of Jupyter I can always replace the -d with -it and the jupyter command with bash, and I get a shell in the container.

Docker Magic

Now all the Docker gurus out there are screaming at me that I should use a Dockerfile to build my custom container, defining all sorts of magical stuff like the default command to run when the container starts, the working directory and all the rest, so I wouldn’t need them all in my long command line. Frankly, while that would probably be a good idea, I haven’t quite managed to learn all that automated magic and it almost seems like too much work.
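For the record, a Dockerfile version of my example might look something like this. This is only an untested sketch of the idea – the same Anaconda image as before, with the pip install baked in and the long command line moved into CMD:

```dockerfile
# Sketch only: bakes the earlier interactive steps into an image.
FROM continuumio/anaconda3

# The bash kernel installed by hand earlier.
RUN pip install bash_kernel && python -m bash_kernel.install

EXPOSE 8888

# The default command, replacing the long docker run command line.
CMD ["jupyter", "notebook", "--ip=*", "--port=8888", \
     "--notebook-dir=/Users/tonyw/dev/Notebooks"]
```

A `docker build -t tonyw/jupyter .` would then shrink the `docker run` line down to just the port and volume flags.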

Perhaps for my next blog post.

Further Reading

Macadmins Dockerhub
Pepijn Bruienne’s talk on Docker from PSU MacAdmins 2015

BBEdit Really Doesn’t Suck

Recently, with version 11, BBEdit introduced a demo mode, so I thought I’d take another look at the big brother of TextWrangler. I have to say BareBones Software’s tag line for BBEdit is true: “BBEdit – It doesn’t suck!”

There are two tasks that I use an editor for, writing Python and writing Markdown so those are the two that I looked at.

There are a number of things you can do to improve BBEdit as a Python IDE. The first is to install Dash. This is a brilliant tool for searching documentation sets and can be easily searched from BBEdit. Just select a library call and choose “Find In Reference…” under the Search menu and BBEdit will pass the search to Dash. Dash will search across all your documentation sets but it is easy to set the sort order so the Python entries are close to the top and in the Dash results window there is a little Python icon next to the Python results.

The other neat item under the Search menu is “Find Definition”, this will find where in your file a function is defined – useful if you have a long source file.

But how does that work if our project is in multiple source files? Well, Unix has long known of that problem and has a solution: the tags file, first used in vi. This is a file that lists all the function definitions and variables used in all the files in a directory tree. Not only can BBEdit use a tags file, it can (using the open source utility ctags) generate one. At the top of your project directory tree, on the command line, bbedit --maketags will generate a tags file, and now “Find Definition” will work across all the Python files in the tree.

BBEdit can also run a syntax check across your source. You will find “Check Syntax” under the “#!” menu, which also allows you to run your Python code. The final entry in this menu, “Show Module Documentation”, displays a new text window with the output from running pydoc across your file. I love this; it encourages me to properly document my code as I write, with pydoc strings for each function. The output is extremely useful as a memory aide for large programs and modules.

Next up is running a lint across our Python source. BBEdit comes with another command line tool, bbresults, which turns formatted error output from Unix command-line tools into a BBEdit results window. This is an exceptionally neat trick. At the command line, flake8 | bbresults will give you a window in BBEdit with each of the errors and warnings listed, and a click on one will take you to the exact spot in your source. If you don’t have flake8 installed then you can install it with conda or pip.

By the way, this works because the bbedit and bbresults command line tools understand the +n argument syntax for going to line n in a file. Sublime Text and other editors on the Mac could learn this.

A final tip for programmers: BBEdit recommends setting the $EDITOR shell variable to bbedit -w, where the -w flag has the bbedit command line tool wait until you close the window before exiting. If you add the --resume flag as well, then when you close the window in BBEdit it will return the Terminal to the front. Exceptionally handy.
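In other words, one line in your shell profile (~/.bash_profile here, but wherever your shell reads its environment from):

```shell
# Wait for the BBEdit window to close, then bring Terminal back
# to the front.
export EDITOR="bbedit -w --resume"
```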


One complaint I would make, and I make it about a number of editors, is that the Markdown syntax highlighting is on the stupid side. This is generally due to the flaws in using nothing but regular expressions to do the highlighting. The most obvious flaw is that underscores in such things as a URL will trigger highlighting for italics.

If you want, you can “lint” your prose using proselint and bbresults. Personally I find proselint rarely throws up something I actually want to change, but your mileage might vary; it’s a good tool for looking at prose text.

BBEdit has no special facilities for writing Markdown, such as inserting the codes for text styles or formatting, but it does have “Clippings” – short pieces of text that can be kept in sets, each with an optional keyboard shortcut. I don’t use them; I have a few Keyboard Maestro macros for such things as web links and otherwise just type the few extra keystrokes.

BBEdit also has “Text Filters”, which allow you to run the current selection through a script. For Markdown I have one that turns tab separated text into a Markdown table, incredibly useful for tables copied from a spreadsheet. I’m not sure where I got it, but I suspect it was from Brett Terpstra’s blog.
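My filter is lost to memory, but a minimal version of the idea looks something like this (a sketch, not the script I actually use). A text filter is just a script that reads the selection on stdin and writes the replacement to stdout; here the awk command is wrapped in a function purely so it is easy to demonstrate:

```shell
# Sketch of a tab-separated-text to Markdown-table filter. In BBEdit,
# the awk command alone, saved in the Text Filters folder, suffices.
tsv_to_markdown() {
    awk -F'\t' '{
        line = ""
        for (i = 1; i <= NF; i++) line = line "| " $i " "
        print line "|"
        if (NR == 1) {              # separator row after the header
            sep = ""
            for (i = 1; i <= NF; i++) sep = sep "| --- "
            print sep "|"
        }
    }'
}

printf 'Name\tCount\nfoo\t3\n' | tsv_to_markdown
```

The first input row becomes the table header, followed by the `| --- |` separator row Markdown requires, then the data rows.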

BBEdit is a good editor, well worth the $50 purchase price, and it has a number of advantages over its free little brother TextWrangler. As both a general purpose editor and an editor for programming, I’d have to say it is the best editor available on the Mac at the moment, though Sublime Text comes close.

Jupyter Releasing Some Nice Software

The Jupyter group have released an alpha version of a new Notebook environment called JupyterLab.

JupyterLab is browser based, just like the old notebook system, but adds a multiple pane environment. I’m not going to go into the details of the collaboration between the large number of organisations that have gone into the development; go read the blog post announcing JupyterLab. Suffice to say that I’m glad such a high powered group are working on my favourite Python environment.

I installed the alpha (it’s quickly done with pip) and had a look. It’s an exciting looking development and will make a brilliant Python development environment.

At the moment it seems to be suffering from minor speed problems and minor layout problems in Safari (they are minor, they don’t appear in Google Chrome, and Safari is not currently listed as a supported browser, so I’m not going to complain too loudly).

The built-in editor can syntax colour Python. It even has colour themes for those who, like me, want a particular look in their editor. At the moment it indents only two characters for a tab (PEP 8 says it should be four), and if you hit return with the cursor in column 1 you get a first-level indent on the next line.

These are the sort of problems you can expect in alpha software. I think I might install the current development version from GitHub and check there before filing a couple of bug reports. I’m a bit idiosyncratic; there’s nothing I like more than spending an hour or two getting a bug down to its essentials and filing a report.

IPython 5

They have also released a new version of IPython they are calling IPython 5.0 LTS. It has some nice new features, including syntax highlighting as you type and much better multi-line support. This is due to shifting from various command line interfaces to the pure-Python readline replacement prompt_toolkit.

I think the move to prompt_toolkit is going to pay major dividends as the library (currently at version 1.0.3) adds yet more functionality and that functionality moves into IPython. Jonathan Slenders, the author of the library, is also developing clones of Vim and tmux in pure Python using it, and intends to fold features from those projects back into prompt_toolkit.

They are designating this as “Long Term Support” as it will be the last IPython to run under Python 2; IPython 6 will require Python 3. Not all is lost though: they say they will continue to support Python 2 kernels with Jupyter Notebooks (and, we assume, the new JupyterLab). As they say in their announcement, “For the 5.x series releases we are making an exception to that rule: until the end of 2017 the core team will do its best to provide fixes for critical bugs in the 5.x release series. Beyond that, we will deprioritise this work, but we will continue to accept pull requests from the community to fix bugs through 2018 and 2019, and make releases when necessary.” So it will be a while before we OS X users are forced to run Python 3 for IPython and break PyObjC and its brethren, which are written for 2.7 (we can also hope that well before the 2020 deadline Apple moves to Python 3 and does the port of PyObjC).

Easy Python Development

Taken together these two new releases improve Python development enormously for me. I have always been a fan of iterative development of my code in IPython and this just makes the explore and iterate method easier and easier.

The “Next” Human-Computer Interface

Earlier today I read a piece in The Atlantic entitled The Quest For the Next Human-Computer Interface, subtitled “What will come after the touch screen?”.

I’ve been interested in human-computer interfaces since the very early Eighties, when I first came across the work of Niklaus Wirth, Seymour Papert and Jef Raskin. For me human-computer interfaces are split in two. The first is the interface to _build_ software and the second is to _control_ software. Wirth worked mainly on the former, Raskin on the latter, and Papert in both areas, principally through his work on learning.

The Atlantic article is, of course, mainly concerned with the latter. How do people control the software on their computing device, how do they enter data and how do they get results.

It also starts from a broken premise, that there will be a “next” interface. Next implies there was a previous interface and that it has now been replaced. This couldn’t be further from the truth. It was only the most primitive of computers that predated the use of a keyboard and printer, two interfaces still going strong more than sixty years later. Speech recognition was usable for serious work as far back as the early 1980’s. Touch screens date from the same time. Virtual reality and augmented reality work, including work on using gestures, also began around then.

Let’s have a look at my favourite interface, the keyboard. You might think that not much has changed, but just think about spelling correction and predictive text. If you’re a programmer using a good editor then you can even have fairly good (and improving) context sensitive predictive text – the editor knows when you are typing a variable name and predicts only those, then on the next line realises you are calling a function and predicts on those. How about an editor that “knows” when you import a bunch of functions and adds those to the list to predict on?

Even better, in Google Wave Peter Norvig demonstrated context sensitive spelling correction. His example was the system correcting “icland is an icland” to “Iceland is an island”. He also demonstrated the system correcting a number of homophones, such as “Are they’re parents going two the coast?” corrected to “Are their parents going to the coast?”

So while the physical keyboard has not improved (indeed, keyboard junkies like me feel it has gone backwards), the intelligence behind the keyboard has improved, and improved the interface.

How about that voice technology?

First, let’s dismiss one of the statements in the Atlantic article. Missy Cummings (head of Duke University’s Robotics Lab) says, “Of course, the problem with that is voice-recognition systems are still not good enough. I’m not sure voice recognition systems ever will get to the place where they’re going to recognize context. And context is the art of conversation.”

I’m going to break that down. Voice-recognition is actually two problems. The first is translating the noise of a voice into a text stream. The second is understanding the text stream so that our software can act upon the request. In good systems the second informs the first, but they are different problems. So when Cummings talks about recognizing context she is talking about the second problem.

For all intents and purposes the first problem has been solved. Translating the noise of your voice to a text stream is becoming more reliable, less upset by your accent and faster by the day. Siri, for example, does this superbly.

So it is the second problem where improvements still occur. This is the field of study called “natural language processing”. The problem Cummings is talking about is partly discourse analysis, text linguistics and topic segmentation. All of these sub-fields have continued to progress. Indeed progress has been amazing for natural language processing within what researchers call “limited domains”. This is where the general topic of a conversation (or discourse) is limited to a specific area.

An example might be a search of a movie database.

“Show me all Cameron Diaz’s movies.”

“I’ve got 32 movies.”

“OK, how about just her comedies?”

“Here are the six movies starring Cameron Diaz marked as comedies.”

That is a conversation which uses context. A tiny example but the computer has to understand the meaning of “her” from the context of the conversation. The next time you talk “her” might be Judi Dench or Cate Blanchett. Now this is limited in domain and the context is easy but it *is* recognizing context. So research continues on understanding more complex examples of context and across a wider domain. Siri, the Amazon Echo and their ilk are improving constantly.

We have also seen constant improvements in touch interfaces: both in the hardware, with capacitive touch screens of excellent resolution replacing earlier resistive screens, and in the interface software, where tap, tap and hold, hard tap and hold, and swipe are all recognised with different meanings (and often different meanings in different contexts). Touch screen software is even getting good at recognising the difference between your finger or a pen and your hand accidentally brushing the screen.

So what will the next human-computer interface be? Mostly the old ones with improved software, hardware and interface design.

X World 2016 Was A Great Conference

So last Thursday and Friday was the AUC’s annual conference for Macintosh system administrators, X World.

Held at the University of Technology it is a combination of workshops, presentations and social events.

This year it started with pre-conference drinks organised by the Sydney Mac Admins group. We meet once a month or so and made sure our July meeting coincided with the start of the conference.

The first keynote was from Rich Trouton on OS X security. The first afternoon then saw other presentations, which I had to miss as I was giving my workshop “Bash For Beginners”. If you want the slides and other materials from the workshop, they are on my GitHub here.

The rest of the conference was equally good with a dinner on Thursday night, more presentations on the Friday and time to meet and gossip with many other Macintosh administrators.

If you are a Mac administrator in Australia or New Zealand then I recommend you start planning to attend next year’s conference. It is the best place you will find to learn and to meet others. The AUC has a YouTube channel where you can check out presentations from previous years as well as their other conferences.