Living Inside The Updated Windows Subsystem For Linux

This weekend I finally caved in and nuked the Windows installation on my work laptop, which had been degrading a fair bit after moving to and fro between stable and Insider builds, plus assorted hacks I did prior to the Creators Update (which didn’t improve matters any). Considering I’d held off doing a full nuke & pave since I got the machine, I’d say that the install on it had a good run.

I still don’t like the machine (it’s a Lenovo X1 Carbon, which I find too large and unwieldy for someone who’s constantly moving about), but re-imaging was painless: I reinstalled from the Windows 10 Enterprise ISO, pointed the new install at Azure AD, and pretty much all my settings, data and applications were preserved – so I did it twice, the second time around without preserving apps or data, because I’d rather start from a blank slate and re-sync my files from OneDrive.

Considering that the end result is a fully compliant machine that can talk to all the corporate resources (including all the policies and personal client certificates), and that it is completely self-provisioned (I did the same with a VM a few weeks ago as a trial run, and there is nothing I can’t access with either machine), it’s quite interesting to realize that most organizations where I came across centralized desktop management are doing it all wrong.

I spent the past decade and a half studiously avoiding machines managed by fully paranoid, let’s-restrict-every-single-option IT admins who would have kittens if any user dared to self-provision and configure their own laptop, and being able to do 99% of what I needed simply by logging in to Azure AD and then installing Office all by myself would certainly cause them to have an apoplectic fit.

More to the point, they would deny that it’s even technically possible, even though I’ve done it three times now (that test VM plus re-imaging my laptop twice). And yet it moves.

That said, this post is not about Windows – it’s about how I work inside it, and how the WSL is finally good enough for me to use whenever I’m away from my Mac.

My Use Case

I’m what born-and-bred UNIX folk call a hardcore terminal user. Besides spending a lot of time in tmux on remote systems, I actually use tmux locally, because it’s way faster to switch between multiple splits in a full-screen terminal than between multiple terminal windows or tabs. I don’t bother with pre-configured workspaces or similar things (partly because you will never be able to customize everything everywhere), and the only thing I have grudgingly customized is vim, which is my default editor for everything and which, together with tmux, serves me very well as a minimalist IDE.
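My tmux setup is correspondingly spartan – something along these lines (a sketch; the exact bindings are a matter of taste):

# ~/.tmux.conf – minimal sketch
set -g default-terminal "screen-256color"   # sane colors in most terminal emulators
bind | split-window -h                      # mnemonic horizontal split
bind - split-window -v                      # mnemonic vertical split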

But what I really need is a proper SSH environment – with a proper SSH binary that understands ssh-agent, keychain and agent forwarding, can use standard keys and config files and, most importantly, can run git with key-based authentication without forcing me to unlock my key every time. git for Windows is a kludge (even though there is an ongoing effort to build a decent credential manager, it’s still not as useful as SSH), and I need the real deal to get work done without friction.
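Under any UNIX (and, as it turns out, under WSL) that mostly amounts to a single line in ~/.bashrc – the key name here is a placeholder for whatever yours is called:

eval $(keychain --eval --agents ssh id_rsa)   # start or reuse ssh-agent, unlock the key once per boot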

Stuff I’m Using Under WSL

Without further ado, here’s a rundown of what I install and how I get it to do what I need.

Aside from SSH and vim, I need a decent terminal – that is one of the most critical aspects of my workflow, which relies on the Python, Go and Java runtimes (sadly, NodeJS as well), plus a few tweaks to get everything to play nicely together.

wsltty as console replacement

This is a hotly debated topic in some circles (and the Windows team has spent a lot of time pushing improvements to the standard console), but my muscle memory prevents me from using anything but a UNIX terminal, since I find the way cmd and PowerShell work abhorrent – cutting and pasting is still broken for me, scrollback just doesn’t work the way I expect it to, and the Windows console overall just doesn’t deliver the kind of user experience I need. On top of that, I can’t get rid of the scrollbar or customize anything else (colors, font, key bindings, mouse handling) to my liking.

So the first thing I do after installing the WSL is install wsltty and fish out my own pre-configured shortcut to it from OneDrive.

wsltty is essentially mintty from Cygwin, modified to talk to WSL, and with a few tweaks it works almost exactly like lxterminal (which is what I usually rely upon on Linux).

Since I don’t much care for the monospaced fonts that come with Windows, this time around I decided to try out Fira Code instead of copying across Andale Mono and friends from macOS. Not sure if it’s a keeper, but it’s very readable.
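The tweaks live in wsltty’s config file (%APPDATA%\wsltty\config) and amount to a handful of mintty options along these lines – a sketch, values to taste:

Font=Fira Code
FontHeight=11
Scrollbar=none
Term=xterm-256color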

Base Packages

I have a Makefile for setting up and updating the whole thing (hacked from the Ansible script I use to provision remote boxes to my liking), and after setting up a minimally viable set of packages (vim vim-python-jedi tmux htop curl wget keychain python-pip libssl-dev), I copy across my SSH keys and my vim/tmux configs, setting permissions and updating my vim plugin bundles.
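The core of it is nothing fancy – a sketch of the kind of targets involved:

# Makefile sketch – the real one is hacked from my Ansible provisioning scripts
PACKAGES = vim vim-python-jedi tmux htop curl wget keychain python-pip libssl-dev

setup:
	sudo apt-get update && sudo apt-get install -y $(PACKAGES)

dotfiles:
	chmod 700 ~/.ssh && chmod 600 ~/.ssh/id_*   # fix key permissions after copying
	vim +PluginInstall +qall                    # refresh vim bundles (assuming Vundle)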

Python and Machine Learning

I rely on Continuum’s Anaconda for most of my work these days, but I prefer to install it via pyenv so that I can easily switch to other versions. The notable bit is that everything works (albeit slowly – more on that in a bit), so you can run exactly the same binaries you’d put up on a server.

I then add dependencies like Keras, TensorFlow, etc. as work warrants.
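The gist of the setup looks like this (a sketch – release names and versions are whatever happens to be current):

pyenv install anaconda3-4.3.1    # pyenv ships build definitions for Anaconda releases
pyenv global anaconda3-4.3.1
pip install keras tensorflow     # plus whatever else work warrants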

Go, Java and NodeJS

I take a pragmatic approach to installing these, going for the original upstream versions wherever possible (except for NodeJS, which I’ve long installed via Nodesource).
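For reference, the Nodesource setup is their usual two-liner (the 8.x branch here is just an example – pick whichever release you need):

curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
sudo apt-get install -y nodejs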

Again, everything seems to work (I’ve even built a few ARM binaries with Go), although I’ve had Java crash once already while building an application – there are some known issues with it under WSL, so I’m not fretting.

The Azure CLI

One of the critical things I need for work is the Azure CLI, which under WSL I install under the system Python. You definitely want to upgrade pip and enable completion, though:

sudo pip install --upgrade pip pycparser azure-cli
sudo ln -s /usr/local/bin/az.completion.sh /etc/bash_completion.d/az.completion.sh
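A quick smoke test to check that everything is wired up:

az login                          # completes via a device code in the browser
az account list --output table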

Integrating with Visual Studio Code and Docker for Windows

Although it’s currently impossible to use the WSL git directly from Visual Studio Code, a good enough compromise is to configure bash as the integrated shell – that way I can use the GUI for all local operations, and fetch/push using the terminal.

To achieve that, I tweak my settings.json like so:

{
    "editor.fontFamily": "Fira Code",
    "editor.fontLigatures": true,
    "terminal.integrated.shell.windows": "C:\\Windows\\sysnative\\bash.exe",
    "terminal.integrated.shellArgs.windows": [
        "-l"
    ],
    "git.confirmSync": false
}

Docker is much easier – as it happens, Docker for Windows listens on localhost:2375, so the only real issue is making sure the docker command inside WSL matches what is running on the Moby VM inside Hyper-V, and exporting DOCKER_HOST=localhost:2375.
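In practice that means a one-off line in ~/.bashrc (after ticking the option to expose the daemon on localhost:2375 without TLS in the Docker for Windows settings):

echo 'export DOCKER_HOST=localhost:2375' >> ~/.bashrc
docker version                    # client and daemon versions should line up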

Installing the latest Docker CE version inside WSL works fine by following the Ubuntu 16.04 instructions, and I can then talk to the daemon and build containers just fine, as long as I stay completely inside WSL (knowing how Docker works with “remote” daemons, I suspect I’ll eventually come up against some glitches, but I’ve been lucky so far).
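For reference, those instructions boil down to adding Docker’s apt repository and installing docker-ce:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install -y docker-ce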

As an added datapoint, connecting to a Kubernetes cluster via kubectl just worked, too.

Oddities

So far, the overall experience is vastly better than using Cygwin, but I keep running up against minor aggravations that mar the experience. For instance, anything outside the WSL filesystem (i.e., my OneDrive) is mapped to UID root, which means many utilities refuse to write there, and it’s very hard to have a consistent developer workflow that spans both worlds – I either have to clone my repos inside WSL and forego using Visual Studio Code, or resort to all sorts of crufty hacks.

Update: this has changed with the final release of WSL, to the point where I can work perfectly well inside my OneDrive folders by just symlinking ~/OneDrive to them – which has the added benefit of affording me perfect sync with the Mac.
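The symlink itself is a one-liner (adjust the Windows-side path for your username):

ln -s /mnt/c/Users/<Username>/OneDrive ~/OneDrive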

Networking is still clearly a work in progress. For instance, name resolution is all over the place: sometimes I can SSH into machines on my LAN, and sometimes I can’t even resolve their names inside WSL. I have no clue as to why or how resolv.conf gets updated, but every time it happens it’s pretty much wrong. I suspect some unforeseen interaction between our corporate VPN setup and Docker for Windows (which runs in Hyper-V) might be the cause.

Update: To my utter amazement, I’ve of late been able to directly flash microcontrollers from inside WSL through /dev/ttyUSB0, which is automatically mapped to my FTDI adapter – from both the Arduino CLI tools and esptool.
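As an example, flashing an ESP board looks exactly like it would on Linux – firmware.bin is a placeholder here, and the chmod is only needed if the device node comes up root-owned:

sudo chmod 666 /dev/ttyUSB0       # only if the device isn't writable by your user
esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash 0x00000 firmware.bin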

Working Around the Slowness

Part of the reason why WSL is slow in some instances appears to be the interaction with Defender, which goes around sniffing at all the binaries and filesystem changes and takes up a good chunk of CPU.

What I did was add an exclusion for the entire lxss subtree by going to Virus & threat protection > Virus & threat protection settings > Add or remove exclusions and adding %LOCALAPPDATA%\lxss (which expands to C:\Users\<Username>\AppData\Local\lxss).

Update: After upgrading to the Windows 10 Fall Creators release, you can install several different distributions, each of which will reside under your AppData\Local folder. Ubuntu, for instance, is at C:\Users\<Username>\AppData\Local\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState on my new install, which is far less straightforward.

This carries with it a few risks, but the trade-offs are more than worth it – I can now launch the Azure CLI nearly instantly, whereas previously it would stutter along as it loaded all its internal modules.

For good measure, I also added a couple of “interesting” process names, like java, python and node, although this feels a bit like cargo culting since I’m not really sure Defender is seeing the exact same process names we see in Task Manager – I’ll be investigating this.

Conclusion

Considering that I still spend as much time as possible using a Mac and regular Linux, the WSL feels a lot better with the Creators Update, and having spent a few solid hours rebuilding a few of my apps on it, I think it’s finally ready for daily use – although I’m still sad that I can’t have better integration with Visual Studio Code, especially as far as git is concerned.

I will be keeping a close eye on updates to WSL, although it will take some major changes for me to go back to Insider builds (which so far seem to be the only way to get regular updates). Unless there is some egregious bug I haven’t come across yet, that seems highly unlikely.

Maybe that will change, but for now I’m going to stick to what I have and try to get some serious work done inside it instead of just fooling around – which, I think, was the entire point of building it in the first place.
