Git - committing as multiple identities (personal and work email)

The problem is pretty straightforward: I have a personal email and a work email and, depending on the project I’m working on, I should commit as the identity relevant to that repo. So in a personal project the git author should be damien@personalemail.com, and within a work project the git author should be damien@workemail.com. Git config can include other config files based on the directory the repo is in, as outlined in the git-config Includes documentation. Let’s assume our ~/.gitconfig file looks something like this: ...
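The post’s config is cut off above, but git’s conditional includes work roughly like the sketch below. The directory layout (~/work/) and file names are assumptions for illustration, not the post’s actual setup:

    # ~/.gitconfig: default (personal) identity
    [user]
        name = Damien Pontifex
        email = damien@personalemail.com

    # Apply work overrides only to repos under ~/work/
    [includeIf "gitdir:~/work/"]
        path = ~/.gitconfig-work

    # ~/.gitconfig-work: the included file only overrides the email
    [user]
        email = damien@workemail.com

Running git config user.email inside a repo under ~/work/ should then report the work address, while every other repo keeps the personal one.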

July 11, 2021 · 1 min · Damien Pontifex

Getting started with PowerShell and Azure on macOS

PowerShell 6.0 has now become generally available and with it the ability to use PowerShell cross-platform. Of all its goodness, I was most excited about the cross-platform Azure PowerShell capabilities this brings. To be fair, the azure-cli is also a great cross-platform tool, but sometimes the PowerShell capabilities seem to outshine the CLI. Installing PowerShell on macOS: the installation instructions on the repo are pretty straightforward, but in summary: ...
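The summary itself is truncated here; as a hedged sketch of the kind of commands involved (module names are assumptions and have changed over time, e.g. the Az module has since superseded AzureRM):

    # In a terminal: install PowerShell Core via Homebrew
    brew install --cask powershell

    # Then, inside a pwsh session: install the Azure module and sign in
    Install-Module -Name Az -Scope CurrentUser
    Connect-AzAccount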

2 min · Damien Pontifex

Inspecting TFRecord files and debugging TensorFlow data input

TFRecord files are TensorFlow’s suggested data format, although they are very difficult to inspect given their binary nature. Inspecting the contents of existing record files and ensuring the data in your input pipeline is as you expect is a good technique to have. Inspecting TFRecord values: the first trick is reading in the TFRecord files and inspecting their values in Python. As you’d expect, the TensorFlow API allows this (although it’s a little hidden away). The small code snippet below highlights using tf.python_io.tf_record_iterator to inspect ‘examples’ in your record file. Replace ‘label’ or ‘text_label’ as appropriate for your features, but it shows you can dot-access into the property values ...
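The snippet referenced above isn’t included in the excerpt; the API it names works roughly like this (the file name and the int64 feature type are assumptions; tf.python_io.tf_record_iterator is a TensorFlow 1.x API):

    import tensorflow as tf

    # Iterate over the serialized tf.train.Example protos in the record file
    for record in tf.python_io.tf_record_iterator('data.tfrecord'):
        example = tf.train.Example()
        example.ParseFromString(record)
        # Dot-access into the parsed features, e.g. an int64 'label' feature
        print(example.features.feature['label'].int64_list.value)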

2 min · Damien Pontifex

Speeding up TensorFlow development and debug in terminal

For most of my development, I use Jupyter notebooks, which are fantastic for iterative development, but running in managed environments such as Google’s ML Engine requires Python scripts. You can obviously run these locally with python in the terminal or your IDE, but the loop of debug, terminate, change and re-run is rather slow (from what I can tell, due to the import speed of tensorflow and other imported packages). I wanted to be able to keep these imports in memory (like Jupyter) and just re-run a single function during development. This is my workflow/setup for such a process. ...
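The workflow itself is beyond the excerpt; one way to get this behaviour, assuming an IPython session, is the autoreload extension (the module and function names below are hypothetical):

    # Inside ipython: pay the heavy tensorflow import cost once,
    # and have edited modules re-imported automatically before each call.
    %load_ext autoreload
    %autoreload 2

    import trainer.task as task  # hypothetical module under development

    # Edit trainer/task.py in your editor, then simply call the function again;
    # no interpreter restart is needed, so imports stay in memory.
    task.train_and_evaluate()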

2 min · Damien Pontifex