TensorFlow Serving our retrained image classifier

Here we’ll look at exporting our previously trained dog and cat classifier and calling it with local or remote files to test it out. To do this, I’ll run TensorFlow Serving in a Docker container and use a Python client to call the remote host. Update 12th June, 2018: I used the gRPC interface here, but TensorFlow Serving now has a REST API that could be beneficial or of more interest ...

5 min · Damien Pontifex
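As a rough illustration of the client side, here is a minimal gRPC Predict call against a TensorFlow Serving container. The model name 'model', the input name 'image' and port 8500 are assumptions for the sketch; they need to match whatever your exported serving signature and docker run command actually use.

```python
# Minimal gRPC client sketch for a TensorFlow Serving host.
# Assumes the server was started with something like:
#   docker run -p 8500:8500 \
#     --mount type=bind,source=/path/to/export,target=/models/model \
#     -e MODEL_NAME=model -t tensorflow/serving
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

with open('cat.jpg', 'rb') as f:
    image_bytes = f.read()

request = predict_pb2.PredictRequest()
request.model_spec.name = 'model'                    # hypothetical model name
request.model_spec.signature_name = 'serving_default'
# 'image' is an illustrative input name; use whatever your serving signature expects
request.inputs['image'].CopyFrom(
    tf.make_tensor_proto([image_bytes], dtype=tf.string))

result = stub.Predict(request, 10.0)  # 10 second timeout
print(result.outputs)
```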

TensorFlow Serving with a variable batch size

TensorFlow Serving can handle a variable batch size when doing predictions. I never understood how to configure this, nor the shape of the results returned. Having finally figured it out, here are the changes to our previous serving setup to accept a variable number of images to classify with our model. Serving input function First thing is to update our serving input receiver function placeholder. Previously we had set the placeholder to have a shape of [1]; for a variable batch size, it is as easy as setting it to [None]. Our updated input receiver function is now: ...

3 min · Damien Pontifex
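A minimal sketch of what such an input receiver function could look like, assuming the model takes JPEG-encoded image bytes; the preprocessing and the 'image'/'image_bytes' names are illustrative only:

```python
import tensorflow as tf

def serving_input_receiver_fn():
    # shape=[None] (rather than [1]) lets the client send any number of
    # encoded images in a single Predict request
    image_bytes = tf.placeholder(dtype=tf.string, shape=[None], name='image_bytes')

    # illustrative preprocessing: decode and resize each image in the batch
    def _preprocess(img):
        img = tf.image.decode_jpeg(img, channels=3)
        return tf.image.resize_images(img, [224, 224])

    images = tf.map_fn(_preprocess, image_bytes, dtype=tf.float32)

    return tf.estimator.export.ServingInputReceiver(
        features={'image': images},
        receiver_tensors={'image_bytes': image_bytes})
```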

TensorFlow Serving with a canned Estimator

I’ve had a few attempts at getting TensorFlow estimators into a serving host, along with a client I can use to query them. Having finally got it working, I thought I’d write up the steps for reproduction. Assumptions and Prerequisites The first assumption is that you have already trained your estimator (say the tf.estimator.DNNRegressor) and that it is now in the variable estimator. You also have a list of feature columns, as is standard, in a variable feature_columns. The example here is basically the same as my trained estimator. You will also have the TensorFlow Serving source code locally. If not, you can grab it and all its submodules by running git clone --recursive https://github.com/tensorflow/serving.git --depth=1. N.B. you will need the path to this later (mine is at ~/developer/serving). Finally, you will also need the gRPC tools installed, which you can do by running pip3 install grpcio-tools ...

4 min · Damien Pontifex
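Building on those assumptions (a trained estimator in estimator and the columns in feature_columns), a sketch of the export step might look like this; the 'exports' directory name is just an example:

```python
import tensorflow as tf

# Assumed to exist already, per the post, e.g.:
# feature_columns = [tf.feature_column.numeric_column('x', shape=[4])]
# estimator = tf.estimator.DNNRegressor(hidden_units=[32, 16],
#                                       feature_columns=feature_columns)

# Build a serving input fn that parses tf.Example protos according to the
# same feature columns used for training
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec)

# Write a SavedModel that the TensorFlow Serving ModelServer can load
export_dir = estimator.export_savedmodel('exports', serving_input_fn)
print(export_dir)
```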

Understanding TensorFlow's RNN inputs, outputs and shapes

Understanding the shape of your model is sometimes non-trivial when it comes to machine learning. Look at convolutional neural nets with their number of filters, padding, kernel sizes etc., and it’s quickly evident why understanding what shapes your inputs and outputs are will keep you sane and reduce the time spent digging into strange errors. TensorFlow’s RNN API exposed me to similar frustrations and misunderstandings about what I was expected to give it and what I was getting in return. Extracting these operations out helped me get a simple view of the RNN API and will hopefully reduce some headaches in the future. In this post, I’ll outline my findings with a few examples. Firstly, the input data shape: batch size is part of running any graph and you’ll get used to seeing None or ? as the first dimension of your shapes. RNN data expects each sample to have two dimensions of its own. Unlike the two spatial dimensions of an image, RNN data is a sequence of steps, each of which has a number of features. Let’s make this clearer with an example: ...

5 min · Damien Pontifex
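As a rough illustration of those shapes, a toy example with tf.nn.dynamic_rnn (the sizes are arbitrary) shows the [batch, time, features] input and the [batch, time, units] output:

```python
import numpy as np
import tensorflow as tf

batch_size, max_time, num_features, num_units = None, 10, 8, 16

# Inputs: [batch, time steps, features per step]; batch is left as None
inputs = tf.placeholder(tf.float32, shape=[batch_size, max_time, num_features])

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

print(outputs.shape)  # (?, 10, 16): one num_units-sized output per time step
print(state.h.shape)  # (?, 16): final hidden state for each sample in the batch

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs, {inputs: np.random.rand(4, 10, 8)})
    print(out.shape)  # (4, 10, 16)
```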

Using pre-trained GloVe embeddings in TensorFlow

Embeddings can be used in machine learning to represent data, reducing the dimensionality of the dataset and learning some latent factors between data points. Commonly this is used with words to, say, reduce a vocabulary of 400,000 words to 50-dimensional vectors, but it could equally be used to map post codes or other token-encoded data. Another use case might be in recommender systems. GloVe (Global Vectors for Word Representation) was developed at Stanford and more information can be found here. There are a few learnt datasets, including Wikipedia, web crawl and a Twitter set, each increasing the number of words in its vocabulary and with varying embedding dimensions. We will be using the smallest Wikipedia dataset, and for this sample will pick the 50-dimensional embedding. ...

4 min · Damien Pontifex
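A sketch of one way to get those vectors into TensorFlow, assuming glove.6B.50d.txt has been downloaded and unzipped locally; the variable names are illustrative:

```python
import numpy as np
import tensorflow as tf

# Parse glove.6B.50d.txt: each line is "<word> <50 floats>"
vocab, vectors = [], []
with open('glove.6B.50d.txt', encoding='utf-8') as f:
    for line in f:
        parts = line.rstrip().split(' ')
        vocab.append(parts[0])
        vectors.append(np.asarray(parts[1:], dtype=np.float32))

embedding_matrix = np.stack(vectors)            # shape [400000, 50]
word_to_index = {w: i for i, w in enumerate(vocab)}

# Non-trainable variable initialised from the pre-trained matrix
embeddings = tf.get_variable(
    'glove_embeddings',
    shape=embedding_matrix.shape,
    initializer=tf.constant_initializer(embedding_matrix),
    trainable=False)

token_ids = tf.constant([[word_to_index['the'], word_to_index['cat']]])
embedded = tf.nn.embedding_lookup(embeddings, token_ids)  # shape [1, 2, 50]
```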

Using TensorFlow Feature Columns in your Custom Estimator Model

The TensorFlow canned estimators got promoted to core in version 1.3 to make training and evaluation of machine learning models very easy. This API allows you to describe your input data (categorical, numeric, embedding etc.) through the use of feature columns. The estimator API also allows you to write a custom model for your unique job, and the feature column capabilities can be utilised there as well to simplify or enhance things. In a custom estimator, you are required to write a model_fn, and it’s here that you use the features from your input_fn with your model architecture. To showcase the tf.feature_column API, let’s compare an embedding column with and without this module. Without the feature column API, a typical embedding of your features can be set up like so: ...

2 min · Damien Pontifex
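To give a flavour of the comparison, here is a rough sketch of the two approaches side by side; the vocabulary size, dimension and 'token_id' feature name are made up for illustration:

```python
import tensorflow as tf

# Without the feature column API: create and look up the embedding yourself
def manual_embedding(features, vocab_size=1000, embedding_dim=8):
    embeddings = tf.get_variable('token_embeddings', [vocab_size, embedding_dim])
    return tf.nn.embedding_lookup(embeddings, features['token_id'])

# With the feature column API: describe the column once and let
# tf.feature_column.input_layer create the variable and do the lookup
token_column = tf.feature_column.categorical_column_with_identity(
    'token_id', num_buckets=1000)
embedded_column = tf.feature_column.embedding_column(token_column, dimension=8)

def feature_column_embedding(features):
    # inside a custom model_fn this would be the first layer of the network
    return tf.feature_column.input_layer(features, [embedded_column])
```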