Using TensorFlow Feature Columns in your Custom Estimator Model

The TensorFlow canned estimators were promoted to core in version 1.3 to make training and evaluating machine learning models straightforward. This API lets you describe your input data (categorical, numeric, embedding, etc.) through feature columns. The estimator API also allows you to write a custom model for your unique job, and the feature column capabilities can be used there as well to simplify or enhance things. In a custom estimator you are required to write a model_fn, and it's here that you combine the features from your input_fn with your model architecture. To showcase the tf.feature_column API, let's compare an embedding column with and without this module. Without the feature column API, a typical embedding of your features can be set up like so:

import tensorflow as tf

VOCAB_SIZE = 5000
EMBEDDING_DIMENSION = 50
def model_fn(features, labels, mode, params):
    with tf.name_scope('embedding'):
        embedding_weights = tf.get_variable(
            name='embedding_weights',
            shape=(VOCAB_SIZE, EMBEDDING_DIMENSION))
        # Look up the embedding vector for each word index in the batch
        embedding = tf.nn.embedding_lookup(
            embedding_weights,
            features['word_indices'])

Using the feature column API, we set up feature columns that match the keys in features and then use tf.feature_column.input_layer to process the features and create a flattened input tensor for our model. To replicate the embedding above, this becomes:

def model_fn(features, labels, mode, params):
    word_indices_column = tf.feature_column.categorical_column_with_identity(
        key='word_indices',
        num_buckets=VOCAB_SIZE,
        default_value=0)
    embedding = tf.feature_column.embedding_column(
        word_indices_column,
        dimension=EMBEDDING_DIMENSION)

    # Construct a flattened input layer to pass to the first layer of the model
    input_layer = tf.feature_column.input_layer(
        features,
        feature_columns=[embedding])
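
    # From here the flattened input can feed the rest of the architecture, for
    # example a couple of dense layers (the layer sizes and the
    # params['n_classes'] key are just illustrative)
    hidden = tf.layers.dense(input_layer, units=128, activation=tf.nn.relu)
    logits = tf.layers.dense(hidden, units=params['n_classes'])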

In both of these cases an initializer can be provided if you want to use pretrained GloVe weights, and trainable can be set to False so those weights are used as-is (a sketch of this follows the serving snippet below). The amount of code is reasonably similar for this scenario, but the descriptiveness of the feature column approach makes it easier to understand. Adopting this approach also gives you access to many other feature transforms that are less straightforward to write by hand, such as crossed, categorical and bucketized columns. The other advantage of the feature column API comes if you choose to use TensorFlow Serving or you're deserializing TFRecords: tf.feature_column.make_parse_example_spec generates the feature spec that can be passed to tf.parse_example, or can be used to build the serving input function as:

# feature_columns is the list of columns defined above, e.g. [embedding]
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
estimator.export_savedmodel('exports', export_input_fn)
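
As mentioned above, an initializer can be supplied to the embedding column to load pretrained GloVe weights, and other transforms such as bucketized and crossed columns are just as declarative. A minimal sketch, assuming a NumPy array glove_matrix of shape (VOCAB_SIZE, EMBEDDING_DIMENSION) has already been loaded from the GloVe vectors, and using hypothetical 'age' and 'country' feature keys:

# word_indices_column as defined in the model_fn above
glove_embedding = tf.feature_column.embedding_column(
    word_indices_column,
    dimension=EMBEDDING_DIMENSION,
    initializer=tf.constant_initializer(glove_matrix),  # pretrained GloVe weights
    trainable=False)  # freeze the embedding so the pretrained weights are used as-is

# Bucketize a numeric feature and cross the buckets with a string feature.
# Crossed columns are categorical, so wrap them in an indicator_column (or
# another embedding_column) before passing them to input_layer.
age_buckets = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column('age'),
    boundaries=[18, 25, 35, 50, 65])
age_x_country = tf.feature_column.indicator_column(
    tf.feature_column.crossed_column([age_buckets, 'country'],
                                     hash_bucket_size=1000))

Any of these columns can simply be appended to the feature_columns list passed to input_layer and make_parse_example_spec above.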

Conclusion

As you can see, the TensorFlow feature columns module provides a simplified approach to taking the features from your data and processing them for use in your custom model. You can also follow the style of the canned estimators and have your custom model architecture accept feature columns in the same way, keeping the programming model consistent.
