Notebook Template

Here we're outlining a sample notebook template so you can get an idea of what to include in your own notebook submission. We've also included an example blurb for each section, taken from the Show and Tell model. Feel free to replace the example text with your own.

Create Natural Captions for Images

The first part of your notebook should be a brief description that hits the important points: what your model does (its inputs and outputs), what it was trained on, and so on. Try to avoid ML-specific jargon to keep it understandable.

im2txt is an ML model that takes in images and generates human-like captions describing the scene. Given an image, it outputs a string: a human-like description of what the model "sees" in the image. The model was trained on 300k+ images. (More on performance below.)

Prediction Examples:

You can supplement the description with a brief example of inputs and outputs for user reference.

(image) a group of people riding bikes down a dirt road
(image) a group of people walking down a street

Keep reading for a live tutorial!

You can also add a message directing readers to your tutorial, which should follow this description.

How Good Is This Model?

This is where you should mention the strengths of the model and what sets it apart from other models. Try to explain benchmark metrics in intuitive rather than technical terms.

The results published in the original paper are comparable to human performance on certain accuracy/quality metrics. However, because of the relatively small size and limited variety of the training dataset, performance varies in the real world.

Misc

Any other important details can be placed here, including specific license and framework details, or anything else that would be useful to your audience.

The code for the model and training is licensed under the Apache License 2.0, and the trained model weights are licensed under the MIT License.

Tutorial

The tutorial should be a simple, step-by-step walkthrough: downloading the model weights and code, (pre-)processing some test data, and running inference.

The tutorial below is an example of the steps we anticipate most models will need, and how we think they can best be explained and carried out.

Download and Unzip Model Files from ModelDepot.io

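For example, a download cell might look like the sketch below. The URL here is a placeholder; substitute the actual ModelDepot.io link for your model files.

```python
# Fetch the pretrained model archive and unpack it into ./model.
# NOTE: MODEL_URL is a placeholder -- use your actual ModelDepot.io link.
import urllib.request
import zipfile

MODEL_URL = "https://example.com/im2txt-model.zip"  # placeholder URL
ARCHIVE = "im2txt-model.zip"

urllib.request.urlretrieve(MODEL_URL, ARCHIVE)
with zipfile.ZipFile(ARCHIVE) as zf:
    zf.extractall("model")  # yields the checkpoint and vocabulary files
```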

Download Model Code From TF Repo

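A sketch of this step, assuming the im2txt code lives under research/im2txt in the tensorflow/models repository:

```python
# Clone the TensorFlow models repo; the im2txt code is in research/im2txt.
# (In a notebook, `!git clone ...` works just as well.)
import subprocess

subprocess.run(
    ["git", "clone", "--depth", "1",
     "https://github.com/tensorflow/models.git"],
    check=True,
)
```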

Install Dependencies

Feel free to skip this step if you already have these dependencies installed.
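A minimal install cell might look like the following; the exact package list is an assumption, and note that the original im2txt code targets TensorFlow 1.x.

```python
# Install the libraries this tutorial relies on.
# (In a notebook, `!pip install ...` works just as well.)
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "pip", "install",
     "tensorflow==1.15", "numpy", "matplotlib", "Pillow"],
    check=True,
)
```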

Set Model Files

TODO: Change these values if you're downloading the model files somewhere else.

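For example (the file names below match the pretrained Show and Tell release, but treat them as assumptions and adjust them to wherever you unpacked the archive):

```python
# Paths to the files unpacked earlier. Change these if you put
# the model files somewhere else.
CHECKPOINT_PATH = "model/model.ckpt-2000000"  # assumed checkpoint name
VOCAB_FILE = "model/word_counts.txt"          # assumed vocabulary file
```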

Download and Visualize Some Sample Images

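A sketch of this step; the image URLs are placeholders, so point them at any JPEGs you like:

```python
# Download a couple of test images and display them inline.
import urllib.request
import matplotlib.pyplot as plt
from PIL import Image

IMAGE_URLS = [
    "https://example.com/bikes.jpg",   # placeholder
    "https://example.com/street.jpg",  # placeholder
]

filenames = []
for i, url in enumerate(IMAGE_URLS):
    filename = "test-%d.jpg" % i
    urllib.request.urlretrieve(url, filename)
    filenames.append(filename)
    plt.figure()
    plt.imshow(Image.open(filename))
    plt.axis("off")
plt.show()
```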

Load Our Dependencies and Model

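A sketch of the load step, roughly following run_inference.py from the im2txt repository and assuming the paths defined above:

```python
# Make the cloned im2txt package importable.
import sys
sys.path.append("models/research")

import tensorflow as tf
from im2txt import configuration
from im2txt import inference_wrapper
from im2txt.inference_utils import caption_generator
from im2txt.inference_utils import vocabulary

# Build the inference graph and prepare a function that restores
# the pretrained weights from our checkpoint.
g = tf.Graph()
with g.as_default():
    model = inference_wrapper.InferenceWrapper()
    restore_fn = model.build_graph_from_config(
        configuration.ModelConfig(), CHECKPOINT_PATH)
g.finalize()

# Load the vocabulary that maps word ids back to words.
vocab = vocabulary.Vocabulary(VOCAB_FILE)
```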

Create Some Captions for our Images!

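A sketch of the inference step, again mirroring run_inference.py: restore the weights, run beam search over each test image, and print the top captions with their probabilities.

```python
import math

with tf.Session(graph=g) as sess:
    # Restore the pretrained weights into the graph.
    restore_fn(sess)
    generator = caption_generator.CaptionGenerator(model, vocab)

    for filename in filenames:
        with tf.gfile.GFile(filename, "rb") as f:
            image = f.read()
        # Beam search returns the top captions, best first.
        captions = generator.beam_search(sess, image)
        print("Captions for %s:" % filename)
        for caption in captions:
            # Drop the <S> and </S> sentence-boundary tokens.
            words = [vocab.id_to_word(w) for w in caption.sentence[1:-1]]
            print("  %s (p=%f)" % (" ".join(words), math.exp(caption.logprob)))
```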

References

You should also include any references to papers or other relevant links at the bottom of the notebook. If this is your original model, let others know how to properly cite you for your work!

Show and Tell: A Neural Image Caption Generator

"Show and Tell: Lessons learned from the 2015 MSCOCO Image Captioning Challenge."

Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan.

IEEE Transactions on Pattern Analysis and Machine Intelligence (2016).

Full text available at: http://arxiv.org/abs/1609.06647

Code

https://github.com/tensorflow/models/tree/master/research/im2txt

Pretrained Weights

https://github.com/KranthiGV/Pretrained-Show-and-Tell-model