TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.
This document covers how to install TensorFlow (TF) using Anaconda and how to run a toy example in Python. Make sure to review the Anaconda documentation to learn how to create and maintain Conda virtual environments.
There are two options for using TensorFlow (TF): one uses CPUs for the computational tasks, and the other uses GPU resources. Depending on your workflow and the nature of your calculations, you might choose either method.
To use TF with CPU resources, use the following to create the environment (env) and install TF:
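A minimal sketch of the CPU setup is below. The environment name (tensorflow-env) and Python version are placeholders, not requirements of this document; adjust them to your cluster's conventions.

```shell
# Create and activate a Conda environment for CPU-only TensorFlow.
# "tensorflow-env" is a suggested name; pick any name you like.
conda create -n tensorflow-env python=3.9
conda activate tensorflow-env

# Install TensorFlow (the standard pip package runs on CPUs).
pip install tensorflow
```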
Use the following to install TF for GPUs. Note that for GPU support you need to install cudnn in addition to TF. Also, on an HPC system you need to request GPU resources and load the miniconda3 module first.
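A sketch of the GPU setup follows. The module name (miniconda3), environment name (tensorflow-gpu-env), channel, and package versions are assumptions and vary between clusters; check your site's documentation for the exact names.

```shell
# Load the Conda module first (the module name may differ on your cluster).
module load miniconda3

# Create and activate a GPU-enabled environment.
conda create -n tensorflow-gpu-env python=3.9
conda activate tensorflow-gpu-env

# Install the CUDA libraries (cudnn, plus cudatoolkit which it depends on)
# alongside TensorFlow. The conda-forge channel is one common source.
conda install -c conda-forge cudatoolkit cudnn
pip install tensorflow
```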
After creating the preferred env, we can use Python to run TF. The following is a simple example from TensorFlow tutorials. This example uses the MNIST database and Keras to run a pattern recognition training task on images of handwritten digits.
Create a Python file called mnist.py containing the following script (from the TensorFlow tutorials):
import tensorflow as tf

# Load the MNIST dataset and scale pixel values to [0, 1].
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A simple fully connected classifier for 28x28 grayscale digits.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
To run the above code with CPUs, we can use:
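A minimal sketch, assuming the CPU environment was created under the name tensorflow-env (the name is an assumption; substitute whatever you chose):

```shell
# Activate the CPU environment and run the training script.
conda activate tensorflow-env
python mnist.py
```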
Also, if you have created the tensorflow-gpu-env environment, you can use the following to run this job on a GPU.
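One way to do this interactively is sketched below. The partition name (gpu) and resource flags are cluster-specific assumptions; use the values your site documents.

```shell
# Request an interactive session with one GPU (partition name varies by cluster).
srun --partition=gpu --gres=gpu:1 --pty bash

# Inside the session, load Conda, activate the GPU environment, and run the script.
module load miniconda3
conda activate tensorflow-gpu-env
python mnist.py
```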
Note that on an HPC system with the Slurm scheduler, we can use an sbatch file, called tf-gpu.sh, such as:
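A sketch of such a batch script follows. The partition name, time limit, and memory request are illustrative assumptions; only the general structure (SBATCH directives, module load, environment activation, script invocation) is the point.

```shell
#!/bin/bash
#SBATCH --job-name=tf-gpu
#SBATCH --partition=gpu        # GPU partition name is cluster-specific
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --time=00:30:00        # walltime limit (adjust to your job)
#SBATCH --mem=8G               # memory request (adjust to your job)

# Load Conda, activate the GPU environment, and run the training script.
module load miniconda3
conda activate tensorflow-gpu-env
python mnist.py
```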
Submit the job to a GPU partition by running sbatch tf-gpu.sh. The Slurm output file will show the results.