TensorFlow is a popular machine learning (ML) framework; see https://www.tensorflow.org/.

A common use case is to import TensorFlow as a module in Python. It is then up to you as a user to write your particular ML application as a Python script using the functionality of the tensorflow Python module.
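As a minimal sketch of what such a script can look like, the following trains a tiny linear model on synthetic data. The model, data, and hyperparameters are illustrative examples only, not recommendations from this guide:

```python
# minimal_tf_example.py -- illustrative sketch; the synthetic data and
# one-layer model below are examples, not part of the C3SE documentation.
import numpy as np
import tensorflow as tf

# Synthetic regression data: y = 3x + 2 plus a little noise.
x = np.random.rand(256, 1).astype("float32")
y = 3.0 * x + 2.0 + 0.05 * np.random.randn(256, 1).astype("float32")

# A one-layer linear model trained with the Keras API.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=100, verbose=0)

# With enough epochs the learned weight and bias drift toward 3 and 2.
print(model.get_weights())
```

Such a script runs identically on CPU and GPU builds of TensorFlow; device placement is handled automatically.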

At C3SE we provide precompiled, optimized installations of both legacy and recent versions of TensorFlow in our tree of software modules; see our introduction to software modules. It is also possible to run TensorFlow using containers. For a generic guide on using containers, see https://github.com/c3se/containers/blob/master/README.md and https://www.c3se.chalmers.se/documentation/applications/containers/.

In the software module tree we provide TensorFlow versions both with CUDA GPU acceleration and versions using only the CPU. Which one you want to use depends on which part of our clusters you will run your jobs on. However, TensorFlow is heavily optimized for GPU hardware, so we recommend using the CUDA version and running it on the compute nodes equipped with GPUs. How to do this is described in our guide to running jobs.

To list the available versions you can use the module spider tensorflow command:

[hebbe@vera ~]$ module spider tensorflow

To use the version TensorFlow/2.2.0-Python-3.7.4 (i.e. TensorFlow 2.2.0 with Python 3.7.4 bindings) we inspect that particular module with the module spider command:

[hebbe@vera ~]$ module spider TensorFlow/2.2.0-Python-3.7.4

  TensorFlow: TensorFlow/2.2.0-Python-3.7.4
      An open-source software library for Machine Intelligence

    You will need to load all module(s) on any one of the lines below before the "TensorFlow/2.2.0-Python-3.7.4" module is available to load.

      GCC/8.3.0  CUDA/10.1.243  OpenMPI/3.1.4
      GCC/8.3.0  OpenMPI/3.1.4

Here we see that the TensorFlow module depends on a number of other software modules. All of these have to be loaded before the TensorFlow/2.2.0-Python-3.7.4 module itself can be loaded.

If you want to run on CUDA accelerated GPU hardware, make sure to select the set of modules including the CUDA/10.1.243 package.

[hebbe@vera ~]$ module load GCC/8.3.0 CUDA/10.1.243 OpenMPI/3.1.4 TensorFlow/2.2.0-Python-3.7.4

After loading the TensorFlow/2.2.0-Python-3.7.4 module, your environment is configured to start calling TensorFlow from Python. Here is a small test that prints the TensorFlow version available in your environment:

[hebbe@vera ~]$ python -c "import tensorflow as tf; print(tf.__version__)"

If you intend to run your calculations on GPU hardware it can be useful to check that TensorFlow detects the GPU hardware using its device_lib submodule. Here is an example from a node equipped with an NVIDIA Quadro GPU.

[hebbe@vera ~]$ python -c "from tensorflow.python.client import device_lib; device_lib.list_local_devices()"
2020-06-01 16:08:00.418439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: 
pciBusID: 0000:3b:00.0 name: Quadro P2000 computeCapability: 6.1
coreClock: 1.4805GHz coreCount: 8 deviceMemorySize: 4.94GiB deviceMemoryBandwidth: 130.53GiB/s
2020-06-01 16:08:00.434406: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2020-06-01 16:08:00.434436: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-06-01 16:08:00.926384: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-01 16:08:00.926435: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0 
2020-06-01 16:08:00.926442: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N 
2020-06-01 16:08:00.927639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/device:GPU:0 with 4454 MB memory) -> physical GPU (device: 0, name: Quadro P2000, pci bus id: 0000:3b:00.0, compute capability: 6.1)
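In TensorFlow 2.x there is also a terser, public API for the same check, tf.config.list_physical_devices (available from TensorFlow 2.1 onward), which avoids the verbose log output above:

```python
import tensorflow as tf

# Returns a list of the GPUs TensorFlow can see; on a CPU-only node
# (or with the CPU-only module loaded) the list is simply empty.
gpus = tf.config.list_physical_devices("GPU")
print(gpus)
```

An empty list here usually means either that you loaded a CPU-only TensorFlow module or that your job was not allocated a GPU.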