I try to take part in some Kaggle competitions, with mixed results. Recently I have been forced to reinstall my system quite a few times, and setting it up with all the libraries takes a while. So I decided to dockerize everything. NVIDIA has released a wrapper on top of Docker that allows access to the GPU from within a container, see nvidia-docker. Even better. I thought it would now be super easy to have all the libraries compiled for GPU use inside one container. How naive that was.
The main problem is that during the build process, even if you use nvidia-docker, the build does not have access to the GPU. The GPU devices and driver libraries are only mounted into the container at run time, so during `docker build` you cannot compile anything that needs physical access to the card, or even just the driver.
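A minimal illustration of the problem (the base image tag is just an example):

```
FROM nvidia/cuda:8.0-cudnn5-devel

# This step fails at build time: no GPU device and no driver
# libraries are mounted during `docker build`, even when the
# image is later run with nvidia-docker.
RUN nvidia-smi
```

The same `nvidia-smi` call succeeds once the container is started with nvidia-docker, which is exactly the asymmetry the workaround below exploits.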
Luckily, I found a workaround that does not involve any manual compilation. I build a base image, run a container from it with nvidia-docker, and launch a shell script that compiles, for example, the GPU version of TensorFlow. Then I stop the container, commit it, and build another image on top of the one I have just created by committing. This way I get an image with everything I need compiled for GPU use, without any manual interaction (hopefully).
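The whole cycle can be sketched roughly as follows; the image, container, and script names here are made up for illustration:

```shell
# 1. Build the base image (CUDA toolkit, sources, build deps -- no GPU needed yet).
docker build -t kaggle-base -f Dockerfile.base .

# 2. Run it with nvidia-docker so the GPU and drivers are available,
#    and let a script do the GPU-dependent compilation (e.g. TensorFlow).
nvidia-docker run --name builder kaggle-base /opt/build_gpu_libs.sh

# 3. Freeze the finished container into a new image, then clean up.
docker commit builder kaggle-base-gpu
docker rm builder

# 4. Build the final image on top of the committed one
#    (Dockerfile.final would start with `FROM kaggle-base-gpu`).
docker build -t kaggle-final -f Dockerfile.final .
```

Nothing in the sequence requires interactive input, so it can itself be wrapped in one driver script.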
Once the final image is ready I will probably publish all the scripts and Dockerfiles somewhere (GitHub?), so stay tuned.