JupyterLab
This is a short guide to launching a JupyterLab instance on a GPU node.
To run JupyterLab on a GPU node, you first have to create a Python virtual environment by following these instructions. You can use Conda or one of the Python versions available in LMOD. Note: you only need to do this once.
To launch your JupyterLab instance first allocate a GPU job:
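Assuming the cluster uses SLURM, the allocation might look like the following; the partition name, GPU count, and time limit are placeholders, not this cluster's actual values:

```shell
# Request an interactive shell on a GPU node (flags are examples;
# substitute your site's partition, resources, and time limit)
srun --partition=gpu --gres=gpu:1 --time=02:00:00 --pty bash
```

Once the job starts, you will get a shell on the allocated GPU node, where the remaining steps are run.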
Now launch your JupyterLab instance:
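With the virtual environment from the previous step activated, a minimal launch command could look like this (the `--ip` flag is an assumption here, so the server listens on the node's address rather than only on localhost):

```shell
# Start JupyterLab without trying to open a browser on the compute node;
# bind to all interfaces so it is reachable from outside the node
jupyter lab --no-browser --ip=0.0.0.0
```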
You should see output similar to this:
To access the server, open this file in a browser:
file:///home/c3_username/.local/share/jupyter/runtime/jpserver-619173-open.html
Or copy and paste one of these URLs:
http://10.119.12.71:8888/lab?token=58e3faf95c177d7e5e3f357b79ec7334b343930222067b81
http://127.0.0.1:8888/lab?token=58e3faf95c177d7e5e3f357b79ec7334b343930222067b81
Allowed Ports
Whenever you launch Jupyter without specifying a port, it tries port 8888 by default. However, if 8888 is already in use, it falls back to another port. Since only ports 8888 through 8897 are allowed on the GPU nodes, you may have to specify the port manually; otherwise you may not be able to connect to your Jupyter instance.
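One way is to pass `--port` explicitly, e.g. `jupyter lab --no-browser --port=8889`. The choice can also be automated; the sketch below probes the allowed range with bash's `/dev/tcp` pseudo-device and picks the first free port (the loop and variable names are illustrative, not part of the cluster's tooling):

```shell
# Find the first free port in the allowed 8888-8897 range.
# A connection attempt via /dev/tcp fails when nothing is listening,
# which means the port is free.
port=""
for candidate in $(seq 8888 8897); do
    if ! (exec 3<>/dev/tcp/127.0.0.1/"$candidate") 2>/dev/null; then
        port="$candidate"
        break
    fi
done
echo "free port: $port"
# then launch Jupyter on it:
#   jupyter lab --no-browser --ip=0.0.0.0 --port="$port"
```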