ParaView Remote Rendering
Running Paraview Remote Rendering in Oscar
The Center for Computation and Visualization (CCV) offers the academic community a way to visualize large datasets using Oscar and its powerful GPUs as a rendering server. The current GPU hardware and available memory on Oscar surpass common desktop models, offering a modern and robust solution for displaying large datasets in parallel jobs using the widely used open-source software Paraview. It is a simple two-step process: start the server, then connect the client.
The target audience for this service is members of the academic community who interact with and analyze large 3D datasets, e.g., point clouds, volumetric data, TIFF stacks, and mesh data. This includes groups working with microscopy data, MRI images, structural analysis, fluid dynamics, climate sciences, astrophysics, and more. In fact, ParaView can handle over 100 different file formats. The remote rendering service is targeted at scenarios where the personal/lab computer setup may not have the resources to handle the size of the underlying datasets. Common obstacles are older GPU technology or low RAM availability, which may cause performance issues.
Research areas that can benefit from ParaView's Remote Rendering Service
Above is a graphical representation of how the parallel render server works on Oscar. The user logs in to Oscar either via SSH or a VNC session. From the terminal, the user loads the Paraview module and executes the convenience script called
run-remote-server to start the Paraview server session and set the memory and walltime limits. Once the server starts, the user receives an email with the information needed to access the server. Lastly, the user connects the Paraview client (i.e., the desktop application) to the server running in Oscar. The client displays images that are processed by the server (on Oscar), which reconstructs the information computed by the nodes.
You can either download the Paraview Desktop App to your personal computer or access the desktop application already installed in Oscar's VNC. Installing it on your local computer may give you better interactivity.
Go to the official Paraview download website. Select your operating system (Linux, Windows, or Mac) and get the file, e.g.,
ParaView-5.9.0-Windows-Python3. Install it in your environment, go to the installation directory, and open Paraview.
module load paraview/5.9.0
You need to allocate the resources via SLURM, indicating the amount of memory you want to reserve, as well as a few optional parameters to configure your session. We have created a convenience script called run-remote-server for you to do so.
In order for
run-remote-server to be found, we need to load the Paraview module that supports this service (this appends the correct directory to our PATH).
-u indicates where the confirmation email will be sent. Technically, it could be any email address, but the remote render session can only be used by existing Oscar users.
The number of CPU cores and GPUs are determined by the memory request.
By default, the
run-remote-server script's minimum memory request is 45 GB (1 CPU/GPU) and the maximum is 180 GB (4 CPU/GPU). You can add more resources to your session using the
-m flag. Every multiple of 45 GB adds a CPU core and a GPU, e.g.:
# reserves 1 cpu/gpu
run-remote-server -m 45G -u [email protected]
# reserves 1 cpu/gpu
run-remote-server -m 40G -u [email protected]
# reserves 2 cpu/gpu
run-remote-server -m 90G -u [email protected]
# reserves 2 cpu/gpu
run-remote-server -m 120G -u [email protected]
# reserves 4 cpu/gpu
run-remote-server -m 180G -u [email protected]
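From the examples above, the allocation rule appears to be memory divided by 45 GB, rounded down and clamped to the 1–4 range. A minimal shell sketch of that inferred rule (the formula is deduced from the examples, not taken from the script itself, so treat it as an approximation):

```shell
# Inferred sketch: CPU/GPU pairs allocated for a given -m request (in GB).
# Assumption: pairs = memory / 45, rounded down, clamped between 1 and 4.
pairs_for_mem() {
  local mem_gb=$1
  local pairs=$(( mem_gb / 45 ))
  (( pairs < 1 )) && pairs=1
  (( pairs > 4 )) && pairs=4
  echo "$pairs"
}

pairs_for_mem 45    # -> 1
pairs_for_mem 40    # -> 1 (below the 45 GB minimum)
pairs_for_mem 120   # -> 2
pairs_for_mem 180   # -> 4
```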
The following is a description of the command and the available configuration settings.
usage: run-remote-server [-n cores] [-t walltime] [-m memory] [-q queue] [-o outfile] [-g ngpus] [-u user brown email]

Allocates resources, starts up the render server, and sends an email to the user requesting the service.

options:
  -t walltime as hh:mm:ss (default: 1:30:00)
  -m memory as #[k|m|g] (default: 45G)
  -o outfile save a copy of the session's output to outfile (default: off)
  -q slurm partition (gpu (default) | gpu-he)
  -u brown email of the user requesting the service
After executing the command, the system will allocate resources and send a confirmation email indicating that the service is ready; the email includes additional instructions on how to connect to the server using the Paraview UI.
The email sent by the system contains important information such as:
- How to create an SSH tunnel
- The IP address and port where the service is deployed
- How to connect to the server from multiple systems
Please read it and get familiar with how the process works.
There are two options to connect to the remote server:
- Your personal computer
Open a terminal and execute the command:
ssh -N -L <port-number>:<SERVER_IP>:<port-number> <your_brown_id>@ssh.ccv.brown.edu
<SERVER_IP> is the IP of the compute node in Oscar. Replace it with the value sent in the confirmation email.
<port-number> is the port exposed to access the rendering server. Replace it with the value sent in the confirmation email.
<your_brown_id> is your Brown username (it should be the same one used to connect to Oscar).
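As an illustration, here is the tunnel command assembled with hypothetical values (an IP of 10.1.2.3, port 11111, and username jdoe are placeholders; yours come from the confirmation email). The sketch only builds and prints the command so you can verify it before running it:

```shell
# Hypothetical example values; replace with those from your confirmation email.
SERVER_IP="10.1.2.3"
PORT="11111"
BROWN_ID="jdoe"

# Assemble the tunnel command; -N opens the tunnel without a remote shell,
# and -L forwards the local port to the compute node through ssh.ccv.brown.edu.
CMD="ssh -N -L ${PORT}:${SERVER_IP}:${PORT} ${BROWN_ID}@ssh.ccv.brown.edu"
echo "$CMD"
# -> ssh -N -L 11111:10.1.2.3:11111 jdoe@ssh.ccv.brown.edu
```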
This step will reset the scene, so before doing it, make sure to save all your data.
- 2. In the Paraview UI, go to the menu bar: File -> Connect...
- 3. Add Server:
  - 1. Name the connection 'Remote Rendering'
  - 2. Select Server type 'Client / Server'
  - 3. The host is the IP sent in the email
  - 4. The port also comes from the email
  - 5. In the next screen, select Startup Type: Manual
  - 6. Click on Save
  - 7. Select the newly created connection and click 'Connect'
After a few seconds, you will be connected to the HPC system automatically.
In the Paraview UI, go to the menu bar and open the
Memory Inspector. You will see a list of servers indicating the number of processes running on them.