Launch a Simulation via Linux Terminal
- Source set_nFX_environment.sh (*.csh) from the nanoFluidX installation directory. Note: This sets paths to the CUDA and MPI executables packaged with nanoFluidX.
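As a minimal sketch of this step, sourcing might look like the following; the install path /opt/Altair/nanoFluidX is a hypothetical placeholder, and the guard simply reports when the script cannot be found:

```shell
# Sketch of sourcing the environment script; the install path below is a
# hypothetical placeholder, so substitute your actual nanoFluidX directory.
NFX_DIR=/opt/Altair/nanoFluidX
if [ -f "$NFX_DIR/set_nFX_environment.sh" ]; then
    . "$NFX_DIR/set_nFX_environment.sh"   # sets CUDA/MPI paths and $nFX_SP
else
    echo "set_nFX_environment.sh not found under $NFX_DIR" >&2
fi
```

For csh/tcsh users, source the *.csh variant from the same directory instead.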
- Navigate to the directory containing the nanoFluidX case (*.cfg and *.prtl files).
- Execute nvidia-smi. If NVIDIA drivers are properly installed, this command lists the available GPU devices.
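A guarded version of this check fails fast when the driver is missing; the --query-gpu flags used here are standard nvidia-smi options, but the output naturally depends on your hardware:

```shell
# Sketch: verify the NVIDIA driver is present before choosing GPU IDs.
if command -v nvidia-smi >/dev/null 2>&1; then
    # Compact listing of GPU ID numbers and device names, one line per GPU
    nvidia-smi --query-gpu=index,name --format=csv,noheader
else
    echo "nvidia-smi not found: install or check the NVIDIA driver" >&2
fi
```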
Figure 1. wc command
wc EGBX_1mm.prtl -l
5673046 EGBX_1mm.prtl
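The particle count shown in Figure 1 comes from wc -l, which reports the number of lines (one particle record per line). A self-contained illustration using a tiny stand-in file rather than a real *.prtl case:

```shell
# Create a tiny stand-in file: three "particles", one per line.
printf 'p1\np2\np3\n' > stand_in.prtl
# wc -l reports the line count, i.e. the number of particle records.
wc -l stand_in.prtl    # prints: 3 stand_in.prtl
rm stand_in.prtl
```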
- Once you know which GPUs to use, enter the launch command string:
CUDA_VISIBLE_DEVICES=0,1,2,3 nohup mpirun -np 4 $nFX_SP -i EGBX_1mm.cfg &> output.txt &
CUDA_VISIBLE_DEVICES=0,1,2,3    Sets the GPUs to use, based on the GPU ID numbers. Note: If you are going to use all the GPUs in the machine, this is not required.
nohup    Prevents the case from crashing if the SSH connection is interrupted.
mpirun    Launches OpenMPI.
-np 4    Number of GPUs/ranks to be used for the simulation. Must match the CUDA_VISIBLE_DEVICES setting.
$nFX_SP    nanoFluidX binary. Note: On some systems this may require the full path to the executable.
-i EGBX_1mm.cfg    Specifies the input file (*.cfg) for the solver.
&> output.txt    Redirects all output, including error messages, to a log file.
&    Sends the job to the background.
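The nohup-plus-background mechanics above can be sketched with a harmless stand-in command; sleep substitutes for the solver here, since launching nanoFluidX itself requires the installed binary and a valid case:

```shell
# Sketch of the background-launch pattern; "sleep 1" stands in for the real
# "mpirun -np 4 $nFX_SP -i EGBX_1mm.cfg" solver invocation.
nohup sleep 1 > output.txt 2>&1 &   # keeps running if the SSH session drops
pid=$!                              # $! holds the PID of the background job
echo "solver stand-in started with PID $pid"
wait "$pid" && echo "finished; see output.txt for the log"
```

To follow the log of a running case, tail -f output.txt is the usual choice.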