CV-CUDA installation references
CV-CUDA documentation
CV-CUDA GitHub repo
Run samples in docker
1. Download and install the CV-CUDA samples from the CV-CUDA release page. The release files are all built against CUDA 12.
2. Run the Docker container; this image comes with Python 3.8:
Tip: find a suitable base Docker image with TensorRT and CUDA 12 on the TensorRT container release notes page.
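A sketch of the launch command (the image tag and the host mount path are assumptions; pick the tag from the TensorRT container release notes):

```shell
# Run a TensorRT + CUDA 12 base image with GPU access, mounting the
# downloaded samples into the container (host path is a placeholder).
docker run -it --gpus all \
    -v /path/to/cvcuda-samples:/workspace/cvcuda-samples \
    nvcr.io/nvidia/tensorrt:23.02-py3 \
    /bin/bash
```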
3. Ensure the scripts are executable (the commands in steps 3 through 6 are run inside the Docker container):
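For example, assuming the helper scripts live under ./scripts/ in the samples directory:

```shell
# Mark all helper scripts in the samples tree executable
# (the ./scripts/ path is an assumption about the samples layout).
chmod +x ./scripts/*.sh
```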
4. Install sample dependencies:
Run the commands in ./scripts/install_dependencies.sh step by step,
with the following modifications:
a) Pin the pip3 package versions.
In the script, replace line 55 with the following:
b) Fix the exported PATH: in the echoed line, replace export PATH=$PATH: with export PATH=\${PATH}: so that PATH is not expanded at install time.
c) Edit setup.py for torchnvjpeg to pass -std=c++17 as the compiler flag, for compatibility with PyTorch.
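The b) fix can be scripted. The sed below demonstrates the substitution on a stand-in file (the real target is ./scripts/install_dependencies.sh, and the echoed path shown here is made up). The a) and c) edits depend on the exact release contents, so they are not scripted here.

```shell
# Demonstrate the PATH fix: escape $PATH so the literal ${PATH} is written
# to ~/.bashrc instead of the install-time value of PATH.
printf 'echo "export PATH=$PATH:/usr/local/cuda/bin" >> ~/.bashrc\n' > /tmp/path_fix_demo.sh
sed -i 's/export PATH=\$PATH:/export PATH=\\${PATH}:/' /tmp/path_fix_demo.sh
cat /tmp/path_fix_demo.sh
# echo "export PATH=\${PATH}:/usr/local/cuda/bin" >> ~/.bashrc
```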
5. Install the CV-CUDA packages in the docker.
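A sketch of the package install, assuming the Debian packages and Python wheel downloaded from the release page (the glob patterns are placeholders; match them to the file names of your release and CUDA version):

```shell
# Install the CV-CUDA C/C++ packages and the Python bindings
# (file name patterns are placeholders for the actual release assets).
apt-get install -y ./nvcv-lib-*cuda12*.deb ./nvcv-dev-*cuda12*.deb
pip3 install ./cvcuda_cu12-*.whl
```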
6. Build and run the samples:
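A sketch, assuming the samples ship build and run helpers alongside the install script (script names are assumptions; adjust to your release):

```shell
# Build all samples, then run them end to end
# (script names assumed from the samples' scripts/ directory).
./scripts/build_samples.sh
./scripts/run_samples.sh
```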
7. Check the sample run results in the local terminal outside the docker:
Run segmentation triton sample
Run the segmentation-on-Triton example in a Triton server Docker container, with the corrections below.
Set up triton server
1. Find the tritonserver release Docker image that matches our devbox setup (release notes, tags).
I'll go with the 23.02-py3 image.
Install the libraries first, as in the previous section.
2. Start the Triton server container with the CV-CUDA directory mounted:
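A sketch of the launch (the host mount path is a placeholder; ports 8000/8001/8002 are Triton's standard HTTP, gRPC, and metrics ports):

```shell
# Start the Triton server container with GPU access, the standard Triton
# ports exposed, and the CV-CUDA tree mounted (host path is a placeholder).
docker run -it --gpus all \
    -p 8000:8000 -p 8001:8001 -p 8002:8002 \
    -v /path/to/cvcuda:/workspace/cvcuda \
    nvcr.io/nvidia/tritonserver:23.02-py3 \
    /bin/bash
```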
3. Install dependencies on the triton server docker:
The code to be copied:
Install dependencies:
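Concretely, this can be the same dependency script from the previous section (with the same modifications applied), rerun inside the server container:

```shell
# Inside the Triton server container: rerun the samples' (modified)
# dependency script to install the same set of libraries.
./scripts/install_dependencies.sh
```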
4. Install a cmake that meets the version requirement into /usr/local/bin, which precedes /usr/bin (where the existing cmake lives) in the system PATH.
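A sketch using the Kitware binary installer (the version number is an assumption; pick one that meets the samples' minimum requirement):

```shell
# Install CMake under /usr/local so it shadows the distro cmake in /usr/bin.
wget https://github.com/Kitware/CMake/releases/download/v3.24.4/cmake-3.24.4-linux-x86_64.sh
sh cmake-3.24.4-linux-x86_64.sh --prefix=/usr/local --skip-license --exclude-subdir
cmake --version   # should now report the newly installed version
```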
5. Build the samples
6. Start the triton server:
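A sketch of the server launch (the model repository path is an assumption; point it at the repository the segmentation sample produces):

```shell
# Launch Triton against the sample's model repository
# (path is a placeholder for the repository built by the sample).
tritonserver --model-repository=/workspace/cvcuda/models
```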
Set up triton client
1. Start a Docker container to run the triton client:
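A sketch of the client container launch, using the matching -sdk image and the host network so the client can reach the server's HTTP (8000) and gRPC (8001) endpoints (the host image path is a placeholder):

```shell
# Client container: the *-sdk image ships the Triton client libraries.
docker run -it --gpus all --net=host \
    -v /path/to/images:/workspace/images \
    nvcr.io/nvidia/tritonserver:23.02-py3-sdk \
    /bin/bash
```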
2. Install dependencies in the client container:
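At minimum this includes the Triton client library; Pillow is an assumption about what the sample client needs for image I/O:

```shell
# tritonclient[all] pulls in both the HTTP and gRPC client stacks.
pip3 install 'tritonclient[all]' pillow
```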
3. Run segmentation on a folder containing images: