
Deploy the Efficient ViT Segmentation Models¶
Segmentation in image processing¶
Segmentation in the context of machine learning refers to dividing or partitioning data into multiple segments or groups based on shared characteristics. The concept applies across many fields, including image processing, market analysis, and natural language processing, and its primary goal is to simplify or change the representation of data so that it is more meaningful and easier to analyze. In image processing specifically, segmentation divides a digital image into multiple segments (sets of pixels, also known as image objects), organizing the pixels into regions that are more homogeneous than the image as a whole.
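As a toy illustration of this idea (not part of the EfficientViT pipeline), a segmentation can be represented as a per-pixel label map alongside the image:
import numpy as np

# a tiny 4x4 grayscale "image" with a bright object on a dark background
image = np.array([[ 10,  12,  11, 200],
                  [  9,  13, 210, 205],
                  [ 11, 198, 202, 199],
                  [ 12,  10,  11,  13]], dtype=np.uint8)

# a segmentation mask assigns every pixel to a segment (here 0 = background, 1 = object)
mask = (image > 100).astype(np.uint8)
print(mask)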
Efficient ViT¶
EfficientViT is a new family of ViT models for efficient high-resolution dense prediction vision tasks. The core building block of EfficientViT is a lightweight, multi-scale linear attention module that achieves a global receptive field and multi-scale learning with only hardware-efficient operations, making EfficientViT TensorRT-friendly and well suited for GPU deployment. In this notebook we demonstrate how to build each model's engine files and compare the models in a unified Gradio interface!
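To give a rough sense of why linear attention scales well, the sketch below shows the general ReLU linear attention idea in plain PyTorch. It is an illustrative simplification, not the actual EfficientViT module (which additionally aggregates multi-scale tokens).
import torch

def relu_linear_attention(q, k, v):
    # q, k, v: (batch, heads, tokens, dim); ReLU acts as the kernel feature map
    q, k = torch.relu(q), torch.relu(k)
    # use associativity: q @ (k^T @ v) costs O(tokens * dim^2) instead of O(tokens^2 * dim)
    kv = torch.einsum("bhnd,bhne->bhde", k, v)
    out = torch.einsum("bhnd,bhde->bhne", q, kv)
    # normalize each query by its total attention weight
    z = torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)).unsqueeze(-1)
    return out / (z + 1e-6)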
Credits¶
EfficientViT was created by the MIT HAN Lab. Deployment of this notebook is powered by Brev.dev.
Getting Started¶
We start by installing the recommended dependencies from the repository README.
# fetch the dependency list from the EfficientViT repository and install it
!wget https://raw.githubusercontent.com/mit-han-lab/efficientvit/master/requirements.txt
# if this pip package is unavailable, OpenMPI may need to be installed via the system package manager instead
!pip install openmpi
!pip install -r requirements.txt
# install torch2trt (the Python package plus its plugin library)
!git clone https://github.com/NVIDIA-AI-IOT/torch2trt
!cd torch2trt && python setup.py install
!cd torch2trt && cmake -B build . && cmake --build build --target install && ldconfig
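As an optional sanity check (not part of the original setup, and assuming the installs above succeeded), we can confirm that TensorRT and torch2trt import correctly before building any engines:
import tensorrt as trt
import torch2trt

print("TensorRT version:", trt.__version__)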
Download model checkpoints¶
We pull each model checkpoint from the Hugging Face repo and save it in the assets folder. Since we have access to an A100 and TensorRT, we will use the ONNX-formatted models and convert them to TRT engines; the same process can also be done with the PyTorch models.
from huggingface_hub import snapshot_download
snapshot_download("mit-han-lab/efficientvit-sam", local_dir="assets/checkpoints")
# create a folder to store the exported TensorRT engines
!mkdir -p assets/export_models/sam/tensorrt
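As a quick check (assuming the snapshot lays out the ONNX exports under assets/checkpoints/onnx, as used below), we can list the downloaded files before building any engines:
!ls assets/checkpoints/onnx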
Here we build each model's encoder and decoder engines for the different segmentation modes (point, box, and full-image prompts). Notice that the L0, L1, and L2 models can ingest resolutions up to 512x512, while the XL0 and XL1 models can ingest resolutions up to 1024x1024.
As a reminder, a TRT engine is essentially an optimized version of the model that is built to run on the current hardware. In our case they're optimized to run on an A100-40GB!
print("Creating L0 TensorRT encoder with side length 512")
!trtexec --onnx=assets/checkpoints/onnx/l0_encoder.onnx \
--minShapes=input_image:1x3x512x512 \
--optShapes=input_image:1x3x512x512 \
--maxShapes=input_image:4x3x512x512 \
--saveEngine=assets/export_models/sam/tensorrt/l0_encoder.engine
print("Creating L0 TensorRT point decoder")
!trtexec --onnx=assets/checkpoints/onnx/l0_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:1x16x2,point_labels:1x16 \
--maxShapes=point_coords:1x16x2,point_labels:1x16 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l0_point_decoder.engine
print("Creating L0 TensorRT box decoder")
!trtexec --onnx=assets/checkpoints/onnx/l0_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:16x2x2,point_labels:16x2 \
--maxShapes=point_coords:16x2x2,point_labels:16x2 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l0_box_decoder.engine
print("Creating L0 TensorRT full image segmentation decoder")
!trtexec --onnx=assets/checkpoints/onnx/l0_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:64x1x2,point_labels:64x1 \
--maxShapes=point_coords:128x1x2,point_labels:128x1 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l0_full_img_decoder.engine
print("Creating L1 TensorRT encoder with side length 512")
!trtexec --onnx=assets/checkpoints/onnx/l1_encoder.onnx \
--minShapes=input_image:1x3x512x512 \
--optShapes=input_image:1x3x512x512 \
--maxShapes=input_image:4x3x512x512 \
--saveEngine=assets/export_models/sam/tensorrt/l1_encoder.engine
print("Creating L1 TensorRT point decoder")
!trtexec --onnx=assets/checkpoints/onnx/l1_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:1x16x2,point_labels:1x16 \
--maxShapes=point_coords:1x16x2,point_labels:1x16 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l1_point_decoder.engine
print("Creating L1 TensorRT box decoder")
!trtexec --onnx=assets/checkpoints/onnx/l1_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:16x2x2,point_labels:16x2 \
--maxShapes=point_coords:16x2x2,point_labels:16x2 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l1_box_decoder.engine
print("Creating L1 TensorRT full image segmentation decoder")
!trtexec --onnx=assets/checkpoints/onnx/l1_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:64x1x2,point_labels:64x1 \
--maxShapes=point_coords:128x1x2,point_labels:128x1 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l1_full_img_decoder.engine
print("Creating L2 TensorRT encoder with side length 512")
!trtexec --onnx=assets/checkpoints/onnx/l2_encoder.onnx \
--minShapes=input_image:1x3x512x512 \
--optShapes=input_image:1x3x512x512 \
--maxShapes=input_image:4x3x512x512 \
--saveEngine=assets/export_models/sam/tensorrt/l2_encoder.engine
print("Creating L2 TensorRT point decoder")
!trtexec --onnx=assets/checkpoints/onnx/l2_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:1x16x2,point_labels:1x16 \
--maxShapes=point_coords:1x16x2,point_labels:1x16 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l2_point_decoder.engine
print("Creating L2 TensorRT box decoder")
!trtexec --onnx=assets/checkpoints/onnx/l2_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:16x2x2,point_labels:16x2 \
--maxShapes=point_coords:16x2x2,point_labels:16x2 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l2_box_decoder.engine
print("Creating L2 TensorRT full image segmentation decoder")
!trtexec --onnx=assets/checkpoints/onnx/l2_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:64x1x2,point_labels:64x1 \
--maxShapes=point_coords:128x1x2,point_labels:128x1 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/l2_full_img_decoder.engine
print("Creating XL0 TensorRT encoder with side length 1024")
!trtexec --onnx=assets/checkpoints/onnx/xl0_encoder.onnx \
--minShapes=input_image:1x3x1024x1024 \
--optShapes=input_image:1x3x1024x1024 \
--maxShapes=input_image:4x3x1024x1024 \
--saveEngine=assets/export_models/sam/tensorrt/xl0_encoder.engine
print("Creating XL0 TensorRT point decoder")
!trtexec --onnx=assets/checkpoints/onnx/xl0_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:1x16x2,point_labels:1x16 \
--maxShapes=point_coords:1x16x2,point_labels:1x16 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/xl0_point_decoder.engine
print("Creating XL0 TensorRT box decoder")
!trtexec --onnx=assets/checkpoints/onnx/xl0_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:16x2x2,point_labels:16x2 \
--maxShapes=point_coords:16x2x2,point_labels:16x2 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/xl0_box_decoder.engine
print("Creating XL0 TensorRT full image segmentation decoder")
!trtexec --onnx=assets/checkpoints/onnx/xl0_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:64x1x2,point_labels:64x1 \
--maxShapes=point_coords:128x1x2,point_labels:128x1 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/xl0_full_img_decoder.engine
print("Creating XL1 TensorRT encoder with side length 1024")
!trtexec --onnx=assets/checkpoints/onnx/xl1_encoder.onnx \
--minShapes=input_image:1x3x1024x1024 \
--optShapes=input_image:1x3x1024x1024 \
--maxShapes=input_image:4x3x1024x1024 \
--saveEngine=assets/export_models/sam/tensorrt/xl1_encoder.engine
print("Creating XL1 TensorRT point decoder")
!trtexec --onnx=assets/checkpoints/onnx/xl1_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:1x16x2,point_labels:1x16 \
--maxShapes=point_coords:1x16x2,point_labels:1x16 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/xl1_point_decoder.engine
print("Creating XL1 TensorRT box decoder")
!trtexec --onnx=assets/checkpoints/onnx/xl1_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:16x2x2,point_labels:16x2 \
--maxShapes=point_coords:16x2x2,point_labels:16x2 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/xl1_box_decoder.engine
print("Creating XL1 TensorRT full image segmentation decoder")
!trtexec --onnx=assets/checkpoints/onnx/xl1_decoder.onnx \
--minShapes=point_coords:1x1x2,point_labels:1x1 \
--optShapes=point_coords:64x1x2,point_labels:64x1 \
--maxShapes=point_coords:128x1x2,point_labels:128x1 \
--fp16 \
--saveEngine=assets/export_models/sam/tensorrt/xl1_full_img_decoder.engine
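As an optional verification step (a sketch, not part of the original notebook), one of the freshly built engines can be deserialized with the TensorRT Python API to confirm it loads on this GPU:
import tensorrt as trt

# try to deserialize the L0 encoder engine built above
logger = trt.Logger(trt.Logger.WARNING)
with open("assets/export_models/sam/tensorrt/l0_encoder.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

print("Engine loaded:", engine is not None)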
Build the Gradio web server to host each model¶
Now that we have the TRT engine files, we can leverage the EfficientViT inference code and launch our own Gradio server to run segmentation.
# solves a cv2 import bug
!pip install opencv-python-headless
# clone the EfficientViT repo and move the exported engines into its assets folder
!git clone https://github.com/mit-han-lab/efficientvit.git
!mv assets/export_models/ efficientvit/assets
# confirm which TensorRT build is installed
!pip show tensorrt
# launch the Gradio demo with the TensorRT runtime (adjust the path to wherever the repo was cloned)
!cd /root/verb-workspace/efficientvit && python -m demo.sam.gradio_web_server --runtime tensorrt
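Once the server starts, Gradio prints a local URL (port 7860 by default). If you are running on a remote GPU instance, you may need to forward or expose that port to open the interface in your browser.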