pytorch-quantization: A fake package that warns the user they are not installing the correct package.
tensorrt-llm: TensorRT-LLM, a TensorRT toolbox for large language models.
polygraphy-trtexec: Polygraphy extension to run on the trtexec backend.
tritonclient: Python client library and utilities for communicating with Triton Inference Server.
tensorrt-cu11: A high-performance deep learning inference library (CUDA 11 build).
torch-tensorrt: A package that automatically compiles PyTorch and TorchScript modules to TensorRT while remaining in PyTorch.
tensorrt-dispatch-cu11-libs: TensorRT libraries (dispatch runtime, CUDA 11 build).
tensorrt-cu12: A high-performance deep learning inference library (CUDA 12 build).
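Since the list above includes both CUDA-11 and CUDA-12 builds of the TensorRT wheel, a script may want to pick the one matching the locally installed toolkit. The sketch below is a minimal, hypothetical helper (the function name and the `nvcc --version` parsing approach are assumptions, not part of any NVIDIA tooling); it uses only the standard library and falls back to None when CUDA is not found.

```python
import re
import shutil
import subprocess


def tensorrt_wheel_for_cuda():
    """Guess which TensorRT wheel matches the local CUDA toolkit.

    Returns 'tensorrt-cu12' or 'tensorrt-cu11', or None when no usable
    CUDA toolkit is detected. This is an illustrative sketch, not an
    official NVIDIA selection mechanism.
    """
    nvcc = shutil.which("nvcc")  # locate the CUDA compiler on PATH
    if nvcc is None:
        return None
    out = subprocess.run([nvcc, "--version"],
                         capture_output=True, text=True).stdout
    # nvcc prints a line like: "Cuda compilation tools, release 12.4, V12.4.99"
    m = re.search(r"release (\d+)\.", out)
    if not m:
        return None
    major = int(m.group(1))
    if major >= 12:
        return "tensorrt-cu12"
    if major == 11:
        return "tensorrt-cu11"
    return None  # CUDA older than 11: neither listed wheel applies
```

A caller could then do `pip install` with the returned name, or install the plain `tensorrt` meta-wheel and let pip resolve the CUDA-specific dependency.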