News

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1000.00 MiB. GPU 0 has a total capacity of 23.46 GiB of which 406.88 MiB is free. Including non-PyTorch memory, this process has 23.06 ...
Co-authored with @jikunshang. Motivation: This RFC aims to reuse GPUWorker and GPUModelRunner for any GPGPU device, such as CUDA, ROCm, and Intel GPU (aka XPU). By doing so, we can remove redunda ...