PPoPP 2026
Sat 31 January - Wed 4 February 2026 Sydney, Australia
co-located with HPCA/CGO/PPoPP/CC 2026
Tue 3 Feb 2026 11:50 - 12:10 at Balmoral - Mixed Precision and Quantization Chair(s): Dingwen Tao

While Large Language Models (LLMs) are widely adopted, their massive parameter counts constrain practical deployment. A common solution is clustering-based non-uniform quantization, which compresses models to as few as 3 bits per weight while preserving high accuracy. However, instead of accelerating memory-bound LLM inference, the memory reduction often causes a paradoxical slowdown due to dequantization overhead and GPU underutilization.
To address this issue, we propose Quantix, a framework designed to convert memory savings into inference speedups. Quantix applies two key optimizations: (1) a hardware-aligned bit shuffling scheme for efficient data access, and (2) a fused dequantization-multiplication pipeline that efficiently maps workloads onto both CUDA cores and Tensor Cores. Quantix enables high-throughput batched inference, delivering average kernel-level speedups of 4.82× over FP16 cuBLAS and end-to-end speedups of up to 11.46× over state-of-the-art quantization methods on NVIDIA L40 GPUs.
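To illustrate the general idea of fusing codebook dequantization into the matrix product, below is a minimal CUDA sketch; it is not the Quantix kernel. The one-code-per-byte storage, the group size, and all names are illustrative assumptions; the actual design packs the 3-bit codes for hardware-aligned access and maps the fused pipeline onto both CUDA cores and Tensor Cores for batched inference.

#include <cuda_fp16.h>

#define GROUP_SIZE 128   // weights sharing one 8-entry codebook (assumed group size)

// y[row] = sum_k dequant(W[row][k]) * x[k], one thread per output row.
// codes:    one 3-bit codebook index per weight (stored one per byte here for clarity)
// codebook: 8 FP16 centroids per group of GROUP_SIZE weights
__global__ void fused_dequant_gemv(const unsigned char* __restrict__ codes,
                                   const __half* __restrict__ codebook,
                                   const __half* __restrict__ x,
                                   float* __restrict__ y,
                                   int rows, int cols) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= rows) return;

    float acc = 0.0f;
    for (int k = 0; k < cols; ++k) {
        // Dequantize on the fly: 3-bit code -> centroid lookup; the FP16 weight
        // matrix is never materialized in global memory.
        int idx = row * cols + k;
        __half w = codebook[(idx / GROUP_SIZE) * 8 + (codes[idx] & 0x7)];
        acc += __half2float(w) * __half2float(x[k]);
    }
    y[row] = acc;
}

// Launch example (hypothetical sizes):
// fused_dequant_gemv<<<(rows + 255) / 256, 256>>>(codes, codebook, x, y, rows, cols);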

Tue 3 Feb

Displayed time zone: Hobart

11:30 - 12:50
Mixed Precision and Quantization (Main Conference) at Balmoral
Chair(s): Dingwen Tao Institute of Computing Technology, Chinese Academy of Sciences
11:30
20m
Talk
RoMeo: Mitigating Dual-dimensional Outliers with Rotated Mixed Precision Quantization
Main Conference
Qihao Zhang Tsinghua University, MingLiang Tang Tsinghua University, Mingshu Zhai Tsinghua University, Kinman Lei Tsinghua University, Jidong Zhai Tsinghua University
DOI
11:50
20m
Talk
High-Throughput Non-Uniformly Quantized 3-bit LLM Inference
Main Conference
YuAng Chen Chinese University of Hong Kong, Wenqi Zeng Hong Kong University of Science and Technology, Jeffrey Xu Yu Chinese University of Hong Kong
DOI
12:10
20m
Talk
JanusQuant: Accurate and Efficient 2-bit KV Cache Quantization for Long-Context Inference
Main Conference
Chengyu Sun Wuhan University, Yaqi Xia Wuhan University, Hulin Wang, Donglin Yang Nvidia Corporation, Xiaobo Zhou University of Macau, Dazhao Cheng Wuhan University
DOI
12:30
20m
Talk
HierCut: Enabling 16-bit Format Mixed Precision for Molecular Dynamics through Hierarchical Cutoff (Best Artifact Award)
Main Conference
Zeyu Song Tsinghua University, Lin Gan Tsinghua University, Xiaohui Duan Shandong University, Jiayu Fu Tsinghua University, Zhengrui Li Tsinghua University, Yinuo Wang Tsinghua University, Guangzhao Li Chinese Academy of Sciences, Guangwen Yang Tsinghua University
DOI