The Chinese University of Hong Kong, Shenzhen, China
Starred repositories
A high-performance LLM inference API and Chat UI that integrates DeepSeek R1's CoT reasoning traces with Anthropic Claude models.
EfficientGS: Streamlining Gaussian Splatting for Large-Scale High-Resolution Scene Representation
The official code for "Efficient Neural Radiance Fields" in CVPR 2022.
Library for training machine learning models with privacy guarantees for the training data.
Open a terminal in Colab, including on the free tier.
[NeurIPS'24] "Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration"
Benchmarking membership inference attacks (MIAs) against LLMs.
Recommends new arXiv papers matching your interests daily, based on your Zotero library.
Source code for "Document-level Membership Inference for Large Language Models" - see https://arxiv.org/pdf/2310.15007
[ICML 2024] DPZero: Private Fine-Tuning of Language Models without Backpropagation
A new markup-based typesetting system that is powerful and easy to learn.
A tool for running automated static analysis on code posted to a Review Board instance.
Source code of NAACL 2025 Findings "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models"
Post-Training Quantization for Vision Transformers.
Detecting Training Data of Large Language Models via Expectation Maximization
Python package for measuring memorization in LLMs.
Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate"
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119
Encrypted chat client based on Signal's Double-Ratchet Algorithm
[ICLR 2025 Spotlight] The official implementation of the paper "LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models"
A library for researching neural networks compression and acceleration methods.