Deadline extended to Friday, October 10, 2025
COGS 2025 will consist of two competition tracks, with each track targeting one part of the overall 3D Gaussian splatting pipeline.
Participants are welcome to submit to one or both of these tracks.
Submissions will be evaluated on an NVIDIA H100 SXM GPU with 80 GB of memory. You can find instructions on how to submit your code here.
If you have any questions, please email us at cogs2025-admins AT googlegroups DOT com.
If you email us, please allow 1-2 business days for a reply before contacting us again.
Re-implement FlashGS’s rasterizer (the flash_gaussian_splatting CUDA extension) so that every existing FlashGS script runs unchanged, but:
• renders faster, and
• preserves visual quality (frame-level + temporal).
Baseline library (CUDA Python): FlashGS GitHub
Pre-trained Gaussian point clouds (static scenes & avatars, 14 GB): Pre-trained Models zip on the original 3DGS repo
Ground-truth frames + camera paths for every scene: Inside each scene folder (test_cameras.json, reference/)
Datasets that produced those Gaussians: Mip-NeRF 360 · Tanks & Temples (Truck / Train) · Deep Blending (Dr Johnson / Playroom) GitHub
Metric script (PSNR, SSIM, LPIPS): metrics.py from the 3DGS repo
You can modify any files that you wish inside the FlashGS repo, as long as the requirements outlined in this section are met.
The baseline repo exposes a compiled module named flash_gaussian_splatting that is built from the sources in csrc/ via setup.py. Your submission must keep exactly that layout and module name:
submission.zip
├─ csrc/
│ ├─ cuda_rasterizer/ # your .cu kernels
│ └─ pybind.cpp # pybind11 interface (or Triton launcher)
├─ example.py # same CLI as upstream
├─ setup.py # builds flash_gaussian_splatting
├─ requirements.txt # extra wheels (≤ 5)
└─ README.md # build + usage, ≤ 200 lines
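For reference, a minimal setup.py for such an extension might look like the sketch below (built on torch.utils.cpp_extension; the kernel file name under csrc/cuda_rasterizer/ is illustrative, the upstream source list is authoritative):
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

setup(
    name="flash_gaussian_splatting",
    ext_modules=[
        CUDAExtension(
            name="flash_gaussian_splatting",           # module name must stay exactly this
            sources=[
                "csrc/pybind.cpp",                      # pybind11 interface
                "csrc/cuda_rasterizer/rasterizer.cu",   # illustrative kernel file name
            ],
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)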
IMPORTANT: any script that currently runs:
import flash_gaussian_splatting as fgs
rgb = fgs.render(points, view_matrix, width, height)
must still run after we install your package.
Inside your README.md file, you must provide:
1. CUDA & driver versions used.
2. Build command (e.g., python setup.py install).
3. Extra env-vars (e.g., TORCH_CUDA_ARCH_LIST).
4. How to reproduce your speed claim on example.py.
5. Known quirks or precision switches (≤ 200 lines total).
# create env
conda create -n flashgs python=3.10 -y
conda activate flashgs
pip install -r requirements.txt # add torch, lpips, etc.
# build your extension
python setup.py install
# baseline speed test
python example.py ../models/drjohnson.ply
# compare FPS to upstream FlashGS (should be higher)
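For a quick local sanity check of your speed claim, a rough timing loop along the following lines can be used; the fgs.render call mirrors the snippet above, and points, view_matrix, width, and height stand for whatever your example.py already prepares:
import time
import flash_gaussian_splatting as fgs

def measure_fps(points, view_matrix, width, height, n_frames=200, warmup=20):
    """Rough FPS estimate; assumes fgs.render returns only once the frame is complete."""
    for _ in range(warmup):                  # discard warm-up frames (caches, lazy init)
        fgs.render(points, view_matrix, width, height)
    start = time.perf_counter()
    for _ in range(n_frames):
        fgs.render(points, view_matrix, width, height)
    elapsed = time.perf_counter() - start
    return n_frames / elapsed                # frames per second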
If you see this message:
import error GLIBCXX_3.4.32 not found
it means the libstdc++ bundled in your conda environment is older than the one your extension was built against. To fix this, run:
conda install -c conda-forge libstdcxx-ng
inside your conda environment.
1. Build
pip install -r requirements.txt # your wheels
python setup.py install # compiles csrc/*
2. Render + time each scene with your example.py clone.
3. Compute metrics (PSNR, SSIM, LPIPS); a self-check sketch follows this list.
4. Score: frame quality (Q_frame) and render latency (latency_ms)
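The official numbers come from metrics.py in the 3DGS repo; for a rough local self-check you can compute per-frame scores with common packages. A sketch, assuming rendered and reference frames are HxWx3 uint8 NumPy arrays (the LPIPS backbone choice here is ours, not necessarily the official one):
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="vgg")            # perceptual metric network

def frame_metrics(pred, gt):
    """Per-frame PSNR / SSIM / LPIPS for HxWx3 uint8 images."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    lp = lpips_fn(to_t(pred), to_t(gt)).item()   # LPIPS expects NCHW tensors in [-1, 1]
    return psnr, ssim, lp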
Deliver smaller Gaussian models (.ply) for static scenes without modifying the FlashGS renderer. The baseline renderer must load your models through a tiny loader shim, render them unmodified, and reach the same image quality.
• The original 3D Gaussian splatting repo & pre-trained .ply files.
• Full raw datasets (in case you want to re-train) as provided in Track 1 above.
You cannot use any external images for training. Any re-training must only use the provided original datasets.
Post-training Compression: Any lossless or lossy method (pruning, quantization, entropy coding, clustering, etc.); a toy pruning sketch follows this list.
Compression-aware Training: Run the official training script with your compression algorithm and save the model at 30k steps (same hyper-parameters).
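As a concrete illustration of the post-training route, here is a minimal pruning sketch using the plyfile package. It assumes the standard 3DGS vertex layout in which 'opacity' is stored as a pre-sigmoid logit; adjust the field name and encoding to whatever your checkpoints actually contain:
import numpy as np
from plyfile import PlyData, PlyElement

def prune_low_opacity(src_ply, dst_ply, min_alpha=0.01):
    """Drop near-transparent Gaussians and rewrite the .ply."""
    ply = PlyData.read(src_ply)
    verts = ply["vertex"].data                         # numpy structured array
    alpha = 1.0 / (1.0 + np.exp(-verts["opacity"]))    # sigmoid: logit -> [0, 1]
    kept = verts[alpha >= min_alpha]
    PlyData([PlyElement.describe(kept, "vertex")], text=False).write(dst_ply)
    print(f"kept {len(kept)} / {len(verts)} Gaussians")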
submission.zip
├─ models/
│ ├─ garden_comp.ply
│ ├─ truck_comp.ply
│ └─ ... (one per scene, same filenames)
├─ loader.py # makes FlashGS understand your format
├─ README.md # how you produced the files, ≤ 200 lines
└─ requirements.txt # extra wheels (≤ 5)
loader.py interface
def load(path: str):
    """Return a flash_gaussian_splatting.GaussianModel-compatible object."""
Our scripts will:
from loader import load
model = load("models/garden_comp.ply") # must succeed
Everything else in FlashGS must remain unchanged.
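One simple way to satisfy this interface is to decode your compressed file back into a standard 3DGS .ply and hand that to the baseline loading path. A sketch; decompress_to_ply is a hypothetical helper you would implement, and the final GaussianModel construction is a placeholder for whatever entry point FlashGS actually exposes:
import os
import tempfile
import flash_gaussian_splatting as fgs

def load(path: str):
    """Return a flash_gaussian_splatting.GaussianModel-compatible object."""
    # 1. Decode the compressed file into a standard 3DGS .ply (hypothetical helper).
    tmp_ply = os.path.join(tempfile.mkdtemp(), "decoded.ply")
    decompress_to_ply(path, tmp_ply)
    # 2. Delegate to the baseline loader; placeholder call, check the FlashGS API.
    return fgs.GaussianModel(tmp_ply)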
1. pip install -r requirements.txt
2. model = loader.load(comp_ply)
3. Render the full camera path with baseline FlashGS.
4. Compute same metrics as Track 1 (PSNR, SSIM, LPIPS).
5. Record the file size of each compressed .ply.
6. Time the render (latency); a local size/latency check is sketched after this list.
7. Score (averaged across all scenes).
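For a local sanity check of items 5 and 6, something along these lines works; it assumes, as in Track 1, that the loaded model can be passed as the points argument of fgs.render, and camera handling is simplified:
import os
import time
import flash_gaussian_splatting as fgs
from loader import load

def evaluate_scene(comp_ply, cameras, width, height):
    """Report compressed file size (MB) and mean per-frame latency (ms)."""
    size_mb = os.path.getsize(comp_ply) / (1024 ** 2)
    model = load(comp_ply)
    times = []
    for view_matrix in cameras:                        # one entry per test camera
        t0 = time.perf_counter()
        fgs.render(model, view_matrix, width, height)  # assumption: model usable as `points`
        times.append((time.perf_counter() - t0) * 1e3)
    return size_mb, sum(times) / len(times)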