Peyman is a Principal Scientist / Director at Google Research, where he leads the Computational Imaging team. Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz from 1999 to 2014, and served as Associate Dean for Research at the School of Engineering from 2010 to 2012. From 2012 to 2014 he was on leave at Google-x, where he helped develop the imaging pipeline for Google Glass.

Most recently, Peyman's team at Google developed the digital zoom pipeline for the Pixel phones, which includes the multi-frame super-resolution (Super Res Zoom) pipeline (blog and project website) and the RAISR upscaling algorithm. In addition, the Night Sight mode on Pixel 3 uses this Super Res Zoom technology to merge frames (whether you zoom or not) for vivid shots in low light, including astrophotography.

Peyman received his undergraduate education in electrical engineering and mathematics from the University of California, Berkeley, and the MS and PhD degrees in electrical engineering from the Massachusetts Institute of Technology. He holds 15 patents, several of which are commercially licensed. He founded MotionDSP, which was acquired by Cubic Inc. (NYSE:CUB).

Peyman has been a keynote speaker at numerous technical conferences, including the Picture Coding Symposium (PCS), SIAM Imaging Sciences, SPIE, and the IEEE International Conference on Multimedia and Expo (ICME). Along with his students, he has won several best paper awards from the IEEE Signal Processing Society.

He is a Distinguished Lecturer of the IEEE Signal Processing Society, and a Fellow of the IEEE "for contributions to inverse problems and super-resolution in imaging."

Recent News (June 2020): We have three papers in CVPR 2020 -- two in the main conference and one in a workshop. Summaries and links to relevant material are below.

"GIFnets: Differentiable GIF Encoding Framework": We introduce (to our knowledge), the first differentiable GIF encoding pipeline. It includes three novel neural networks: PaletteNet, DitherNet, and BandingNet. Each provides an important functionality within the GIF encoding pipeline. PaletteNet predicts a near-optimal color palette given an input image. DitherNet manipulates the input image to reduce color banding artifacts and provides an alternative to traditional dithering. Finally, BandingNet is designed to detect color banding, and provides a new perceptual loss specifically for GIF images.

"Distortion Agnostic Deep Watermarking": We develop a framework for distortion-agnostic watermarking, where the image distortion is not explicitly modeled during training. Instead, the robustness of our system comes from two sources: adversarial training and channel coding. Compared to training on a fixed set of distortions and noise levels, our method achieves comparable or better results on distortions available during training, and better performance overall on unknown distortions.

"LIDIA: Lightweight Learned Image Denoising with Instance Adaptation": We use a combination of supervised and unsupervised training, where the first stage gets a general denoiser and the second does instance adaptation. LIDIA produces near state-of-the-art quality, while having relatively very small number of parameters as compared to the leading methods