Hi! I am a first year PhD student in the Willow team at Inria and École Normale Supérieure in Paris, advised by Shizhe Chen and Cordelia Schmid. I am working in vision-language understanding and generation.

I did my Master's by Research in Computer Science at CVIT, IIIT Hyderabad, advised by C.V. Jawahar and Makarand Tapaswi. My thesis was on situation recognition for holistic video understanding.

Prior to this, I was a Research Assistant in the Computer Vision lab at IIT Gandhinagar, advised by Shanmuganathan Raman. I worked in computational photography, specifically on high dynamic range video reconstruction and on generative modeling for appearance-consistent human pose transfer.

I am interested in holistic video representation learning and its application to both vision-language perception and generation. I aim to build machines that can interact with the real world, possessing multimodal sensing abilities, long-term spatio-temporal reasoning, and the ability to generate natural language.

CV / Google Scholar / Github / LinkedIn

News


September, 2023: Started PhD in the Willow team of Inria Paris.

September, 2022: One paper accepted to NeurIPS 2022! We formulate a new structured framework for dense video understanding and propose a Transformer-based model, VideoWhisperer, that operates on a group of clips and jointly predicts all the salient actions, semantic roles via captioning, and spatio-temporal grounding in a weakly supervised setting.

April, 2022: Two papers accepted to ICPR 2022! The first is the first attempt at generating high-speed high dynamic range videos from low-speed low dynamic range videos; the second is on identity-aware person image generation in novel poses.

August, 2021: Joined IIIT Hyderabad as a full-time MS by Research student at CVIT, advised by Prof. C.V. Jawahar.

May, 2021: One paper accepted to ACL 2021 (Findings)! We propose to recursively prune and retrain a Transformer to find language-dependent submodules, overcoming negative interference in multilingual neural machine translation.

See all news

zeeshan.khan@inria.fr
Office: C-412
Address: 2 Rue Simone IFF, 75012 Paris France

Publications


Grounded Video Situation Recognition

We formulate a new structured framework for dense video understanding and propose a Transformer-based model, VideoWhisperer, that operates on a group of clips and jointly predicts all the salient actions, semantic roles via captioning, and spatio-temporal grounding in a weakly supervised setting.

Zeeshan Khan, C.V. Jawahar, Makarand Tapaswi

In Neural Information Processing Systems (NeurIPS), 2022

Paper / Project Page / Code (Github)

More Parameters No Thanks!

We propose to recursively prune and retrain a Transformer to find language-dependent submodules comprising two types of parameters: (1) shared multilingual parameters and (2) unique language-specific parameters, to overcome negative interference in multilingual neural machine translation.

Zeeshan Khan, Kartheek Akella, Vinay Namboodiri, and C.V. Jawahar

In Findings of the Association for Computational Linguistics (ACL Findings), 2021

Paper / Project Page / Code (Github)

DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video Reconstruction

This is the first attempt at generating high-speed high dynamic range videos from low-speed low dynamic range videos. We use video frame interpolation to recursively generate the high- and low-exposure images missing from the input alternating-exposure frames. The high- and low-exposure frames are then merged at each timestep to produce an HDR video.

Zeeshan Khan, Parth Shettiwar, Mukul Khanna, Shanmuganathan Raman

In International Conference on Pattern Recognition (ICPR), 2022 (ORAL)

Paper / Video

Appearance Consistent Human Pose Transfer via Dynamic Feature Selection

We present a robust deep architecture for appearance-consistent person image generation in novel poses. We incorporate a three-stream network for image, pose, and appearance. Additionally, we use gated convolutions and non-local attention blocks for generating realistic images.

Ashish Tiwari, Zeeshan Khan, Shanmuganathan Raman

In International Conference on Pattern Recognition (ICPR), 2022

Paper

Exploring Pair-Wise NMT for Indian Languages

We address the task of improving pair-wise machine translation for low-resource Indian languages using a filtered back-translation process and subsequent fine-tuning on the limited pair-wise language corpora.

Kartheek Akella, Sai Himal Allu, Sridhar Suresh Ragupathi, Aman Singhal, Zeeshan Khan, Vinay Namboodiri, and C.V. Jawahar

In International Conference on Natural Language Processing (ICON), 2020

Paper

FHDR: HDR Image Reconstruction from a Single LDR Image using Feedback Network

We propose a recurrent feedback CNN for HDR image reconstruction from a single-exposure LDR image, achieving state-of-the-art results on all the HDR benchmarks. We design a novel Dense Feedback Block that uses the hidden states of an RNN to transfer high-level information to low-level features. LDR-to-HDR representations are learned over multiple iterations via feedback loops.

Zeeshan Khan, Mukul Khanna, and Shanmuganathan Raman

In Global Conference on Signal and Information Processing (GlobalSIP), 2019 (ORAL)

Paper / Code (Github)


See all publications