Publications/Reports

VELOCITI: Can Video-Language Models Bind Semantic Concepts Through Time?

We introduce a new benchmark to evaluate video-language models (VLMs), both contrastive and LLM-based. The tasks are designed to evaluate the fine-grained compositional understanding abilities of VLMs. All of the open-source VLMs perform close to random chance; Gemini outperforms them all, yet still trails human performance by a significant gap.

Darshana Saravanan, Darshan Singh, Varun Gupta, Zeeshan Khan, Vineet Gandhi, Makarand Tapaswi

Under submission, 2024

Paper / Project Page / Code (Github)

MICap: A Unified Model for Identity-aware Movie Descriptions

We design a single-stage framework for identity-aware captioning of movie videos. We also propose a new captioning metric, iSPICE, which is sensitive to incorrect identities in captions.

Haran Raajesh, Naveen Reddy Desanur, Zeeshan Khan, Makarand Tapaswi

In Conference on Computer Vision and Pattern Recognition (CVPR), 2024

Paper / Project Page / Code (Github)

Grounded Video Situation Recognition

We formulate a new structured framework for dense video understanding and propose a Transformer-based model, VideoWhisperer, that operates on a group of clips and jointly predicts all the salient actions, their semantic roles via captioning, and their spatio-temporal grounding in a weakly supervised setting.

Zeeshan Khan, C.V. Jawahar, Makarand Tapaswi

In Neural Information Processing Systems (NeurIPS), 2022

Paper / Project Page / Code (Github)

More Parameters? No Thanks!

We propose to recursively prune and retrain a Transformer to find language-dependent submodules that involve two types of parameters, (1) shared multilingual and (2) unique language-dependent parameters, to overcome negative interference in multilingual neural machine translation. A toy sketch of this prune-and-retrain loop follows this entry.

Zeeshan Khan, Kartheek Akella, Vinay Namboodiri, and C.V. Jawahar

In Findings of the Association for Computational Linguistics (ACL Findings), 2021

Paper / Project Page / Code (Github)
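A minimal sketch of the recursive prune-and-retrain idea described above, using simple magnitude pruning on a generic PyTorch model. The masking scheme, pruning fraction, and retraining loop are illustrative assumptions and not the paper's exact procedure; the surviving weights stand in for the shared multilingual submodule.

```python
# Illustrative sketch (not the paper's code): recursive magnitude pruning to
# carve out a shared submodule from a multilingual model.
import torch
import torch.nn as nn

def magnitude_prune_mask(weight: torch.Tensor, prune_frac: float) -> torch.Tensor:
    """Keep the largest-magnitude weights; mark the smallest prune_frac for removal."""
    k = int(weight.numel() * prune_frac)
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight.abs() > threshold

def prune_and_retrain(model: nn.Module, train_step, rounds: int = 3, prune_frac: float = 0.3):
    """Alternate pruning and retraining; weights that survive every round form the
    shared multilingual core, while pruned slots can later hold unique
    language-dependent parameters."""
    masks = {}
    for _ in range(rounds):
        # 1) Prune: zero out the low-magnitude weights of every weight matrix.
        for name, param in model.named_parameters():
            if param.dim() > 1:  # skip biases and norms
                masks[name] = magnitude_prune_mask(param.data, prune_frac)
                param.data *= masks[name]
        # 2) Retrain: update the remaining weights while keeping pruned ones at zero.
        for _ in range(100):  # toy retraining loop; train_step is user-supplied
            train_step(model)
            with torch.no_grad():
                for name, param in model.named_parameters():
                    if name in masks:
                        param.data *= masks[name]
    return masks  # True entries mark the shared submodule
```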

DeepHS-HDRVideo: Deep High Speed High Dynamic Range Video Reconstruction

This is the first attempt at generating high-speed, high dynamic range videos from low-speed, low dynamic range videos. We use video frame interpolation to recursively generate the high- and low-exposure images missing from the input alternating-exposure frames. The high- and low-exposure frames are merged at each time step to produce an HDR video. A schematic sketch of this pipeline follows this entry.

Zeeshan Khan, Parth Shettiwar, Mukul Khanna, Shanmuganathan Raman

In International Conference on Pattern Recognition (ICPR), 2022 (ORAL)

Paper / Video
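A rough sketch of the exposure-filling step described above, assuming hypothetical `interpolate` and `merge_exposures` networks passed in as functions; the real model also applies interpolation recursively to raise the frame rate, which this sketch omits.

```python
# Illustrative sketch (not the paper's implementation): synthesize the missing
# exposure at each time step via frame interpolation, then fuse low/high
# exposures into one HDR frame.
def reconstruct_hdr_video(frames, exposures, interpolate, merge_exposures):
    """frames: list of LDR frames captured with alternating exposures.
    exposures: parallel list of 'low' / 'high' labels.
    interpolate(prev, nxt): hypothetical frame-interpolation network.
    merge_exposures(low, high): hypothetical LDR-to-HDR fusion network."""
    hdr_video = []
    for t in range(1, len(frames) - 1):
        # The exposure missing at time t is interpolated from its neighbours,
        # which were captured with the other exposure setting.
        missing = interpolate(frames[t - 1], frames[t + 1])
        if exposures[t] == "low":
            low, high = frames[t], missing
        else:
            low, high = missing, frames[t]
        hdr_video.append(merge_exposures(low, high))
    return hdr_video
```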

Appearance Consistent Human Pose Transfer via Dynamic Feature Selection

We present a robust deep architecture for appearance-consistent person image generation in novel poses. We incorporate a three-stream network for image, pose, and appearance. Additionally, we use gated convolutions and non-local attention blocks to generate realistic images.

Ashish Tiwari, Zeeshan Khan, Shanmuganathan Raman

In International Conference on Pattern Recognition (ICPR), 2022

Paper

Exploring Pair-Wise NMT for Indian Languages

We address the task of improving pair-wise machine translation for low-resource Indian languages using a filtered back-translation process and subsequent fine-tuning on the limited pair-wise language corpora. A schematic sketch of this process follows this entry.

Kartheek Akella, Sai Himal Allu, Sridhar Suresh Ragupathi, Aman Singhal, Zeeshan Khan, Vinay Namboodiri, and C.V. Jawahar

In International Conference on Natural Language Processing (ICON), 2020

Paper
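A schematic sketch of a filtered back-translation loop in the spirit of the description above. The helpers `translate_tgt_to_src`, `score_pair`, and `fine_tune`, as well as the threshold, are hypothetical placeholders rather than the paper's actual components or settings.

```python
# Illustrative sketch (not the paper's code): filtered back-translation followed
# by fine-tuning on the small true parallel corpus.
def filtered_back_translation(monolingual_tgt, translate_tgt_to_src, score_pair,
                              fine_tune, parallel_corpus, threshold=0.5):
    """Create synthetic source sentences for target-language monolingual text,
    keep only confident pairs, then fine-tune on the limited parallel data."""
    synthetic_pairs = []
    for tgt_sentence in monolingual_tgt:
        src_sentence = translate_tgt_to_src(tgt_sentence)         # back-translate
        if score_pair(src_sentence, tgt_sentence) >= threshold:   # filter noisy pairs
            synthetic_pairs.append((src_sentence, tgt_sentence))
    # Train on the filtered synthetic data, then fine-tune on the true pairs.
    model = fine_tune(pretrain_data=synthetic_pairs, finetune_data=parallel_corpus)
    return model
```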

FHDR: HDR Image Reconstruction from a Single LDR Image using Feedback Network

We propose a recurrent feedback CNN for HDR image reconstruction from a single-exposure LDR image, achieving state-of-the-art results on all HDR benchmarks. We design a novel Dense Feedback Block that uses the hidden states of an RNN to transfer high-level information to the low-level features. LDR-to-HDR representations are learned over multiple iterations via feedback loops. A toy sketch of such a feedback loop follows this entry.

Zeeshan Khan, Mukul Khanna, and Shanmuganathan Raman

In Global Conference on Signal and Information Processing (GlobalSIP), 2019 (ORAL)

Paper / Code (Github)
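A toy PyTorch sketch of an iterative feedback loop in the spirit of the description above: a block reuses its output from the previous iteration as a hidden state and fuses it with the low-level features. The layer sizes, block design, and number of iterations are assumptions for illustration, not the published FHDR architecture.

```python
# Toy sketch (not the published FHDR model): an iterative feedback CNN where
# each iteration's output is fed back and fused with low-level features.
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Fuse low-level features with the hidden state from the previous iteration.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, low_level, hidden):
        x = self.fuse(torch.cat([low_level, hidden], dim=1))
        return self.body(x)  # becomes the hidden state for the next iteration

class IterativeLDR2HDR(nn.Module):
    def __init__(self, channels: int = 64, iterations: int = 4):
        super().__init__()
        self.iterations = iterations
        self.head = nn.Conv2d(3, channels, 3, padding=1)   # low-level features
        self.feedback = FeedbackBlock(channels)
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)   # HDR prediction

    def forward(self, ldr):
        feats = self.head(ldr)
        hidden = torch.zeros_like(feats)
        outputs = []
        for _ in range(self.iterations):           # refinement via feedback loops
            hidden = self.feedback(feats, hidden)  # high-level info flows back
            outputs.append(self.tail(hidden))
        return outputs  # the final element is the last HDR estimate
```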