Vijay Veerabadran

Welcome to my website! My name is Vijay, and I am a Ph.D. student in the Cognitive Science department at UC San Diego, advised by Dr. Virginia de Sa. I work on developing the next generation of robust computer vision models by drawing inspiration from computational principles that underlie primate vision. I also work on developing novel unsupervised representation learning techniques for images and videos.

Prior to starting my Ph.D., I spent a year working at Brown University with Dr. Thomas Serre. In April 2017, I graduated with a bachelor's degree in Computer Science and Engineering from SSN College of Engineering, Chennai, India.

Email  /  Github  /  Resume  /  Google Scholar  /  LinkedIn  /  Twitter


News
  • Summer 2021 - Joined Facebook AI as a Research Intern, advised by Yann LeCun, Yubei Chen and Stephane Deny
  • May 2021 - Poster on "Human susceptibility to subtle adversarial image manipulations with unlimited exposure time" accepted at VSS 2021
  • Feb 2021 - Poster on "Human susceptibility to subtle adversarial image manipulations with unlimited exposure time" accepted at COSYNE 2021
  • Sept 2020 - Joining as a Student Researcher at Google Brain, Mountain View, USA
  • Summer 2020 - Joining as a Research Intern at Google Brain, Mountain View, USA
  • Apr 2020 - Short paper on adversarial distortion for learned video compression accepted at the Workshop and Challenge on Learned Image Compression (CLIC) at CVPR 2020
  • Dec 2019 - Short paper introducing V1Net, a model of horizontal connections, accepted at the SVRHM workshop @ NeurIPS 2019
  • Oct 2019 - My thesis work forms the core of a project awarded a 2019 Kavli Symposium Inspired Proposal award for novel research at the intersection of AI and neuroscience.
  • Summer 2019 - Joining as a Research Intern at Qualcomm AI Research working with Reza Pourreza, Amirhossein Habibian and Taco Cohen.
  • Sept 2018 - Joining Dr. Virginia de Sa's group at UC San Diego as a Ph.D. student in Cognitive Science.
  • Sept 2018 - Our work at the Serre Lab introducing a novel recurrent cell was accepted as a poster at NeurIPS 2018.
  • Aug 2017 - Joined the Serre Lab as a Research Assistant working on Computer Vision.

Research

My recent work centers on developing bio-inspired hierarchical and recurrent neural architectures for computer vision problems. I hypothesize that such brain-inspired architectures behave more like human perception, leading to increasingly human-like artificial vision algorithms.

Human susceptibility to subtle adversarial image manipulations with unlimited exposure time
Vijay Veerabadran, Jonathon Shlens, Michael Mozer, Jascha Sohl-Dickstein, Gamaleldin Elsayed
VSS 2021, COSYNE 2021 (Poster Presentation)
Poster link

In this poster, we present our work on creating subtle adversarial image manipulations that influence human perception under unlimited exposure time.

Learning compact generalizable neural representations supporting perceptual grouping
Vijay Veerabadran, Virginia R. de Sa
Paper link

In this preprint, we introduce (a) V1Net, a bio-inspired recurrent convolutional network with horizontal connections, and (b) MarkedLong, a synthetic perceptual grouping benchmark.

Adversarial Distortion for Learned Video Compression
Vijay Veerabadran, Reza Pourreza, Amirhossein Habibian, Taco S. Cohen
Workshop and Challenge on Learned Image Compression (CLIC) at CVPR 2020 (Poster Presentation)
arXiv

In this paper, we discuss using adversarial distortion to improve the perceptual quality of decoded videos from learned lossy video compression systems under extreme compression.

V1Net: A computational model of long-range horizontal connections
Vijay Veerabadran, Virginia de Sa
Shared Visual Representations in Humans and Machines (Workshop @ NeurIPS 2019) (Poster Presentation)
Paper link

In this paper, we introduce our model of recurrent, nonlinear, long-range horizontal connections and present initial results on integrating it with deep convolutional networks for object boundary detection in natural images.
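
As a rough illustration of this class of models, here is a minimal, hypothetical PyTorch sketch (not the published V1Net code): a convolutional feature map is refined over a few timesteps by gated, large-kernel "horizontal" convolutions that mix information across distant spatial locations. All names and hyperparameters below are illustrative.

import torch
import torch.nn as nn

class RecurrentHorizontalLayer(nn.Module):
    # Toy recurrent layer: a feedforward drive is iteratively refined by
    # large-kernel "horizontal" convolutions over a hidden state.
    def __init__(self, channels, kernel_size=7, timesteps=4):
        super().__init__()
        padding = kernel_size // 2
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size, padding=padding)
        self.horizontal = nn.Conv2d(channels, channels, kernel_size, padding=padding)
        self.timesteps = timesteps

    def forward(self, feedforward):
        state = torch.zeros_like(feedforward)
        for _ in range(self.timesteps):
            # The gate controls how strongly lateral context updates the state.
            g = torch.sigmoid(self.gate(torch.cat([feedforward, state], dim=1)))
            candidate = torch.tanh(self.horizontal(state) + feedforward)
            state = (1 - g) * state + g * candidate
        return state

features = torch.randn(1, 16, 64, 64)             # e.g. an early CNN feature map
refined = RecurrentHorizontalLayer(16)(features)  # same shape: (1, 16, 64, 64)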

Learning long-range spatial dependencies with horizontal gated-recurrent units
Drew Linsley, Junkyung Kim, Vijay Veerabadran, Charlie Windolf, Thomas Serre
NeurIPS 2018 (Poster Presentation), CCN 2018 (Poster Presentation)
arXiv / code

In this paper, we develop a novel recurrent cell, the horizontal gated recurrent unit (hGRU), inspired by long-range horizontal processing of spatial dependencies in the early visual cortex.


Talks
Neocognitron: A neural network model for a mechanism of visual pattern recognition - Kunihiko Fukushima, Sei Miyake, Takayuki Ito Slides

I gave an introductory talk on the Neocognitron, the predecessor to Convolutional Neural Networks, at Sanjoy Dasgupta's seminar - CSE 254: Neurally Inspired Unsupervised Learning.

Generative Adversarial Networks and Their Applications Slides

In this talk that I delivered at Artifacia Inc., I presented an introduction to Generative Adversarial Networks and their applications to several cutting-edge computer vision problems.


Service
Volunteer at ICML 2019
Reviewer for CogSci 2019
Teaching assistant @ UCSD for COGS 118B: Intro to Machine Learning II (Unsupervised learning) (Fall 2018, Fall 2019).
Teaching assistant @ UCSD for COGS 189: EEG-based Brain-computer interfaces (Winter 2019).
Template overfitting on https://jonbarron.info/