Personal Website
About Me
Hi there! Welcome to my website!
My name is Ling-Qi Zhang. I am currently a Theory Fellow at Janelia Research Campus. I am fascinated by how biological systems can solve challenging computational problems with remarkable efficiency. My research adopts a normative framework to study how the brain best achieves the behavioral goals of animals under various biological constraints. At Janelia, I collaborate closely with experimentalists to develop theories and models of animal behavior in naturalistic environments.
I received my Ph.D. from the University of Pennsylvania, working with David Brainard and Alan Stocker on probabilistic models of perception and neural representation. I also hold an M.A. in Statistics from Penn.
Before that, I received my B.E. in Computer Science from Southern University of Science and Technology, which was newly established in 2011 in Shenzhen, China (here is a short story about our university).
See below for some of my projects!
Research
Selected Projects
DYNAMIC EFFICIENT CODING IN THE TILT ILLUSION
[bioRxiv, 2024] [GitHub]
We simultaneously obtained psychophysical and fMRI responses in a tilt illusion experiment, and extracted sensory encoding precision (Fisher information) from the behavioral and neural data. We found that in the absence of a surround, encoding reflects the natural scene statistics of orientation. However, in the presence of an oriented surround, encoding precision is significantly increased for stimuli similar to the surround orientation. We suggest that the tilt illusion naturally emerges from a dynamic coding strategy that efficiently reallocates neural coding resources based on the current stimulus context.
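As a rough illustration (a minimal Python sketch, not the analysis code in the linked repository), the efficient coding principle behind this result allocates the square root of Fisher information in proportion to the stimulus prior; an oriented surround can then be modeled as a re-weighted prior that locally boosts precision:

# Minimal numerical sketch of efficient resource allocation:
# sqrt(J(theta)) is proportional to the prior p(theta) under a fixed budget.
# The cardinal-biased prior below is an illustrative choice, not the fitted prior.
import numpy as np

theta = np.linspace(0.0, np.pi, 180, endpoint=False)   # orientation (radians)
dtheta = theta[1] - theta[0]

def normalize(p):
    # Normalize a density on the orientation grid.
    return p / (p.sum() * dtheta)

def natural_orientation_prior(theta):
    # Illustrative prior with more mass near cardinal orientations (0 and pi/2).
    return normalize(2.0 - np.abs(np.sin(2.0 * theta)))

def allocate_fisher(prior, budget=1.0):
    # Efficient coding under a resource constraint:
    # sqrt(J) proportional to the prior, with its integral fixed at `budget`.
    sqrt_j = budget * prior
    return sqrt_j ** 2

# Without a surround, precision follows natural orientation statistics.
j_iso = allocate_fisher(natural_orientation_prior(theta))

# An oriented surround re-weights the effective prior toward the surround
# orientation, locally boosting coding precision (one simple way to model
# the dynamic reallocation described above).
surround = np.pi / 4
boost = 1.0 + 2.0 * np.exp(-0.5 * ((theta - surround) / 0.1) ** 2)
j_ctx = allocate_fisher(normalize(natural_orientation_prior(theta) * boost))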
OPTIMAL LINEAR MEASUREMENT FOR NATURAL IMAGES
[arXiv, 2024] [GitHub]
The optimal linear measurement of a signal depends on its statistical regularity. Classical techniques, such as PCA and Compressed Sensing (CS), are based on simple statistical models. We introduce a general method for obtaining an optimized set of linear measurements, assuming a Bayesian inverse solution that leverages the prior implicit in a neural network trained to perform denoising (diffusion probabilistic models). We demonstrate that these measurements are distinct from those of PCA and CS, and yield substantially lower squared reconstruction error.
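For intuition, here is a toy Python sketch of the optimization structure; the diffusion-based Bayesian inverse of the paper is replaced by a small jointly trained decoder, and random vectors stand in for image patches (both are simplifications, not the method in the linked repository):

# Toy sketch: optimize a measurement matrix M for reconstruction error.
# The paper inverts the measurements with a diffusion-denoiser prior; here
# that inverse is replaced by a jointly trained decoder for brevity.
import torch

n_dim, n_meas, n_train = 64, 8, 4096
x = torch.randn(n_train, n_dim)                  # stand-in for image patches

M = torch.randn(n_meas, n_dim, requires_grad=True)   # measurement matrix
decoder = torch.nn.Sequential(                        # stand-in Bayesian inverse
    torch.nn.Linear(n_meas, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, n_dim),
)
opt = torch.optim.Adam([M, *decoder.parameters()], lr=1e-3)

for step in range(2000):
    y = x @ M.t()                      # linear measurements y = Mx
    x_hat = decoder(y)                 # approximate inverse / reconstruction
    loss = ((x - x_hat) ** 2).mean()   # squared reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()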
BEHAVIORAL AND NEURAL EFFICIENT CODING OF SPEED
[JNeurosci, 2022] [V-VSS 2021 Poster] [GitHub]
We built an efficient-encoding, Bayesian-decoding model for human speed perception in a psychophysical experiment. The model makes specific predictions regarding the neural encoding characteristics of retinal speed, which we validated by analyzing electrophysiological recordings of MT neurons.
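A minimal sketch of this observer-model idea (illustrative only, using an assumed 1/v slow-speed prior rather than the prior fitted in the paper): efficient encoding amounts to constant Gaussian noise in the prior's cumulative space, and the percept is the posterior mean.

# Minimal efficient-encoding, Bayesian-decoding observer for speed.
import numpy as np

v = np.linspace(0.1, 20.0, 2000)                 # speed grid (deg/s)
dv = v[1] - v[0]

prior = 1.0 / v                                   # assumed slow-speed prior ~ 1/v
prior /= prior.sum() * dv
cdf = np.cumsum(prior) * dv                       # efficient "sensory" axis

def percept(v_true, noise_sd=0.05, rng=np.random.default_rng(0)):
    # Encode: map true speed through the prior CDF, add constant noise.
    m = np.interp(v_true, v, cdf) + rng.normal(0.0, noise_sd)
    # Decode: likelihood of the noisy measurement for each candidate speed.
    like = np.exp(-0.5 * ((m - cdf) / noise_sd) ** 2)
    post = like * prior
    post /= post.sum() * dv
    return np.sum(post * v) * dv                  # posterior-mean estimate

print(percept(5.0))   # biased toward slower speeds, as the prior predicts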
BAYESIAN IMAGE RECONSTRUCTION FROM CONE MOSAIC SIGNAL
[eLife, 2022] [V-VSS 2020 Talk] [GitHub]
We built a Bayesian algorithm that reconstructs images from cone excitations, based on an accurate model of early human vision, in order to understand information loss at the very first step of visual processing. Our framework enables quantitative analysis of retinal mosaic design, visualization, and more traditional ideal-observer analyses.
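Schematically (a simplified sketch, not the published code, using a Gaussian prior and a random stand-in for the render matrix), the reconstruction is a MAP estimate under a linear-plus-noise forward model of the cone excitations:

# Schematic MAP reconstruction: cone signal r ~ A x + noise, recover x under
# an image prior. A Gaussian prior gives a closed-form ridge-like solution;
# the paper uses a richer sparse prior and a realistic optics/mosaic model.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_cones = 256, 128

A = rng.normal(size=(n_cones, n_pix)) / np.sqrt(n_pix)   # stand-in render matrix
x_true = rng.normal(size=n_pix)                           # stand-in image
r = A @ x_true + 0.1 * rng.normal(size=n_cones)           # noisy cone excitations

sigma2, prior_var = 0.01, 1.0
# x_map = argmin ||r - A x||^2 / (2 sigma2) + ||x||^2 / (2 prior_var)
x_map = np.linalg.solve(A.T @ A / sigma2 + np.eye(n_pix) / prior_var,
                        A.T @ r / sigma2)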
VISUAL ORIENTATION ENCODING IN INDIVIDUALS WITH AUTISM
[PLOS Biology, 2021] [Primer] [Data+Code]
We compared the accuracy of visual orientation encoding between neurotypical and ASD groups using an information-theoretic measure. We found that the ASD group starts with an overall lower encoding capacity, which does not improve with performance feedback. The ASD group is also less adaptive to the stimulus statistics than the neurotypical subjects.
PSYCHOPHYSICS WITH DEEP NEURAL NETWORKS
[CCN, 2019] [Nat. Commun., 2022]
We showed that pretrained neural networks, like humans, have internal representations that overrepresent frequent variable values at the expense of certainty for less common values. Furthermore, we demonstrated that optimized readouts of local visual orientation from these networks’ internal representations exhibit orientation biases and geometric illusions similar to those of human subjects. We also developed a theory, based on the learning dynamics of gradient descent, for the origin of efficient codes in neural networks.
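The readout idea can be sketched as follows (assumptions: torchvision's pretrained ResNet-18 stands in for the pretrained network, and synthetic gratings for the stimuli; this is not the code behind the papers):

# Decode local orientation from an intermediate network representation
# with a simple linear readout, which can then be probed for biases.
import torch
import torchvision

net = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
features = torch.nn.Sequential(*list(net.children())[:6])   # truncate mid-network

def grating(theta, size=64, freq=8.0):
    # Oriented sinusoidal grating, replicated to 3 channels.
    xs = torch.linspace(-1, 1, size)
    yy, xx = torch.meshgrid(xs, xs, indexing="ij")
    img = torch.sin(2 * torch.pi * freq * (xx * torch.cos(theta) + yy * torch.sin(theta)))
    return img.expand(3, size, size)

thetas = torch.rand(256) * torch.pi
stims = torch.stack([grating(t) for t in thetas])
with torch.no_grad():
    acts = features(stims).flatten(1)            # internal representation

# Linear readout of orientation (predict sin/cos of 2*theta to handle wrap-around).
readout = torch.nn.Linear(acts.shape[1], 2)
target = torch.stack([torch.sin(2 * thetas), torch.cos(2 * thetas)], dim=1)
opt = torch.optim.Adam(readout.parameters(), lr=1e-3)
for _ in range(500):
    loss = ((readout(acts) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()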
Publications
LQ Zhang, J Mao, GK Aguirre, and AA Stocker. The tilt illusion arises from an efficient reallocation of neural coding resources at the contextual boundary. bioRxiv, 2024.
LQ Zhang, Z Kadkhodaie, EP Simoncelli, and DH Brainard. Optimized linear measurements for inverse problems using diffusion-based image generation. arXiv, 2024.
AS Benjamin, LQ Zhang, C Qiu, AA Stocker, and KP Kording. Efficient neural codes naturally emerge through gradient descent learning. Nature Communications, 2022.
LQ Zhang and AA Stocker. Prior expectations in visual speed perception predict encoding characteristics of neurons in area MT. Journal of Neuroscience, 2022.
LQ Zhang, NP Cottaris, and DH Brainard. An image reconstruction framework for characterizing initial visual encoding. eLife, 2022.
JP Noel†, LQ Zhang†, AA Stocker, and DE Angelaki. Individuals with autism spectrum disorder have altered visual encoding capacity. PLOS Biology, 2021.
AS Benjamin†, C Qiu†, LQ Zhang†, KP Kording, and AA Stocker. Shared visual illusions between humans and artificial neural networks. 2019 Conference on Cognitive Computational Neuroscience.
MAK Peters†, LQ Zhang†, and L Shams. The material-weight illusion is a Bayes-optimal percept under competing density priors. PeerJ, 2018.
(† denotes co-first authorship)
Contact
19700 Helix Dr
Ashburn, VA 20147
lingqiz [at] sas [dot] upenn [dot] edu