Personal Website

About Me

Hi there! Welcome to my website! My name is Ling-Qi Zhang (张凌祺). In October 2023, I joined Janelia Research Campus as a Theory Fellow.

I received my Ph.D. from the University of Pennsylvania, working with David Brainard and Alan Stocker on computational models of perception and the visual system. I also hold an M.A. in Statistics from Penn.

Previously, I received my B.E. in Computer Science from Southern University of Science and Technology, a university newly established in 2011 in Shenzhen, China (here is a short story about our university).

I am generally interested in how biological and artificial systems solve challenging computational problems efficiently, using a combination of theoretical and experimental approaches. See below for some of my projects!


Research

Selected Projects

BEHAVIORAL AND NEURAL EFFICIENT CODING OF SPEED
[JNeurosci, 2022] [V-VSS 2021 Poster] [GitHub]

We built an efficient-encoding, Bayesian-decoding model of human speed perception in a psychophysical experiment. The model makes specific predictions about the neural encoding characteristics of retinal speed, which we validated by analyzing electrophysiological recordings of MT neurons.
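In this framework, percepts arise from Bayesian decoding of noisy measurements under a stimulus prior. As a toy illustration only (not the model from the paper), here is a grid-based posterior-mean estimator with an assumed slow-speed prior p(v) ∝ 1/v and a Gaussian likelihood:

```python
import numpy as np

# Toy sketch: Bayesian decoding of speed on a discrete grid.
v = np.linspace(0.1, 20, 2000)   # speed grid (deg/s), illustrative range
prior = 1.0 / v                  # assumed slow-speed prior, p(v) ∝ 1/v
prior /= prior.sum()

def bayes_estimate(measurement, sigma=2.0):
    """Posterior-mean speed estimate given one noisy measurement."""
    likelihood = np.exp(-0.5 * ((v - measurement) / sigma) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return np.sum(v * posterior)

# The slow-speed prior biases the estimate below the true speed.
print(bayes_estimate(10.0))
```

The bias toward slow speeds that this prior produces is a classic signature of Bayesian models of motion perception.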


BAYESIAN IMAGE RECONSTRUCTION FROM CONE MOSAIC SIGNAL
[eLife, 2022] [V-VSS 2020 Talk] [GitHub]

We built a Bayesian algorithm that reconstructs images from cone excitations, based on an accurate model of human early vision, in order to understand information loss at the very first step of visual processing. Our model enables quantitative analysis of retinal mosaic design, visualization, and the more “traditional” ideal-observer type of analysis.
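At its core, this kind of reconstruction can be viewed as Bayesian inference in a forward model mapping images to photoreceptor responses. The sketch below is a heavily simplified linear-Gaussian analogue (random toy render matrix, isotropic Gaussian prior), not the actual model from the paper, showing how the MAP estimate reduces to a regularized least-squares solve:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_cone = 16, 10                  # toy sizes: image pixels, cone samples
A = rng.normal(size=(n_cone, n_pix))    # toy render matrix: image -> cone excitations
x_true = rng.normal(size=n_pix)
y = A @ x_true + 0.1 * rng.normal(size=n_cone)   # noisy cone excitations

# MAP estimate under a Gaussian prior: minimize ||y - Ax||^2 + lam * ||x||^2,
# whose closed form is (A^T A + lam I) x = A^T y.
lam = 0.5
x_map = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)
```

With fewer cone samples than pixels the problem is underdetermined, and the prior term is what selects a unique reconstruction; the paper's point is that a better (learned) image prior recovers more of the information the mosaic discards.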


VISUAL ORIENTATION ENCODING IN INDIVIDUALS WITH AUTISM
[PLOS Biology, 2021] [Primer] [Data+Code]

We compared the accuracy of visual orientation encoding between neurotypical and ASD groups using an information-theoretic measure. We found that the ASD group starts with an overall lower encoding capacity, which does not improve when performance feedback is provided. The ASD group is also less adaptive to stimulus statistics than neurotypical subjects.
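The link between discrimination performance and encoding accuracy runs through Fisher information: lower thresholds imply higher Fisher information (J(θ) ∝ 1/threshold(θ)²), and the square root of J summed over orientation gives a capacity-like quantity. A hypothetical sketch with made-up thresholds (not data from the study):

```python
import numpy as np

theta = np.linspace(0, np.pi, 180, endpoint=False)   # orientation grid
# Made-up thresholds: low at cardinal orientations (the "oblique effect").
thresholds = 1.0 - 0.5 * np.cos(4 * theta)
sqrt_fisher = 1.0 / thresholds                # sqrt Fisher information, up to a constant
capacity = sqrt_fisher.sum()                  # capacity-like total (arbitrary units)
allocation = sqrt_fisher / capacity           # normalized encoding resource by orientation
```

In this picture, a lower total `capacity` corresponds to coarser encoding overall, while the shape of `allocation` reflects how resources track stimulus statistics.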


PSYCHOPHYSICS WITH DEEP NEURAL NETWORKS
[CCN, 2019] [Nat. Commun., 2022]

We showed that pretrained neural networks, like humans, have internal representations that overrepresent frequent variable values at the expense of certainty for less common values. Furthermore, we demonstrated that optimized readouts of local visual orientation from these networks’ internal representations exhibit orientation biases and geometric illusions similar to those of human subjects. We also developed a theory, based on the learning dynamics of gradient descent, of how efficient codes originate in neural networks.


Publications

(† denotes co-first authorship)


Contact

19700 Helix Dr
Ashburn, VA 20147

lingqiz [at] sas [dot] upenn [dot] edu