[Bmi] Learning receptive field hierarchies
Juyang Weng
weng at cse.msu.edu
Tue Nov 20 15:16:16 EST 2012
On 11/18/12 4:43 PM, Bonny Banerjee wrote:
Dear John,
I have been following your work lately and have been very interested in
your approach, in which you try to answer important questions from
multiple fields instead of staying limited to a single field.
Personally, I think only that kind of approach can lead to a general
theory of brain-mind. Hope to read your book sometime soon.
I am writing to ask your opinion on an issue that has bugged me for
quite some time.
In neural networks, complex receptive field structures (or features) in
higher-layer neurons can be learned from simpler features in lower-layer
neurons in at least two different ways: by the principle of spatial
organization, which follows from the seminal work of Hubel and Wiesel,
and by the principle of linear superposition, which is widely used in
machine learning applications with impressive results.
In the principle of spatial organization, each neuron in the lower layer
receives input from a unique region in space. Two or more neurons might
have some overlap in their inputs, but the overlap is always less than
100%. The physical size of receptive fields increases as we ascend the
hierarchy. A higher-layer feature is learned by forming strong
connections with a subset of neurons in the lower layer; the subset is
determined by the input data.
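A minimal sketch of this principle in code may help (the patch size,
stride, grid sizes, and the Hebbian-style update below are illustrative
assumptions, not taken from any particular model):

    import numpy as np

    # Spatial organization: layer-1 units see distinct 5x5 patches
    # (stride 3, so neighbors overlap by 2 pixels, always < 100%).
    # A layer-2 unit pools a 2x2 block of layer-1 units, so its
    # physical receptive field spans 8x8 pixels, larger than any
    # single layer-1 field.
    rng = np.random.default_rng(0)
    image = rng.random((16, 16))
    patch, stride = 5, 3
    w1 = rng.standard_normal((patch, patch))    # one shared layer-1 feature

    n = (image.shape[0] - patch) // stride + 1  # 4x4 grid of layer-1 units
    layer1 = np.array([[np.sum(image[i*stride:i*stride+patch,
                                     j*stride:j*stride+patch] * w1)
                        for j in range(n)] for i in range(n)])

    # Layer-2 feature: strong connections form to a data-dependent
    # subset of the 2x2 block of layer-1 units beneath it (a crude
    # Hebbian-style update).
    w2 = np.zeros((2, 2))
    block = layer1[:2, :2]
    winners = block >= np.median(block)         # subset picked by the data
    w2[winners] += 0.1 * block[winners]
    print(float(np.sum(w2 * block)))            # layer-2 response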
In the principle of linear superposition, all neurons in the lower layer
receive input from the same region in space. Therefore, all neurons
always have 100% overlap in their inputs. The physical size of receptive
fields remains constant throughout the hierarchy. However, the
functional receptive field size increases and the resolution decreases
as we ascend the hierarchy. That is, higher-layer neurons are less
sensitive to small spatial structures because they encode a large space
in a small field. As in the case of spatial organization, a higher-layer
feature is learned by forming strong connections with a subset of
neurons in the lower layer; the subset is determined by the input data.
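Again a minimal sketch, mirroring the caricature in the figure: several
center-surround units over the same patch superpose into a simple-cell-
like feature. The difference-of-Gaussians parameterization and the
three collinear centers are illustrative assumptions:

    import numpy as np

    # Linear superposition: every lower-layer unit filters the SAME
    # 9x9 patch (100% overlap), and a higher-layer feature is a
    # weighted sum of those units. Its physical field stays 9x9, but
    # functionally it responds to a larger, coarser oriented structure.
    def dog(size, cx, cy, s1=1.0, s2=2.0):
        # Difference-of-Gaussians: a caricature of a center-surround field.
        y, x = np.mgrid[0:size, 0:size]
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        return np.exp(-d2 / (2 * s1**2)) - 0.5 * np.exp(-d2 / (2 * s2**2))

    size = 9
    centers = [(2, 4), (4, 4), (6, 4)]          # a row of centers
    bases = np.stack([dog(size, cx, cy) for cx, cy in centers])

    coeffs = np.array([1.0, 1.0, 1.0])          # learned from data in practice
    simple_cell = np.tensordot(coeffs, bases, axes=1)   # oriented, still 9x9

    patch = np.random.default_rng(1).random((size, size))
    print(float(np.sum(simple_cell * patch)))   # higher-layer response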
The attached figure illustrates the two principles using a caricature of
center-surround receptive fields in the lower layer and a simple cell
receptive field in the higher layer. Which of the two principles do you
think is employed by the brain? And why?
Would really appreciate your response.
Best regards,
Bonny
---
Bonny Banerjee, Ph.D.
Assistant Professor
Institute for Intelligent Systems, and Electrical & Computer Engineering
The University of Memphis
208B Engineering Science Bldg
Ph: 1-901-678-4498
Fax: 1-901-678-5469
Web: http://sites.google.com/site/bonnybanerjee1/
On 11/19/12 9:21 PM, Juyang Weng wrote:
> Dear Bonny,
>
> I had the same questions as yours over 20 years ago when we did
> Cresceptron. Both principles are based on a cascade idea, which
> Cresceptron used.
>
> I think that the cascade idea (or deep-learning idea) is largely
> superficial, secondary, and incorrect.
>
> The first and primary mechanism in the brain seems to be direct
> pattern matching. That is, shallow match first. Since this idea is
> hard to publish without solid neuroscience backing, I expressed this
> controversial idea in the following article: A Theoretical Proof
> Bridged the Two AI Schools but a Major AI Journal Desk-Rejected It
> <http://www.brain-mind-magazine.org/read.php?file=BMM-V1-N2-paper4-AI.pdf#view>
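> To be concrete, "shallow match first" could be read as plain
> nearest-neighbor matching of whole patterns. This is a purely
> illustrative sketch of that reading, not a claim about brain
> circuitry:
>
> import numpy as np
>
> # Shallow, direct pattern match: compare the whole input against
> # stored whole patterns in one step, with no feature hierarchy.
> rng = np.random.default_rng(0)
> memory = rng.random((100, 16 * 16))      # stored whole patterns
> labels = rng.integers(0, 10, size=100)   # their associated outputs
>
> def shallow_match(x):
>     dists = np.linalg.norm(memory - x.ravel(), axis=1)
>     return labels[np.argmin(dists)]
>
> print(shallow_match(rng.random((16, 16))))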
>
>
> Do you mind if I post your email to the BMI mailing list so that more
> people can benefit from such discussions? Such views are very
> difficult to publish in peer-reviewed venues, since our respected
> peer reviewers will reject them.
>
> Best,
>
> -John
On 11/20/12 8:55 AM, Bonny Banerjee wrote:
> John,
> Thanks for your response.
> Note that the question I asked (along with the figure) is taken word
> for word from a paper I wrote that has recently been accepted for
> publication in the journal Neurocomputing. Please feel free to post my
> question and the figure to the BMI mailing list with the following
> reference.
> Bonny Banerjee. SELP: A general-purpose framework for learning the
> norms from saliencies in spatiotemporal data. Neurocomputing, Elsevier.
> [To appear]
> Best,
> Bonny
Attachment: LearningReceptiveFieldHierarchies-Banerjee.pdf (application/pdf)
URL: <http://lists.cse.msu.edu/pipermail/bmi/attachments/20121120/0ae263e1/attachment-0001.pdf>