<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
On 11/18/12 4:43 PM, Bonny Banerjee wrote:
<div class="moz-forward-container"><br>
<table border="0" cellpadding="0" cellspacing="0">
<tbody>
<tr>
<td style="font: inherit;" valign="top">
<div><font face="arial" size="2">Dear John,</font></div>
<div><font face="arial" size="2"><br>
</font></div>
<div><font face="arial" size="2">I have been following
your work lately and have been very interested in your
approach of answering important questions by drawing
on multiple fields instead of staying confined to a
narrow one. Personally, I think only that kind of
approach can lead to a general theory of brain-mind.
Hope to read your book sometime soon.</font></div>
<div><font face="arial" size="2"><br>
</font></div>
<div><font face="arial" size="2">I am writing this email
to know your opinion on an issue that has bugged me
for quite some time.</font></div>
<div><font face="arial" size="2"><br>
</font></div>
<div><font face="arial" size="2">In neural networks,
complex receptive field structures (or features) in
higher-layer neurons can be learned from simpler
features in lower-layer neurons in at least two
ways: by the principle of spatial organization, which
follows from the seminal work of Hubel and Wiesel,
and by the principle of linear superposition, which is
widely used in machine learning applications with
impressive results.</font></div>
<div><font face="arial" size="2"><br>
</font></div>
<div><font face="arial" size="2">Under the principle of
spatial organization, each neuron in the lower layer
receives input from a unique region of space. Two or
more neurons may overlap in their inputs, but the
overlap is always less than 100%. The physical size
of receptive fields increases as we ascend the
hierarchy. A higher-layer feature is learned by
forming strong connections with a subset of neurons
in the lower layer, the subset being determined by
the input data.</font></div>
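The spatial-organization scheme can be sketched in a few lines of NumPy. This is purely my own toy illustration (the 1-D input, receptive-field sizes, and "pick the most active neurons" rule are illustrative assumptions, not taken from the email): lower-layer neurons tile the input with partial overlap, and a higher-layer feature connects to a data-driven subset of them.

```python
import numpy as np

# Toy sketch of spatial organization (illustrative sizes, not from the text).
# A 1-D input of 16 samples; 3 lower-layer neurons whose receptive fields
# tile the input with 50% overlap -- each sees a unique region, never 100%.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)            # input signal
rf_size, stride = 8, 4                 # overlap = (8 - 4) / 8 = 50%
lower = np.array([x[i * stride : i * stride + rf_size].mean()  # toy response
                  for i in range(3)])

# A higher-layer feature forms strong connections to a data-driven subset
# of lower-layer neurons (here, simply the two most active ones).
subset = np.argsort(-np.abs(lower))[:2]
weights = np.zeros(3)
weights[subset] = 1.0                  # strong connections to the subset
higher = weights @ lower               # higher-layer response

# The higher neuron's effective receptive field spans the union of the
# subset's fields, so physical RF size grows as we ascend the hierarchy.
```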
<div><font face="arial" size="2"><br>
</font></div>
<div><font face="arial" size="2">Under the principle of
linear superposition, all neurons in the lower layer
receive input from the same region of space;
therefore, all neurons always have 100% overlap in
their inputs. The physical size of receptive fields
remains constant throughout the hierarchy. However,
the functional receptive field size increases and the
resolution decreases as we ascend the hierarchy.
That is, higher-layer neurons are less sensitive to
smaller spatial structures because they encode a
large space in a small field. As with spatial
organization, a higher-layer feature is learned by
forming strong connections with a subset of neurons
in the lower layer, the subset being determined by
the input data. </font></div>
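The contrast with linear superposition can be sketched the same way. Again a toy illustration of my own (the full-field random basis and subset rule are assumptions): every lower-layer neuron receives the entire input, so overlap is always 100%, and only the functional selectivity changes up the hierarchy.

```python
import numpy as np

# Toy sketch of linear superposition (illustrative sizes, not from the text).
# All 6 lower-layer neurons receive the SAME 16-sample input (100% overlap);
# each responds to the input's projection onto its own full-field filter.
rng = np.random.default_rng(1)
x = rng.standard_normal(16)
basis = rng.standard_normal((6, 16))   # one whole-input filter per neuron
lower = basis @ x                      # every neuron sees all of x

# A higher-layer feature again connects strongly to a data-driven subset,
# but its physical receptive field is still the whole input -- only its
# functional selectivity (which superpositions it prefers) differs.
subset = np.argsort(-np.abs(lower))[:2]
weights = np.zeros(6)
weights[subset] = 1.0
higher = weights @ lower
```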
<div><font face="arial" size="2"><br>
</font></div>
<div><font face="arial" size="2">The attached figure
illustrates the two principles using a caricature of
center-surround receptive fields in the lower layer
and a simple cell receptive field in the higher layer.
Which of the two principles do you think is employed
by the brain? And why?</font></div>
<div><font face="arial" size="2"><br>
</font></div>
<div><font face="arial" size="2">I would really
appreciate your response.</font></div>
<div><font face="arial" size="2"><br>
</font></div>
<div><font face="arial" size="2">Best regards,</font></div>
<div><font face="arial" size="2">Bonny</font></div>
<div><br>
</div>
<div><font face="arial" size="2">
<div>---</div>
<div>Bonny Banerjee, Ph.D.</div>
<div>Assistant Professor</div>
<div>Institute for Intelligent Systems, and Electrical
& Computer Engineering</div>
<div>The University of Memphis</div>
<div>208B Engineering Science Bldg</div>
<div>Ph: 1-901-678-4498</div>
<div>Fax: 1-901-678-5469</div>
<div>Web: <a class="moz-txt-link-freetext" href="http://sites.google.com/site/bonnybanerjee1/">http://sites.google.com/site/bonnybanerjee1/</a></div>
<div><br>
</div>
</font></div>
</td>
</tr>
</tbody>
</table>
<br>
<br>
<br>
<div class="moz-cite-prefix">On 11/19/12 9:21 PM, Juyang Weng
wrote:<br>
</div>
<blockquote cite="mid:50AAE90C.4030000@cse.msu.edu" type="cite">
Dear Bonny,<br>
<br>
I had the same questions over 20 years ago when we
developed Cresceptron. Both principles are based on a cascade
idea, which Cresceptron used. <br>
<br>
I think that the cascade idea (or, deep learning idea) is
largely superficial, secondary, and incorrect.<br>
<br>
The first and primary mechanism in the brain seems to be
direct pattern matching; that is, shallow match first. Since
this idea is hard to publish without solid neuroscience
backing, I expressed this controversial idea in the following
article: <a moz-do-not-send="true"
href="http://www.brain-mind-magazine.org/read.php?file=BMM-V1-N2-paper4-AI.pdf#view">
A Theoretical Proof Bridged the Two AI Schools but a Major AI
Journal Desk-Rejected It</a> <br>
<br>
Do you mind if I post your email to the BMI mailing list so that
more people can benefit from such discussions? Such views are
very difficult to publish in any peer-reviewed
venue, since our respected peer reviewers will reject
them. <br>
<br>
Best,<br>
<br>
-John</blockquote>
</div>
<br>
On 11/20/12 8:55 AM, Bonny Banerjee wrote:<br>
<br>
> John,<br>
> Thanks for your response.<br>
> Note that the question I asked (along with the figure) is taken
word for word from a paper I wrote that has recently been accepted
for publication in the journal Neurocomputing. Please feel free to
post my question and the figure to the BMI mailing list with the
following reference.<br>
> Bonny Banerjee. SELP: A general-purpose framework for learning
the norms from saliencies in spatiotemporal data. Neurocomputing,
Elsevier. [To appear]<br>
> Best,<br>
> Bonny<br>
<br>
</body>
</html>