<html>
  <head>
    <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Leonid,<br>
    <br>
    Yes, I agree with you about the exponential complexity of symbolic
    inputs and symbolic states.  Here is a more intuitive explanation for
    those who are not familiar with discrete complexity theory:<br>
    <br>
    - Two different symbols are simply different.  There is no natural
    distance between them.  Some methods, e.g., Soar, assign each symbol
    a set of handcrafted features, so that the distance between any two
    symbols can be measured in terms of those handcrafted features.<br>
    This leads to the well-known brittleness problem, since such a set of
    handcrafted features is never sufficient for an open-ended
    world.<br>
    <br>
    - In contrast, the brain uses emergent sensory images and emergent
    muscle images.  Objects or actions in such images are "continuous"
    since they arise from the natural world and natural actions.  If our
    DN model is correct, the brain interpolates among an exponential
    number of sensory subimages and action subimages, not by
    mathematical logic, but by associations (spatial statistics); see
    the sketch below.<br>
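    A minimal sketch of this contrast, assuming toy numeric "images" and a
    simple distance-weighted association (an illustration only, with
    made-up vectors and names, not the DN algorithm itself):<br>
    <pre>
# Illustrative sketch only: hypothetical toy vectors and names, not the DN model.
import numpy as np

# Symbolic side: two different symbols are simply different; any "distance"
# must come from handcrafted features attached to each symbol.
handcrafted = {"cup": np.array([1.0, 0.0, 1.0]),   # e.g., [graspable, animate, hollow]
               "mug": np.array([1.0, 0.0, 1.0]),
               "dog": np.array([1.0, 1.0, 0.0])}

def symbolic_distance(a, b):
    # brittle: the result is only as good as the handcrafted feature set
    return float(np.linalg.norm(handcrafted[a] - handcrafted[b]))

# Emergent side: sensory subimages are points in a continuous space, so a new
# input can be handled by interpolating among stored subimages via
# association strengths (spatial statistics), with no handcrafted features.
stored_images  = np.random.rand(1000, 64)   # toy "sensory subimages"
stored_actions = np.random.rand(1000, 8)    # toy "action subimages"

def interpolate_action(new_image, k=5):
    d = np.linalg.norm(stored_images - new_image, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)           # association strength ~ similarity
    return (w[:, None] * stored_actions[nearest]).sum(axis=0) / w.sum()

print(symbolic_distance("cup", "dog"))
print(interpolate_action(np.random.rand(64)))
    </pre>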
    <br>
    Many domain experts will laugh, or at least have doubts, when they
    read the above, but the papers cited at the BMI site give more detail.
    Those who have the vision to go through the BMI 6-Disciplinary Program
    will find rich evidence that supports the above explanation.<br>
    <br>
    -John <br>
    <br>
    On 10/22/11 10:59 PM, leonid wrote:
    <blockquote cite="mid:622EF460BEF845EF95A956FAEB36B8C2@jung"
      type="cite">
      <meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
      <meta name="GENERATOR" content="MSHTML 8.00.6001.19154">
      <div dir="ltr" align="left"><span class="328503902-23102011"><font
            color="#0000ff" face="Arial" size="2">John,</font></span></div>
      <div dir="ltr" align="left"><span class="328503902-23102011"></span> </div>
      <div dir="ltr" align="left"><span class="328503902-23102011"><font
            color="#0000ff" face="Arial" size="2">Thank you for Paul
            slides. Actually I participated in the same CLION meeting,
            there were many our friends. Quite productive meeting.</font></span></div>
      <div dir="ltr" align="left"><span class="328503902-23102011"></span> </div>
      <div dir="ltr" align="left"><span class="328503902-23102011"><font
            color="#0000ff" face="Arial" size="2">Why brain functions
            are not based on mathematical logic? - It is a good
            question. It is possible to show that exponential complexity
            of symbolic algorithms is related to logic. The proof
            follows closely Godel's proof of logic inconsistency. In
            case of an infinite system the result is inconsistency, in
            case of a finite system (say, a computer, or brain) the
            result is coombinatorial or exponential complexity.</font></span></div>
      <div dir="ltr" align="left"><span class="328503902-23102011"></span> </div>
      <div dir="ltr" align="left"><span class="328503902-23102011"><font
            color="#0000ff" face="Arial" size="2">Logic appears in brain
            as a result of dynamic processes that start with vague
            states and converge to near logical states. This process
            overcomes combinatorial complexity. I published several
            mathematical modeling papers on this topic. Recently it was
            proven experimentally in Harvard Brain Imaging Center that
            this process "from vague to crisp"  is a good model for
            actual neural processes in visual system during perception.</font></span></div>
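      <div dir="ltr" align="left"><font face="Arial" size="2">A minimal
          numerical sketch of such a "vague to crisp" convergence, under an
          assumed toy Gaussian similarity (an illustration of the idea only,
          not the published dynamic-logic equations):</font></div>
      <pre>
# Toy illustration of a "vague to crisp" dynamic process (assumed Gaussian
# similarity; made-up numbers, not the published dynamic-logic model).
import numpy as np

data   = np.array([0.9, 1.1, 4.0, 4.2])    # toy observations
models = np.array([0.0, 5.0])               # toy concept-model centers

sigma = 10.0                                 # large sigma: vague associations
for step in range(30):
    # fuzzy association weights between every datum and every model
    sim   = np.exp(-(data[:, None] - models[None, :]) ** 2 / (2 * sigma ** 2))
    assoc = sim / sim.sum(axis=1, keepdims=True)
    # models adapt toward the data they are (softly) associated with
    models = (assoc * data[:, None]).sum(axis=0) / assoc.sum(axis=0)
    sigma  = max(0.3, sigma * 0.8)           # vagueness shrinks each step
print(np.round(assoc, 2))  # near 0/1: associations have become crisp (near-logical)
print(models)              # centers settle near the two data clusters (about 1.0 and 4.1)
      </pre>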
      <div dir="ltr" align="left"><span class="328503902-23102011"></span> </div>
      <div dir="ltr" align="left"><span class="328503902-23102011"><font
            color="#0000ff" face="Arial" size="2">Best</font></span></div>
      <div dir="ltr" align="left"><span class="328503902-23102011"><font
            color="#0000ff" face="Arial" size="2">Leonid</font></span></div>
      <br>
      <div dir="ltr" class="OutlookMessageHeader" align="left"
        lang="en-us">
        <hr tabindex="-1">
        <font face="Tahoma" size="2"><b>From:</b> Juyang Weng
          [<a class="moz-txt-link-freetext" href="mailto:weng@cse.msu.edu">mailto:weng@cse.msu.edu</a>] <br>
          <b>Sent:</b> Saturday, October 22, 2011 4:33 PM<br>
          <b>To:</b> leonid<br>
          <b>Cc:</b> <a class="moz-txt-link-abbreviated" href="mailto:leonid@seas.harvard.edu">leonid@seas.harvard.edu</a>; bmilist<br>
          <b>Subject:</b> Re: BMI debate: Can we start to look at the
          brain-mind from the entire system point of view?<br>
        </font><br>
      </div>
      Leonid,<br>
      <br>
      Good question: how does logic emerge from illogical neural firings?<br>
      <br>
      Many neural net researchers have a background in electrical
      engineering or physics, but not much in computer science.  To
      understand how a new category of neural networks (DN) can do
      abstraction, one needs to be familiar with the automata theory
      typically taught in computer science.  Mathematical logic
      (propositional logic, first-order logic, second-order logic, etc.)
      is useful for understanding what formal logic means.  But
      mathematical logic is not sufficient to explain brain functions.<br>
      Brain functions are not based on mathematical logic.<br>
      <br>
      Why? <br>
        <br>
      -John<br>
      <br>
      <br>
      On 10/22/11 10:15 AM, leonid wrote:
      <blockquote cite="mid:8D6BCA6916CA463CA63BF96D152976E9@jung"
        type="cite">
        <meta name="GENERATOR" content="MSHTML 8.00.6001.19154">
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"><font
              color="#0000ff" face="Arial" size="2">John,</font></span></div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"></span> </div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"><font
              color="#0000ff" face="Arial" size="2">Thank you for the
              good words.</font></span></div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"></span> </div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"><font
              color="#0000ff" face="Arial" size="2">Michael,</font></span></div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"></span> </div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"><font
              color="#0000ff" face="Arial" size="2">When you were at MIT
              we were developing similar techniques. Now you rely on
              logic. The problem with logic is: How it emerges from
              illogical neural firings?</font></span></div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"></span> </div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us">
          <div dir="ltr" class="OutlookMessageHeader" align="left"
            lang="en-us"><span class="890360614-22102011"><font
                color="#0000ff" face="Arial" size="2">We would be glad
                to get you involved.</font></span></div>
          <div dir="ltr" class="OutlookMessageHeader" align="left"
            lang="en-us"><span class="890360614-22102011"></span> </div>
          <div dir="ltr" class="OutlookMessageHeader" align="left"
            lang="en-us"><span class="890360614-22102011"><font
                color="#0000ff" face="Arial" size="2">Best</font></span></div>
          <div dir="ltr" class="OutlookMessageHeader" align="left"
            lang="en-us"><span class="890360614-22102011"><font
                color="#0000ff" face="Arial" size="2">Leonid</font></span></div>
        </div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"></span> </div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><span class="890360614-22102011"></span> </div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us">
          <hr tabindex="-1"> </div>
        <div dir="ltr" class="OutlookMessageHeader" align="left"
          lang="en-us"><font face="Tahoma" size="2"><b>From:</b> Juyang
            Weng [<a moz-do-not-send="true"
              class="moz-txt-link-freetext"
              href="mailto:weng@cse.msu.edu">mailto:weng@cse.msu.edu</a>]
            <br>
            <b>Sent:</b> Thursday, October 20, 2011 8:09 PM<br>
            <b>To:</b> leonid; <a moz-do-not-send="true"
              class="moz-txt-link-abbreviated"
              href="mailto:leonid@seas.harvard.edu">leonid@seas.harvard.edu</a><br>
            <b>Cc:</b> 'bmilist'; Michael I Jordan<br>
            <b>Subject:</b> Re: BMI debate: Can we start to look at the
            brain-mind from the entire system point of view?<br>
          </font><br>
        </div>
        Hi Leonid,<br>
        <br>
        You provided a great list of items.  I also agree that views
        from the top, views from the bottom, and many levels in the
        middle are all needed. <br>
        <br>
        I offer a key divide between symbolic representations and
        emergent representations for us to start from:<br>
        Michael I. Jordan correctly stated at the David Rumelhart
        Memorial Plenary Talk at IJCNN 2011 that neural networks do not
        abstract well.  He talked about symbolic models instead.<br>
        <br>
        (a) What did he mean by saying that neural networks do not
        abstract well?<br>
        (b) Why did a researcher who has worked on neural nets before
        talk instead about symbolic models at a major neural network
        conference in honor of a neural network pioneer?<br>
        <br>
        I am giving him a CC just in case he is interested.<br>
        <br>
        -John<br>
        <br>
        On 10/20/11 4:45 PM, leonid wrote:
        <blockquote cite="mid:5B22AABFC3A044BB931CBE2FE81CBEBF@jung"
          type="cite">
          <meta name="GENERATOR" content="MSHTML 8.00.6001.19154">
          <div dir="ltr" align="left"><span class="546400720-20102011"><font
                color="#0000ff" face="Arial" size="2">Hello to everybody</font></span></div>
          <div dir="ltr" align="left"><span class="546400720-20102011"></span> </div>
          <div dir="ltr" align="left"><span class="546400720-20102011"><font
                color="#0000ff" face="Arial" size="2">Views from the top
                and views from the bottom must be combined. Physics is a
                successful science because it concentrates on
                fundamental principles, and then proceeds to
                experimentally verifiable predictions. There are first
                principles operating in the mind and brain. There are
                first principles at every level of organization of
                matter. For the brain-mind I would list few:</font></span></div>
          <div dir="ltr" align="left"><span class="546400720-20102011"></span> </div>
          <div dir="ltr" align="left"><span class="546400720-20102011"><font
                color="#0000ff" face="Arial" size="2">- hierarchical
                organization of mental representations</font></span></div>
          <div dir="ltr" align="left"><span class="546400720-20102011"><font
                color="#0000ff" face="Arial" size="2">- bottom-up and
                top-down signal interactions</font></span></div>
          <div dir="ltr" align="left"><span class="546400720-20102011"><font
                color="#0000ff" face="Arial" size="2">- instinctual
                drives measuring vital organismic parameters and
                communicating results to decision-making mechanisms</font></span></div>
          <div dir="ltr" align="left"><span class="546400720-20102011"><font
                color="#0000ff" face="Arial" size="2">- emotions serving
                as neural signals communicating (as above) satisfaction
                or dissatisfaction of instinctual drives</font></span></div>
          <div dir="ltr" align="left"><span class="546400720-20102011"><font
                color="#0000ff" face="Arial" size="2">(the two
                principles above are discovered by Grossberg-Levine
                theory)</font></span></div>
          <div><span class="546400720-20102011"></span><font
              face="Arial"><font color="#0000ff"><font size="2">-<span
                    class="546400720-20102011"> </span>the most important
                  instinctual drive is<span class="546400720-20102011">
                    the "instinct for knowledge." It drives matching of
                    bottom-up and top-down signals so that mental
                    representations are similar to reality (it is more
                    important than</span> survival or procreation<span
                    class="546400720-20102011">, because survival is not
                    possible without perception and cognition)</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">- "vague-to-crisp"
                    process evolves mental representations to match
                    reality, this is the operation of the instinct for
                    knowledge, which overcomes exponential complexity</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">- special emotions
                    correspond to the knowledge instinct; these are not
                    basic, but aesthetic emotions explaining higher
                    human cognitive abilities from understanding and
                    cognition of objects an situations to abstract
                    concepts, and higher up to "mysterious" meanings of
                    life and emotions of the beautiful</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">- we need to understand
                    the difference between language and cognitive
                    representations, and how they interact</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">- how the hierarchy of
                    cognition is learned by every human child</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">- what are emotions of
                    cognitive dissonances, and how human evolution
                    overcame these (most likely - music)</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">- some of the above are
                    described by mathematical models - this is a must</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">- some of the above is
                    experimentally confirmed - this is a must</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011"></span></font></font></font> </div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">What did I miss ?
                    (possibly something) - please add fundamental laws,
                    explaining a lot from few assumptions, mathematical
                    models of these processes making experimental
                    predictions, and finally experimental tests of all
                    of the above.</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011"></span></font></font></font> </div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">Best</span></font></font></font></div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011">Leonid</span></font></font></font></div>
          <div> </div>
          <div> </div>
          <div><font face="Arial"><font color="#0000ff"><font size="2"><span
                    class="546400720-20102011"></span></font></font></font> </div>
          <div><br>
          </div>
          <div dir="ltr" class="OutlookMessageHeader" align="left"
            lang="en-us">
            <hr tabindex="-1"> <font face="Tahoma" size="2"><b>From:</b>
              Juyang Weng [<a class="moz-txt-link-freetext"
                href="mailto:weng@cse.msu.edu" moz-do-not-send="true">mailto:weng@cse.msu.edu</a>]
              <br>
              <b>Sent:</b> Thursday, October 20, 2011 2:08 PM<br>
              <b>To:</b> bmilist<br>
              <b>Subject:</b> BMI debate: Can we start to look at the
              brain-mind from the entire system point of view?<br>
            </font><br>
          </div>
          Dear all: <br>
          <br>
          After talking to some of my colleagues, we hereby kick off a BMI
          debate via this email on <a class="moz-txt-link-abbreviated"
            href="mailto:bmi@lists.cse.msu.edu" moz-do-not-send="true">bmi@lists.cse.msu.edu</a>.<br>
          Many of you on this anonymous list told me that you are
          interested and want to be kept posted.  However, we will use this<br>
          anonymous list sparingly.  If you want to be kept posted about
          this debate and other BMI activities, sign on to the bmi mailing list
          <br>
          at <a class="moz-txt-link-freetext"
            href="http://lists.cse.msu.edu/cgi-bin/mailman/listinfo/bmi"
            moz-do-not-send="true">http://lists.cse.msu.edu/cgi-bin/mailman/listinfo/bmi</a>
          or simply Google it with keywords like "BMI mailing list
          MSU".<br>
          Once you receive email from the mailing list, you can post
          simply via reply.  The BMI mailing list is a moderated list to
          avoid<br>
          unrelated emails.  If there is sufficient interest, BMI might
          host a live web debate in a few weeks.  Post your views!<br>
          <br>
          The following email I sent to Dave Touretzky is the kick-off
          for the BMI debates.  I will provide some interesting examples
          soon.<br>
          <br>
          On 10/20/11 12:59 PM, Juyang Weng wrote:
          <blockquote cite="mid:4EA05385.5050906@cse.msu.edu"
            type="cite">Hi Dave,<br>
            <br>
            I read some of your papers about the hippocampus, which are
            very interesting.  Let me inject some basic but probably very
            controversial ideas that you will probably reject.  If you do
            not mind, I will post this discussion to the BMI mailing list.
            The main purpose is to attract more talented researchers to
            this important brain-mind subject.<br>
            <br>
            How about looking at the brain from a top, system-level point
            of view?  I believe that a top-level (but detailed) theory is
            powerful, since the brain basically does signal processing
            (though not in the traditional sense).  Maybe with this view,
            our future design of experiments could be more productive.
            Let me start with one example:<br>
            <br>
            One of your papers is "Synaptic Learning Models of Map
            Separation in the Hippocampus", <i>Neurocomputing</i>, <b>32</b>:379-384,
            2000.   The co-authors wrote: "If the perforant path
            projection to CA3 functions as a pattern completion
            mechanism, and the DG projection via the mossy fibers
            performs pattern separation (O'Reilly and McClelland, 1994),
            then ..."<br>
            <br>
            My new perspective on the brain has benefited from such
            local views, but I think that such local views can also
            benefit from an entire brain-mind point of view, in the
            sense of a giant Finite Automaton (FA).  This brain FA is
            not handcrafted, but rather developed, since all phenotypes
            emerge from a single cell (the zygote).  So, I model such a
            developmental FA as the Developmental Network (DN).  Then,
            the hippocampus is simply a very small part of a giant DN.
            According to how the DN works, I predict the following: if
            we focus on a small part (e.g., the hippocampus) of this DN,
            we will definitely get hopelessly lost, like a hiker in a
            forest without a global map.  He can see some local
            phenomena from where he stands, but he does not see the
            entire forest.<br>
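            A minimal sketch of this "developed rather than handcrafted"
            automaton idea, with made-up states and inputs (an
            illustration only, not the DN update equations):<br>
            <pre>
# Generic sketch: a finite automaton whose transition table emerges from
# experienced sequences instead of being handcrafted.  The states and inputs
# below are made up for illustration; this is not the DN algorithm itself.
from collections import defaultdict

class EmergentFA:
    def __init__(self):
        # counts of experienced (state, input) -> next_state transitions
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, state, symbol, next_state):
        self.counts[(state, symbol)][next_state] += 1

    def step(self, state, symbol):
        seen = self.counts[(state, symbol)]
        if not seen:                        # never experienced: stay put
            return state
        return max(seen, key=seen.get)      # most strongly associated next state

fa = EmergentFA()
# "develop" the automaton from experience, not from a handcrafted table
for s, x, s2 in [("rest", "see_cup", "reach"),
                 ("reach", "touch", "grasp"),
                 ("rest", "see_cup", "reach")]:
    fa.learn(s, x, s2)
print(fa.step("rest", "see_cup"))   # prints "reach"
            </pre>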
            <br>
            Focused, per-phenomenon discoveries have prevailed in
            the modern brain-science literature, with few
            exceptions (Charles Darwin is one).  This is probably
            because only such papers can be accepted and funded in
            modern times.  Although those phenomena are useful, they are
            piecemeal.  Now, there seem to be enough pieces to put
            the grand puzzle together.  I have established what a DN can
            do in real time by modeling the brain-mind from the entire
            FA (DN) point of view.  Since all pieces of the DN seem to fit
            what we know from brain science, the brain should not
            be less efficient than a DN.<br>
            <br>
            You can say that this is just fantasy, but I have a series
            of rigorous proofs. <br>
            <br>
            Daniel M. Wolpert said at SfN 2009 that the over-1400-page
            volume of "Principles of Neural Science" by Kandel et
            al. could be much condensed if we could model the entire
            brain with a computational theory.  I hope that the DN theory
            can help that condensing process. <br>
            <br>
            A major infrastructure problem is that what I talked about
            above spans at least 6 disciplines.  Meaningful
            conversations are extremely difficult.  If you feel angry or
            insulted by the text above, I feel that it is partially
            because of this huge divide. <br>
            <br>
            I am giving a CC to Jay, as his work was cited. <br>
            <br>
            Best regards,<br>
            <br>
            -John</blockquote>
          <br>
          -John<br>
          <pre class="moz-signature" cols="72">-- 
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
3115 Engineering Building
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: <a class="moz-txt-link-abbreviated" href="mailto:weng@cse.msu.edu" moz-do-not-send="true">weng@cse.msu.edu</a>
URL: <a class="moz-txt-link-freetext" href="http://www.cse.msu.edu/%7Eweng/" moz-do-not-send="true">http://www.cse.msu.edu/~weng/</a>
----------------------------------------------

</pre>
        </blockquote>
        <br>
        <pre class="moz-signature" cols="72">-- 
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
3115 Engineering Building
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: <a class="moz-txt-link-abbreviated" href="mailto:weng@cse.msu.edu" moz-do-not-send="true">weng@cse.msu.edu</a>
URL: <a class="moz-txt-link-freetext" href="http://www.cse.msu.edu/%7Eweng/" moz-do-not-send="true">http://www.cse.msu.edu/~weng/</a>
----------------------------------------------

</pre>
      </blockquote>
      <br>
      <pre class="moz-signature" cols="72">-- 
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
3115 Engineering Building
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: <a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:weng@cse.msu.edu">weng@cse.msu.edu</a>
URL: <a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://www.cse.msu.edu/%7Eweng/">http://www.cse.msu.edu/~weng/</a>
----------------------------------------------

</pre>
    </blockquote>
    <br>
    <pre class="moz-signature" cols="72">-- 
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
3115 Engineering Building
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: <a class="moz-txt-link-abbreviated" href="mailto:weng@cse.msu.edu">weng@cse.msu.edu</a>
URL: <a class="moz-txt-link-freetext" href="http://www.cse.msu.edu/~weng/">http://www.cse.msu.edu/~weng/</a>
----------------------------------------------

</pre>
  </body>
</html>