[Amdnl] Fwd: Connectionists: The symbolist quagmire

Juyang Weng juyang.weng at gmail.com
Tue Jun 21 18:15:33 EDT 2022


---------- Forwarded message ---------
From: Juyang Weng <juyang.weng at gmail.com>
Date: Tue, Jun 21, 2022 at 6:15 PM
Subject: Re: Connectionists: The symbolist quagmire
To: Gary Marcus <gary.marcus at nyu.edu>
Cc: Post Connectionists <connectionists at mailman.srv.cs.cmu.edu>


Dear Gary:
You wrote that consciousness is not a requirement for "common sense or
natural language understanding".  I respectfully disagree.
If you read my paper, you will see that consciousness is required not only
for "common sense or natural language understanding" but also for tasks
that do not need a natural language, such as driverless cars.
All animate and inanimate objects in our environment communicate through
"natural sign languages".  Consciousness corresponds to "contexts that are
larger and higher than an immediate context".
If an immediate context is an ASCII symbol in your design document, your
machine cannot derive "larger and higher contexts" from the symbol,
because the machine does not read your design document.
What is a larger and higher context?  For example, when I follow a lane, I
must maintain consciousness beyond the lane following itself, such as the
following larger and higher contexts:
(a) Am I doing well?
(b) Did I do something wrong?  If so, what should I do?
(c) How do I improve?
Without consciousness of (a), (b), and (c), the learner is brittle, like
all deep learning algorithms (including AlphaFold) that only do data
fitting.  That is why all deep learning projects must perform data
deletion, i.e., the deletion of undesirable data.
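To make this concrete, here is a minimal sketch (my own toy illustration;
the symbol names and decision rule are hypothetical) of why a machine that
only maps sensory inputs to designer-defined symbols has no access to the
larger contexts (a)-(c): the meanings live in the design document, which
the machine never reads.

# Design document (human-readable only; the machine never sees this):
#   "LANE_LEFT"  means "steer left to stay in the lane"
#   "LANE_RIGHT" means "steer right to stay in the lane"

def trained_mapping(sensory_input):
    """A stand-in for any trained input-to-symbol classifier."""
    # Toy decision rule in place of a trained network; the point is
    # that the output is an opaque symbol either way.
    return "LANE_LEFT" if sum(sensory_input) > 0 else "LANE_RIGHT"

symbol = trained_mapping([0.2, -0.1, 0.4])
print(symbol)  # LANE_LEFT

# The machine emits a symbol, but it holds no representation of the
# larger contexts (a) "Am I doing well?", (b) "Did I do wrong?",
# (c) "How do I improve?" -- those exist only in the design document.
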
Read my report to Nature:
Data Deletions in AI Papers in *Nature* since 2015 and the Appropriate
Protocol, June 28, 2021:
http://www.cse.msu.edu/~weng/research/2021-06-28-Report-to-Nature-specific-PSUTS.pdf
-John

On Tue, Jun 21, 2022 at 5:19 PM Gary Marcus <gary.marcus at nyu.edu> wrote:

> Not that I really know what consciousness is, but I doubt that it is a
> requirement for any of the challenges I have raised, e.g., with respect
> to common sense or natural language understanding.
>
> Systems like AlphaFold and turn-by-turn directions presumably lack
> consciousness but give us perfectly reasonable answers using symbolic
> inputs. I don’t see why more general forms of AI need to be different,
> though they undoubtedly will require richer representations than are
> currently trendy.
>
> On Jun 21, 2022, at 2:14 PM, Juyang Weng <juyang.weng at gmail.com> wrote:
>
>
> Dear Gary,
>
> You wrote: "My own view is that arguments around symbols per se are not
> very productive, and that the more interesting questions center around what
> you *do* with symbols once you have them.  If you take symbols to be
> patterns of information that stand for other things, like ASCII encodings,
> or individual bits for features (e.g. On or Off for a thermostat state),
> then practically every computational model anywhere on the spectrum makes
> use of symbols. For example the inputs and outputs (perhaps after a
> winner-take-all operation or somesuch) of typical neural networks are
> symbols in this sense, standing for things like individual words,
> characters, directions on a joystick etc."
>
> I respectfully disagree, and that is why "practically every
> computational model anywhere" cannot learn consciousness.  Such models
> are basically pattern recognition machines for a specific task.
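>
> (For concreteness, here is a minimal toy sketch, with hypothetical
> labels, of the "winner-take-all" operation quoted above: it reduces a
> network's output vector to a single discrete symbol by picking the
> strongest unit.)
>
> import numpy as np
>
> # Toy output activations of a network over four candidate words.
> VOCAB = ["left", "right", "forward", "backward"]  # hypothetical labels
> activations = np.array([0.1, 0.7, 0.15, 0.05])
>
> # Winner-take-all: the strongest unit selects the output symbol.
> symbol = VOCAB[int(np.argmax(activations))]
> print(symbol)  # right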
>
> I skip the issue of "data selection" in deep learning here.  Deep
> learning not only hits a wall; all its published data appear to be
> invalid.
>
> Gary, this issue is probably too fundamental to see unless you try to
> understand the conscious learning algorithm (see below), the first ever
> in the world, as far as I am humbly aware.
>
> Let me explain in intuitive terms:
>
> (1) You have a series of ASCII symbols, e.g., ASCII-1, ASCII-2, ASCII-3,
> ASCII-4, ...  You have 1 million such ASCII symbols.  Any number will
> do, as long as it is large.
>
> (2) You specify the meanings of these ASCII symbols in your design
> document:
> ASCII-1: forward-move-of-joystick-A,
> ASCII-2: backward-move-of-joystick-A,
> ASCII-3: left-move-of-joystick-A,
> ASCII-4: right-move-of-joystick-A,
> ...
> You have at least 1 million such lines.
>
> (3) Your machine does not read your design document in (2) and cannot
> think about it.  It only learns the mapping from sensory inputs to one
> of these ASCII symbols.
>
> (4) Therefore, your machine cannot attain the consciousness required to
> judge whether it is doing its joystick work (e.g., driving using a
> joystick) well, because your knowledge hierarchy (built on these 1
> million symbols) is static.  The machine cannot recompose new meanings
> from these symbols, because it does not understand any of the symbols at
> all!  Why do I understand my own moving forward?  I do not have (2).
> Moving forward is my own intent, my own volition!  I feel the effects of
> my volition and decide whether I want to repeat it.
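>
> (A minimal sketch of (1)-(4), my own illustration with hypothetical
> names: the design document is a table the machine never consumes, so
> the learned mapping goes from sensory input to an opaque token only.)
>
> # Step (2): the design document, human-readable meanings only.
> # The machine below never reads this table.
> DESIGN_DOC = {
>     "ASCII-1": "forward-move-of-joystick-A",
>     "ASCII-2": "backward-move-of-joystick-A",
>     "ASCII-3": "left-move-of-joystick-A",
>     "ASCII-4": "right-move-of-joystick-A",
> }
>
> NUM_SYMBOLS = 4  # the machine only knows how many tokens exist
>
> def learned_mapping(sensory_input):
>     """Step (3): a stand-in for any trained input-to-token classifier."""
>     # Toy rule in place of a trained network; it only ever handles
>     # the opaque tokens, never the meaning strings in DESIGN_DOC.
>     index = 1 + (hash(tuple(sensory_input)) % NUM_SYMBOLS)
>     return f"ASCII-{index}"
>
> token = learned_mapping([0.3, 0.9])
> # Step (4): the machine emits a token such as "ASCII-2" without any
> # access to its meaning, so it cannot recompose or judge meanings.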
>
> (5) Without consciousness, machine learning is static.  Consciousness
> must go beyond any static hierarchy.
> (a) My children's consciousness does.  They have told me views (and
> intents) that surprised me; I did not teach them such views.
> (b) That is also why a human brain can do research.  My own research
> surprised my father-in-law; he does not believe I can do what I told him
> I can.
>
> In summary, all ASCII symbols are a dead end.  Like AI drugs, they are
> addictive and waste our resources in AI.
>
> As the first-ever conscious learning algorithm, the DN-3 neural network
> must autonomously create any fluid hierarchy that consciousness requires
> during human-like thinking.
> Please read about the first conscious learning algorithm, which will be
> able to do scientific research in the future:
>
> Peer-reviewed version:
>
> @INPROCEEDINGS{WengCLAIEE22,
>   AUTHOR    = "J. Weng",
>   TITLE     = "An Algorithmic Theory of Conscious Learning",
>   BOOKTITLE = "2022 3rd Int'l Conf. on Artificial Intelligence in
>                Electronics Engineering",
>   ADDRESS   = "Bangkok, Thailand",
>   PAGES     = "1-10",
>   MONTH     = "Jan. 11-13",
>   YEAR      = "2022",
>   NOTE      = "\url{http://www.cse.msu.edu/~weng/research/ConsciousLearning-AIEE22rvsd-cite.pdf}"
> }
>
>
> Not yet peer-reviewed:
>
> @MISC{WengDN3-RS22,
>   AUTHOR       = "J. Weng",
>   TITLE        = "A Developmental Network Model of Conscious Learning in
>                   Biological Brains",
>   HOWPUBLISHED = "Research Square",
>   PAGES        = "1-32",
>   MONTH        = "June 7",
>   YEAR         = "2022",
>   NOTE         = "doi: \url{https://doi.org/10.21203/rs.3.rs-1700782/v2},
>                   desk-rejected by {\em Nature}, {\em Science},
>                   {\em PNAS}, {\em Neural Networks} and {\em ArXiv}"
> }
>
>
> Please kindly read them, get excited, and ask questions.
>
>
> Best regards,
>
> -John
> --
> Juyang (John) Weng
>
>

-- 
Juyang (John) Weng

