[CDSNL] False "Great Leap Forward" in AI
Juyang Weng
juyang.weng at gmail.com
Thu Jul 4 11:13:40 EDT 2024
Dear Asim and All,
I am happy that Asim responded and gave us all an opportunity to
participate interactively in an academic discussion. Together we can defeat
the false "Great Leap Forward".
During the banquet on July 3, 2024, I tried to explain to Asim why
our Developmental Network (DN) trains only a single network, not multiple
networks as all other methods do (e.g., neural networks with
error-backprop, genetic algorithms, and fuzzy sets). (Let me know if there
are other methods in which one network is optimal and therefore free from
the local-minima problem.)
This single-network property is important because, normally, every
developmental network (genome) must succeed in single-network development,
from inception through birth to death.
    Post-selection: A human programmer trains multiple (n > 1) predictors
on a fit set F, and then picks the luckiest predictor based on a
validation set (which is in the programmer's possession). He commits the
following two misconducts (sketched in code after this list):
    Misconduct 1: Cheating in the absence of a test (because the test set
T is absent).
    Misconduct 2: Hiding bad-looking data (the other, less lucky predictors).
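    To make the procedure concrete, here is a minimal sketch in Python; the
model family (a scikit-learn MLPClassifier), the synthetic data, and the
split sizes are my illustrative assumptions, not taken from any particular
paper I criticize. It only shows the shape of Post-Selection: train n > 1
predictors on a fit set F and report the one that looks luckiest on a
validation set V, with no test set T and with the other n - 1 predictors
hidden.

# Minimal sketch of Post-Selection (illustrative assumptions only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fit set F and validation set V; note that no held-out test set T exists.
X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

n = 10  # train n > 1 predictors that differ only in random initialization
predictors = [MLPClassifier(hidden_layer_sizes=(32,), random_state=seed,
                            max_iter=300).fit(X_fit, y_fit)
              for seed in range(n)]
val_scores = [p.score(X_val, y_val) for p in predictors]

# Misconduct 1: the number reported below was never measured on a test set T.
# Misconduct 2: only the luckiest predictor is reported; the other n - 1
# predictors (and their scores) are hidden.
best = int(np.argmax(val_scores))
print(f"Reported accuracy (luckiest of {n}): {val_scores[best]:.3f}")
print("Hidden validation accuracies:", [round(s, 3) for s in val_scores])

    Note that the reported number is the maximum of n validation scores, not
an estimate of performance on an absent test set T.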
    A. I told Asim that DN tests its performance from birth to death,
across its entire life!
    B. I told Asim that DN does not hide any data, because it trains a
single brain and reports all its lifetime errors!
    Asim did not read the DN papers that I sent to him, or did not read
them carefully, especially the proof of the maximum-likelihood optimality of
DN-1. See Weng, IJIS 2015,
https://www.scirp.org/journal/paperinformation?paperid=53728.
    At the banquet, I told Asim that the representation of DN is
"distributed" like the brain's, and that it collectively computes the maximum
likelihood representation, with every neuron using a limited resource and a
limited amount of life experience. I told him that every brain is
optimal, including his brain, my brain, and Adolf Hitler's brain.
However, every brain has a different experience. Asim apparently
did not understand me and did not continue to ask what I meant by a
"distributed" maximum likelihood representation. Namely, every neuron
incrementally computes the maximum likelihood representation of its own
competition zone.
    Asim cited an expression for maximum likelihood, implying that
every nonlinear objective function has many local minima! That suggests a
lack of understanding of my proof in IJIS 2015.
(1) I told Asim that every (positive) neuron computes its competitors
automatically (assisted by its dedicated negative neuron), so that every
(positive) neuron has a different set of (positive) neuronal competitors.
Because every neuron has a different competition zone, the maximum
likelihood representation is distributed.
    (2) Through the distributed computing of all the (limited number of)
neurons that work together inside the DN, the DN computes the distributed
maximum likelihood representations. Namely, every (positive) neuron
computes its maximum likelihood representation incrementally for its unique
competition zone. This is proven in IJIS 2015, based on the
dual optimality of Lobe Component Analysis (LCA). Through the proof, you can
see how LCA converts a highly nonlinear problem for each neuron into a
linear problem for each neuron, by defining the observation as a
response-weighted input (i.e., dually optimal Hebbian learning). With
this beautifully converted linear problem (inspired by the brain), neuronal
computation becomes computing an incremental mean through time in every
neuron. Therefore, the highly nonlinear problem of computing lobe components
becomes a linear one. We know that there is no local-minimum problem in
computing the mean of a time sequence (see the sketch after this paragraph).
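    As a minimal sketch only (the top-1 competition rule, the normalization,
and all variable names below are my simplifying assumptions, not the exact
dual-optimal LCA equations of IJIS 2015), the per-neuron computation can be
viewed as an incremental mean of response-weighted inputs: each neuron keeps
its own update count (its age) and amortizes each new observation, so there
is no gradient search and no local minimum to get trapped in.

# Minimal sketch of an incremental, response-weighted (Hebbian-like) mean
# update per neuron. Illustrative simplification, not the exact LCA update.
import numpy as np

def incremental_mean(mean, count, observation):
    """Exact running mean: mean_t = mean_{t-1} + (x_t - mean_{t-1}) / t."""
    count += 1
    return mean + (observation - mean) / count, count

rng = np.random.default_rng(0)
num_neurons, dim = 5, 8
weights = rng.normal(size=(num_neurons, dim))  # random initial weights
ages = np.zeros(num_neurons)                   # per-neuron update counts

for _ in range(1000):
    x = rng.normal(size=dim)             # current input
    responses = weights @ x              # pre-competition responses
    winner = int(np.argmax(responses))   # simplified competition: top-1 winner
    # "Observation" = response-weighted input; the winner's weight vector is
    # the incremental mean of its own observations, a linear problem.
    observation = responses[winner] * x
    weights[winner], ages[winner] = incremental_mean(
        weights[winner], ages[winner], observation)

print("neuronal ages:", ages)

    In this simplified sketch, each neuron's weight vector is always an exact
amortized mean of the observations it has won in its own competition, which
is the sense in which no search over a nonlinear error landscape is involved.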
    (3) As I presented in several of my IJCNN tutorials, neurons in DN
start from random weights, but different random weights lead to the same
network, because the initial weights only determine the assignment of
neuronal resources, not the resulting network.
    In summary, the equation that Asim listed is for each neuron, but each
neuron has a different instance of the expression. There is no search,
contrary to what Asim implied (without saying so)! This corresponds to a
holistic solution to the 20-million-dollar problems (i.e., the local-minima
problem, solved by the maximum-likelihood optimality). See
https://ieeexplore.ieee.org/document/9892445.
    However, all other learning algorithms have not solved this local-minima
problem. Therefore, they have to resort to trial and error by
training many predictors.
Do you have any more questions?
Best regards,
-John
On Thu, Jul 4, 2024 at 4:20 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
> Dear All,
>
>
>
> There’s quite a bit of dishonesty here. John Weng can be accused of the
> same “misconduct” that he is accusing others of. He didn’t quite disclose
> what we discussed at the banquet last night. He is hiding all that.
>
>
>
> His basic argument is that we pick the best solution and report results on
> that basis. In a sense, when you formulate a machine learning problem as an
> optimization problem, that’s essentially what you are trying to do – get
> the best solution and weed out the bad ones. And HE DOES THE SAME IN HIS
> DEVELOPMENT NETWORK. When I asked him how his DN algorithm learns, he said
> it uses the maximum likelihood method, which is an old statistical method (Maximum
> likelihood estimation - Wikipedia
> <https://en.wikipedia.org/wiki/Maximum_likelihood_estimation>). I quote
> from Wikipedia:
>
>
>
> The goal of maximum likelihood estimation is to find the values of the
> model parameters that *maximize the likelihood function over the
> parameter space*,[6] that is
>
>     \hat{\theta} = \underset{\theta \in \Theta}{\operatorname{arg\,max}}\, \mathcal{L}_{n}(\theta\,;\mathbf{y})\,.
>
>
>
> So, by default, HE ALSO HIDES ALL THE BAD SOLUTIONS AND DOESN’T REPORT
> THEM. He never talks about all of this. He never mentions that I had talked
> about this in particular.
>
>
>
> I would suggest, based on his dishonest accusations against others
> and, in particular, against one of the plenary speakers here at the
> conference, that IEEE take some action against him. This nonsense has been
> going on for a long time and it's time for some action.
>
>
>
> By the way, I am not a member of IEEE. I am expressing my opinion only
> because he has falsely accused me also and I have had enough of it. I have
> added Danilo Mandic and Irwin King to the list.
>
>
>
> Thanks,
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Asim Roy | ASU Search <https://search.asu.edu/profile/9973>
>
> Lifeboat Foundation Bios: Professor Asim Roy
> <https://lifeboat.com/ex/bios.asim.roy>
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Wednesday, July 3, 2024 5:54 PM
> *To:* Russell T. Harrison <r.t.harrison at ieee.org>
> *Cc:* Akira Hirose <ahirose at ee.t.u-tokyo.ac.jp>; Hisao Ishibuchi <
> hisao at sustech.edu.cn>; Simon See <ssee at nvidia.com>; Kenji Doya <
> doya at oist.jp>; Robert Kozma <rkozma55 at gmail.com>; Simon See <
> Simon.CW.See at gmail.com>; Yaochu Jin <Yaochu.Jin at surrey.ac.uk>; Xin Yao <
> xiny at sustech.edu.cn>; Asim Roy <ASIM.ROY at asu.edu>; amdnl at lists.cse.msu.edu
> *Subject:* False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> It is my great pleasure to finally have somebody who has argued with me
> about this important subject. I have attached a summary of this
> important issue in PDF.
>
> I have alleged widespread false data in AI arising from the following two
> misconducts:
> Misconduct 1: Cheating in the absence of a test.
>
> Misconduct 2: Hiding bad-looking data.
>
> The following is a series of events during WCCI 2024 in Yokohama, Japan.
>
> These examples showed that some active researchers in the WCCI community
> were probably not aware of the severity and urgency of the issue.
> July 1, in public view, Robert Kozma denied Simon See of NVIDIA the
> chance to respond to my question pointing to a false "Great Leap Forward"
> in AI.
> July 1, Kenji Doya suggested something like "let the misconduct go ahead
> without a correction" because the publications are not cited. But he still
> did not know that I allege that AlphaFold, as well as almost all of
> Google's published deep-learning products, suffers from the same
> Post-Selection misconduct.
> July 1, Asim Roy said to me "We need to talk," but he did not stay
> around to talk. We had a long debate during the banquet last night. He
> seems to imply that post-selecting a few networks and hiding the
> performance information of the entire population is "survival of the
> fittest". He did not seem to agree that all of the billions of humans in
> the population need to be taken into account in human evolution, or at
> least a large number of samples, as in a human census.
> July 3, Yaochu Jin did not let me ask questions after a keynote talk.
> Later he seemed to admit that many people in AI only report the data they
> like.
>
> July 3, Kalyanmoy Deb said that he just wanted to find a solution using
> genetic algorithms but did not know that his so-called solution did not
> have a test at all.
>
> July 1, I saw that all the books on display at the Springer table appear
> to suffer from the Post-Selection misconduct.
>
> Do we have a "Great Leap Forward" in AI flooded with false data? Why?
>
> I welcome all those interested to discuss this important issue.
> Best regards,
> -John Weng
> --
>
> Juyang (John) Weng
>
--
Juyang (John) Weng