<div dir="ltr">Dear Asim and All,<div> I am happy that Asim responded, giving us all an opportunity to participate interactively in an academic discussion. We can defeat the false "Great Leap Forward".</div><div> During the banquet of July 3, 2024, I tried to explain to Asim why our Developmental Network (DN) trains only a single network, not multiple networks as all other methods do (e.g., neural networks with error backprop, genetic algorithms, and fuzzy sets). (Let me know if there are other methods where one network is optimal and therefore free from the local-minima problem.)</div><div> This single-network property is important because normally every developmental network (genome) must succeed in single-network development, from inception, through birth, to death. </div><div> Post-selection: A human programmer trains multiple (n>1) predictors on a fit set F, and then picks the luckiest predictor based on a validation set (which is in the possession of the programmer). He commits the following two misconducts:<br> Misconduct 1: Cheating in the absence of a test (because the test set T is absent).</div><div> Misconduct 2: Hiding bad-looking data (the other, less lucky predictors).</div><div> A. I told Asim that DN tests its performance from birth to death, across the entire life!</div><div> B. I told Asim that DN does not hide any data because it trains a single brain and reports all its lifetime errors! </div><div> Asim did not read the DN papers that I sent him, or did not read them carefully, especially the proof of the maximum likelihood of DN-1. 
See Weng IJIS 2015, <a href="https://www.scirp.org/journal/paperinformation?paperid=53728">https://www.scirp.org/journal/paperinformation?paperid=53728</a>.</div><div> At the banquet, I told Asim that the representation of DN is "distributed" like the brain's, and that it collectively computes the maximum likelihood representation by every neuron, each using a limited resource and a limited amount of life experience. I told him that every brain is optimal, including his brain, my brain, and Adolf Hitler's brain; however, every brain has a different experience. Asim apparently did not understand me and did not continue to ask what I meant by a "distributed" maximum likelihood representation. Namely, every neuron incrementally computes the maximum likelihood representation of its own competition zone.</div><div> Asim cited an expression for maximum likelihood, implying that every nonlinear objective function has many local minima! That suggests a lack of understanding of my proof in IJIS 2015.</div><div> (1) I told Asim that every (positive) neuron computes its competitors automatically (assisted by its dedicated negative neuron), so that every (positive) neuron has a different set of (positive) neuronal competitors. Because every neuron has a different competition zone, the maximum likelihood representation is distributed. </div><div> (2) Through the distributed computing of all the (limited number of) neurons that work together inside the DN, the DN computes the distributed maximum likelihood representations. Namely, every (positive) neuron computes its maximum likelihood representation incrementally for its unique competition zone. This is proven in IJIS 2015, based on the dual optimality of Lobe Component Analysis (LCA). Through the proof, you can see how LCA converts a highly nonlinear problem into a linear one for each neuron, by defining the observation as a response-weighted input (i.e., dually optimal Hebbian learning). 
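As a simple sketch (my own hypothetical illustration, not code from the IJIS 2015 paper), the point about the converted linear problem is this: a running mean updated as mu_t = mu_{t-1} + (x_t - mu_{t-1})/t reproduces the batch mean exactly, so the computation has a single optimum and nothing to search among.

```python
# Hypothetical sketch (not from the DN papers): the incremental mean
#   mu_t = mu_{t-1} + (x_t - mu_{t-1}) / t
# is an exactly linear update with a single optimum -- there are no
# local minima to get trapped in, unlike general nonlinear fitting.
def incremental_mean(stream):
    mu, t = 0.0, 0
    for x in stream:
        t += 1
        mu += (x - mu) / t  # linear correction toward the new sample
    return mu

data = [2.0, 4.0, 6.0, 8.0]
# The incremental result coincides with the batch mean sum(data)/len(data).
assert incremental_mean(data) == sum(data) / len(data)
```

The same update can be run one sample at a time, which is what "incremental through time" means here: no sample needs to be stored after it is absorbed into the mean.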
Yes, with this beautifully converted linear problem (inspired by the brain), neuronal computation becomes computing an incremental mean through time in every neuron. Therefore, the highly nonlinear problem of computing lobe components becomes a linear one, and we know that there is no local-minima problem in computing the mean of a time sequence. </div><div> (3) As I presented in several of my IJCNN tutorials, neurons in DN start from random weights, but different random weights lead to the same network, because the initial weights only change the neuronal resources, not the resulting network.</div><div> In summary, the equation that Asim listed is for each neuron, but each neuron has a different instance of the expression. There is no search, contrary to what Asim implied (without saying so)! This corresponds to a holistic solution to the 20-million-dollar problem (i.e., the local-minima problem solved by the maximum-likelihood optimality). See <a href="https://ieeexplore.ieee.org/document/9892445">https://ieeexplore.ieee.org/document/9892445</a>.</div><div> However, all other learning algorithms have not solved this local-minima problem. Therefore, they have to resort to trial and error, training many predictors.</div><div> Do you have any more questions?<br> Best regards,</div><div>-John</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jul 4, 2024 at 4:20 PM Asim Roy <<a href="mailto:ASIM.ROY@asu.edu">ASIM.ROY@asu.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="msg4096885990403298170">
<div lang="EN-US" style="overflow-wrap: break-word;">
<div class="m_4096885990403298170WordSection1">
<p class="MsoNormal"><span style="font-size:11pt">Dear All,<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">There’s quite a bit of dishonesty here. John Weng can be accused of the same “misconduct” that he is accusing others of. He didn’t quite disclose what we discussed at the banquet last night. He is hiding all
that. <u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">His basic argument is that we pick the best solution and report results on that basis. In a sense, when you formulate a machine learning problem as an optimization problem, that’s essentially what you are
trying to do – get the best solution and weed out the bad ones. And HE DOES THE SAME IN HIS DEVELOPMENT NETWORK. When I asked him how his DN algorithm learns, he said it uses the maximum likelihood method, which is an old statistical method (</span><a href="https://en.wikipedia.org/wiki/Maximum_likelihood_estimation" target="_blank">Maximum
likelihood estimation - Wikipedia</a>). I quote from Wikipedia:<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p style="margin:0in;background:white"><span style="font-family:Arial,sans-serif;color:rgb(32,33,34);background:yellow">The goal of maximum likelihood estimation is to find the values of the model parameters that
<b><u>maximize the likelihood function over the parameter space</u></b>,</span><sup id="m_4096885990403298170cite_ref-:0_6-0"><span style="font-size:9.5pt;font-family:Arial,sans-serif;color:rgb(32,33,34);background:yellow"><a href="https://en.wikipedia.org/wiki/Maximum_likelihood_estimation#cite_note-:0-6" target="_blank"><span style="text-decoration:none">[6]</span></a></span></sup><span style="font-family:Arial,sans-serif;color:rgb(32,33,34);background:yellow"> that
is<u></u><u></u></span></p>
<p class="MsoNormal" style="margin-left:0.5in;background:white"><span style="font-size:14pt;font-family:'Cambria Math',serif;color:rgb(32,33,34);background:yellow">θ̂ = arg max<sub>θ∈Θ</sub> L<sub>n</sub>(θ; y) .</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">So, by default, HE ALSO HIDES ALL THE BAD SOLUTIONS AND DOESN’T REPORT THEM. He never talks about all of this. He never mentions that I had talked about this in particular.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">I would suggest that, based on his dishonest accusations against others and, in particular, against one of the plenary speakers here at the conference, IEEE take some action against him. This nonsense
has been going on for a long time and it’s time for some action. <u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">By the way, I am not a member of IEEE. I am expressing my opinion only because he has falsely accused me as well and I have had enough of it. I have added Danilo Mandic and Irwin King to the list.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">Thanks,<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt">Asim Roy<u></u><u></u></span></p>
<p class="MsoNormal">Professor, Information Systems<u></u><u></u></p>
<p class="MsoNormal">Arizona State University<u></u><u></u></p>
<p class="MsoNormal"><a href="https://search.asu.edu/profile/9973" target="_blank">Asim Roy | ASU Search</a><u></u><u></u></p>
<p class="MsoNormal"><a href="https://lifeboat.com/ex/bios.asim.roy" target="_blank">Lifeboat Foundation Bios: Professor Asim Roy</a><span style="font-size:11pt"><u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11pt"><u></u> <u></u></span></p>
<div style="border-right:none;border-bottom:none;border-left:none;border-top:1pt solid rgb(225,225,225);padding:3pt 0in 0in">
<p class="MsoNormal"><b><span style="font-size:11pt;font-family:Calibri,sans-serif">From:</span></b><span style="font-size:11pt;font-family:Calibri,sans-serif"> Juyang Weng <<a href="mailto:juyang.weng@gmail.com" target="_blank">juyang.weng@gmail.com</a>>
<br>
<b>Sent:</b> Wednesday, July 3, 2024 5:54 PM<br>
<b>To:</b> Russell T. Harrison <<a href="mailto:r.t.harrison@ieee.org" target="_blank">r.t.harrison@ieee.org</a>><br>
<b>Cc:</b> Akira Hirose <<a href="mailto:ahirose@ee.t.u-tokyo.ac.jp" target="_blank">ahirose@ee.t.u-tokyo.ac.jp</a>>; Hisao Ishibuchi <<a href="mailto:hisao@sustech.edu.cn" target="_blank">hisao@sustech.edu.cn</a>>; Simon See <<a href="mailto:ssee@nvidia.com" target="_blank">ssee@nvidia.com</a>>; Kenji Doya <<a href="mailto:doya@oist.jp" target="_blank">doya@oist.jp</a>>; Robert Kozma <<a href="mailto:rkozma55@gmail.com" target="_blank">rkozma55@gmail.com</a>>; Simon See <<a href="mailto:Simon.CW.See@gmail.com" target="_blank">Simon.CW.See@gmail.com</a>>; Yaochu Jin <<a href="mailto:Yaochu.Jin@surrey.ac.uk" target="_blank">Yaochu.Jin@surrey.ac.uk</a>>;
Xin Yao <<a href="mailto:xiny@sustech.edu.cn" target="_blank">xiny@sustech.edu.cn</a>>; Asim Roy <<a href="mailto:ASIM.ROY@asu.edu" target="_blank">ASIM.ROY@asu.edu</a>>; <a href="mailto:amdnl@lists.cse.msu.edu" target="_blank">amdnl@lists.cse.msu.edu</a><br>
<b>Subject:</b> False "Great Leap Forward" in AI<u></u><u></u></span></p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<div>
<p class="MsoNormal">Dear Asim,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> It is my great pleasure to finally have somebody who argued with me about this important subject. I have attached the summary of this important issue in pdf.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> I alleged widespread false data in AI from the following two misconducts:<br>
Misconduct 1: Cheating in the absence of a test.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> Misconduct 2: Hiding bad-looking data.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> The following is a series of events during WCCI 2024 in Yokohama, Japan.<u></u><u></u></p>
</div>
<p class="MsoNormal">These examples showed that some active researchers in the WCCI community were probably not aware of the severity and urgency of the issue.<br>
July 1, in public view, Robert Kozma denied Simon See of NVIDIA the chance to respond to my question pointing to a false "Great Leap Forward" in AI. <br>
July 1, Kenji Doya suggested something like "let misconduct go ahead without a correction" because the publications are not cited. But he did not yet know that I alleged that AlphaFold, as well as almost all of Google's published deep-learning products,
suffers from the same Post-Selection misconduct. <br>
July 1, Asim Roy said to me "We need to talk" but he did not stay around to talk. We had a long debate during the banquet last night. He seemed to imply that post-selection of a few networks, while hiding the performance information of the entire population,
is "survival of the fittest". He did not seem to agree that the entire human population needs to be taken into account in human evolution, or at least a large number of samples, as in a human census.<br>
July 3, Yaochu Jin did not let me ask questions after a keynote talk. Later he seemed to admit that many people in AI only report the data they like.<u></u><u></u></p>
<div>
<p class="MsoNormal"> July 3, Kalyanmoy Deb said that he just wanted to find a solution using genetic algorithms but did not know that his so-called solution did not have a test at all.<u></u><u></u></p>
<div>
<p class="MsoNormal"> July 1, I saw that all the books on display at the Springer table appear to suffer from Post-Selection misconduct.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> Do we have a false-data-flooded "Great Leap Forward" in AI? Why?<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> I welcome all those interested to discuss this important issue.<br>
Best regards,<br>
-John Weng<br>
<span class="m_4096885990403298170gmailsignatureprefix">-- </span><u></u><u></u></p>
<div>
<div>
<p class="MsoNormal">Juyang (John) Weng<u></u><u></u></p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div></blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr">Juyang (John) Weng<br></div></div>