[CDSNL] ZOOM Meeting: False "Great Leap Forward" in AI
Juyang Weng
juyang.weng at gmail.com
Sun Nov 17 16:41:02 EST 2024
Dear All,
Following Prof. Howie Choset's suggestion, I would like to call for a
ZOOM meeting:
Join Zoom Meeting: False "Great Leap Forward" in AI
https://us02web.zoom.us/j/7914614617?pwd=bExLV0pIQlBwQ2ZscVNTVEF4T1pTUT09
Meeting ID: 791 461 4617
Passcode: WengZOOM
Time: 12:01pm, EST
Date: Tuesday, Nov. 19, 2024
All are welcome regardless of where you are and how you get this
information. Please spread the word!
Best regards,
-John Weng
On Sun, Nov 17, 2024 at 4:24 PM <choset at andrew.cmu.edu> wrote:
> All – This is a nice conversation, but I think an email back and forth is
> not working. If you want to have a discussion, then let's set up a Zoom call.
> Happy to participate
>
>
>
> Howie
>
>
>
> Howie Choset, Professor http://biorobotics.org
> 412-268-2495
>
> Robotics Institute, Carnegie Mellon 5000 Forbes Ave, Pittsburgh, PA
> 15213
>
> Biomedical, Mechanical, and ECE Peggy: 412-268-7943
>
> *Co-founder*: Medrobotics, Hebi Robotics, Bito Robotics, Omnibus Medical
> Devices,
>
> Latent Robotics, Advanced Robotics For Manufacturing Institute;
>
> *Board Member*: PRN; *Visiting Faculty*: AI Institute
>
>
>
> *From:* Amdnl <amdnl-bounces at lists.cse.msu.edu> *On Behalf Of *Juyang Weng
> *Sent:* Sunday, November 17, 2024 3:55 PM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* amdnl at lists.cse.msu.edu; Akira Horose <ahirose at ee.t.u-tokyo.ac.jp>;
> Simon See <Simon.CW.See at gmail.com>; Russell T. Harrison <
> r.t.harrison at ieee.org>; Hisao Ishibuchi <hisao at sustech.edu.cn>; Ali Minai
> <minaiaa at gmail.com>; Robert Kozma <rkozma55 at gmail.com>; Irwin King <
> irwinking at gmail.com>; Marley Vellasco <marley at ele.puc-rio.br>; Danilo
> Mandic <d.mandic at imperial.ac.uk>; Xin Yao <xiny at sustech.edu.cn>; Kenji
> Doya <doya at oist.jp>; Simon See <ssee at nvidia.com>
> *Subject:* Re: [CDSNL] False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> I refuse to reply to your personal attack: "dishonesty". Otherwise, I
> would violate Robert's Rules of Order, as you did several times before.
>
> I wrote, "Since you are a mathematician, please try to understand the
> Minimum Mean Square Error (MMSE) principle (Theorem 1 and Proof in [11]) in
> the attached preprint titled `Invalidity of the Experimental Protocol in
> Two Nobel Prizes'." From the term "mathematics" and the title `Invalidity
> of the Experimental Protocol in Two Nobel Prizes', you should have addressed
> the mathematical correctness of Theorem 1 (instead of taking it out of
> context). However, you failed to do so.
>
> Best regards,
>
> -John
>
>
>
> On Sun, Nov 17, 2024 at 2:03 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Dear John,
>
>
>
> I wish you would stop this. You started by saying post-selection is not
> part of biology and I pointed out that it’s very much part of biology. I
> didn’t see any biological arguments in the attached note. You knew that
> very well, and sending this paper is part of your dishonesty because it
> switches away from the biological argument.
>
>
>
> Your arguments in that note are also flawed. As I mentioned before, there
> is not much value in reporting all solutions. Think of the Olympics. Each
> country selects their best athletes. You are asking why they don’t report
> on the performance of all the athletes that competed. The data is there and
> each country might have internal use for it, but, in the end, they send
> their best ones. At the Olympics, we award the top three with medals. You
> can ask the Olympic committee to provide the average time for the 100-meter
> dash, but that does not have much value. The data is there and could be of
> interest to someone. But, to the general public, the medal winners
> matter. There’s no “misdeed” or “cheating” or “hiding” in this process.
> The Olympics, in general, is a very fair process. It’s all based on the winners
> that year and under whatever conditions existed that year (temperature,
> weather, etc.).
>
>
>
> I think you are misguided in your arguments both on the statistical side
> and the biological side. Reporting all results may not be of much interest
> in general. I am sure, these competition organizers can disclose that
> information if needed. But there is no “cheating” or “hiding” of anything. The
> rules of these competitions are well-defined.
>
>
>
> If that article was your final argument, then let’s stop emailing at this
> point.
>
>
>
> Best,
>
> Asim
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Sunday, November 17, 2024 9:10 AM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Dongshu Wang (王东署) <wangdongshu at zzu.edu.cn>; Russell T. Harrison <
> r.t.harrison at ieee.org>; Akira Horose <ahirose at ee.t.u-tokyo.ac.jp>; Hisao
> Ishibuchi <hisao at sustech.edu.cn>; Simon See <ssee at nvidia.com>; Kenji Doya
> <doya at oist.jp>; Robert Kozma <rkozma55 at gmail.com>; Simon See <
> Simon.CW.See at gmail.com>; Yaochu Jin <Yaochu.Jin at surrey.ac.uk>; Xin Yao <
> xiny at sustech.edu.cn>; amdnl at lists.cse.msu.edu; Danilo Mandic <
> d.mandic at imperial.ac.uk>; Irwin King <irwinking at gmail.com>; Jose Principe
> <principe at cnel.ufl.edu>; Marley Vellasco <marley at ele.puc-rio.br>; Ali
> Minai <minaiaa at gmail.com>; Kim Plunkett <kim.plunkett at psy.ox.ac.uk>
> *Subject:* Re: False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> We must not discuss issues out of context.
>
> You wrote, "When we report results of various algorithms, we are all
> aware that we report the best results." This is out of context. To report
> the behavior of an algorithm, we must report the distribution of its
> behavior, NOT the luckiest result.
>
> Since you are a mathematician, please try to understand the Minimum
> Mean Square Error (MMSE) principle (Theorem 1 and Proof in [11]) in the
> attached preprint titled "Invalidity of the Experimental Protocol in Two
> Nobel Prizes".
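In symbols, the MMSE principle invoked here amounts to a standard fact, sketched below only as an illustration (the preprint's Theorem 1 may formulate it differently): for a random performance score X over random training runs, the constant report that minimizes the mean square error is the mean of X, not its maximum over n runs.

```latex
% Sketch of the MMSE fact behind the argument (standard result).
% For a random performance score X and a constant report c:
%   E[(X - c)^2] = Var(X) + (E[X] - c)^2,
% so the unique minimizer is c = E[X].
\[
  \operatorname*{arg\,min}_{c \in \mathbb{R}} \; \mathbb{E}\!\left[(X - c)^2\right] \;=\; \mathbb{E}[X].
\]
% Hence the distribution's mean, not the luckiest of n draws,
% is the minimum-mean-square-error summary of performance.
```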
>
> Best regards,
>
> -John
>
>
>
> On Sat, Nov 16, 2024 at 3:57 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Dear John,
>
>
>
> I wish I could stop this nonsense argument. When we report results of
> various algorithms, we are all aware that we report the best results.
> That’s accepted in science. Biological systems also go through trial-and-error
> processes, learn from them, and then pick and use the best process. That’s
> how we operate as biological systems. We don’t always report our failures,
> and I think you are calling this “not reporting” cheating and hiding. Of
> course, we as individuals, in that sense, do cheat and hide.
>
>
>
> You are trying to say that in our papers, we should be reporting the bad
> results. That’s not a necessity and has never been a necessity. I come from an
> optimization background and have always reported the best results found by
> an algorithm. Even if we had reported the bad results, there is no value in
> it.
>
>
>
> Hope you can carry on your arguments within your community without
> accusing others of “cheating” and “misdeeds.” If we see real misdeeds in
> reporting results, we have a fairly good system to protect against that.
> There are many recent cases.
>
>
>
> Again, please carry on your nonsense discussion within your community that
> subscribes to these views. No need to come to a conference and loudly accuse
> people of “misdeeds.” I think I speak for many.
>
>
>
> Best,
>
> Asim
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Saturday, November 16, 2024 1:24 PM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Dongshu Wang (王东署) <wangdongshu at zzu.edu.cn>; Russell T. Harrison <
> r.t.harrison at ieee.org>; Akira Horose <ahirose at ee.t.u-tokyo.ac.jp>; Hisao
> Ishibuchi <hisao at sustech.edu.cn>; Simon See <ssee at nvidia.com>; Kenji Doya
> <doya at oist.jp>; Robert Kozma <rkozma55 at gmail.com>; Simon See <
> Simon.CW.See at gmail.com>; Yaochu Jin <Yaochu.Jin at surrey.ac.uk>; Xin Yao <
> xiny at sustech.edu.cn>; amdnl at lists.cse.msu.edu; Danilo Mandic <
> d.mandic at imperial.ac.uk>; Irwin King <irwinking at gmail.com>; Jose Principe
> <principe at cnel.ufl.edu>; Marley Vellasco <marley at ele.puc-rio.br>; Ali
> Minai <minaiaa at gmail.com>; Jay McClelland <jlmcc at stanford.edu>; Kim
> Plunkett <kim.plunkett at psy.ox.ac.uk>
> *Subject:* Re: False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> You shallowly mentioned the term "biology", but you still offer no
> biological substance.
>
> I said (1) cheating, (2) hiding and (3) exaggeration are not what
> biology does.
>
> (a) You wrote, "Think about natural selection and survival of the
> fittest." Not applicable. For example, go back to my example about
> Adolf Hitler. Adolf Hitler lived; he merely died from an unnatural
> cause. We must report Adolf Hitler, must not hide him, and must not
> pretend that he is not in human statistics.
>
> (b) You wrote, "Think about trying out different strategies for doing
> something and then selecting the best." Not applicable; at least you did
> not give any biological evidence that biology does (1) cheating, (2) hiding,
> and (3) exaggeration (e.g., at the gene level). At the agent level, a
> human is conscious enough to do (1) cheating, (2) hiding, and (3)
> exaggeration, but the facts at the agent level of biology (including
> failure cases) must be reported.
>
> (c) You wrote, "Think about learning from past failures." Not
> applicable. Whatever he learns, he cannot (1) cheat, (2) hide, or (3)
> exaggerate the mean of his prediction accuracy. Again, learning from
> Hitler's failure does not mean that you can hide his case and not report
> about him.
>
> I give you a hint: As I wrote with Jay McClelland and K. Plunkett
> (CCed) in ``Convergent Approaches to the Understanding of Autonomous Mental
> Development,'' the editorial for the Special Issue on Autonomous Mental
> Development in the IEEE Transactions on Evolutionary Computation, vol. 18,
> no. 2, 2007: in none of the evolutionary works we had seen then (and it
> seems still true 17 years later) do evolutionary methods perform
> development. Learning happens in development (which DNs do), not in meiosis
> (look it up).
>
>
>
> Dear Xin Yao,
>
> Have you done development (e.g., each life must succeed in lifetime
> learning from birth to death)?
>
> Best regards,
>
> -John
>
>
>
>
>
>
>
> On Fri, Nov 15, 2024 at 9:43 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Dear John,
>
>
>
> I should not be replying to your email. I did mention last time that I
> have no interest in continuing this nonsense dialogue. And I did provide you
> with some examples to justify the biological basis of post-selection. Read
> them carefully. I can add to them. Think about natural selection and
> survival of the fittest. Think about trying out different strategies for
> doing something and then selecting the best. Think about learning from past
> failures. They are all about post-selection, rejecting the bad solutions
> and picking the best. And this is at the individual level. That’s biology.
>
>
>
> As I stated before, I have no interest in continuing this nonsense
> discussion. And I have no idea who is listening to you and subscribes to
> your views. You are free to continue your discussion within your community.
>
>
>
> Best,
>
> Asim
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Thursday, November 14, 2024 7:59 PM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Dongshu Wang (王东署) <wangdongshu at zzu.edu.cn>; Russell T. Harrison <
> r.t.harrison at ieee.org>; Akira Horose <ahirose at ee.t.u-tokyo.ac.jp>; Hisao
> Ishibuchi <hisao at sustech.edu.cn>; Simon See <ssee at nvidia.com>; Kenji Doya
> <doya at oist.jp>; Robert Kozma <rkozma55 at gmail.com>; Simon See <
> Simon.CW.See at gmail.com>; Yaochu Jin <Yaochu.Jin at surrey.ac.uk>; Xin Yao <
> xiny at sustech.edu.cn>; amdnl at lists.cse.msu.edu; Danilo Mandic <
> d.mandic at imperial.ac.uk>; Irwin King <irwinking at gmail.com>; Jose Principe
> <principe at cnel.ufl.edu>; Marley Vellasco <marley at ele.puc-rio.br>; Ali
> Minai <minaiaa at gmail.com>
> *Subject:* Re: False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> I am trying to include your last response in the attached newsletter,
> but I cannot because it does not have substance.
>
> You wrote, "John knows fully well that he is falsely accusing others of
> “cheating” and “misdeeds” when post-selection has a biological basis."
>
> But you do not have any substance to substantiate your single-sentence
> claim.
>
> Post-selection does not have a biological basis, because biology (1)
> does not cheat, (2) does not hide, and (3) does not exaggerate prediction
> accuracy.
>
> You will receive the upcoming newsletter Vol. 18, No. 4, 2024 with more
> details about (1), (2) and (3) if you subscribe to the Newsletter. Let me
> know if you cannot find the subscription site.
>
> Best regards,
>
> -John
>
>
>
>
>
> On Wed, Jul 17, 2024 at 7:21 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Dear John,
>
>
>
> This is your typical dishonesty while you accuse others of “cheating” and
> “misdeeds.” I sent the attached reply to you on July 14 and it could have
> been easily included in your newsletter. Everyone knows that the newsletter
> is online and can be easily amended to include this reply, which is
> attached. So, I request Dongshu Wang to include this in Issue No. 3 because
> it has continuity with the other arguments in that issue. I have no
> interest in starting another nonsense dialogue that you mention.
>
>
>
> To All:
>
>
>
> John knows fully well that he is falsely accusing others of “cheating” and
> “misdeeds” when post-selection has a biological basis.
>
>
>
> Best,
>
> Asim
>
>
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Wednesday, July 17, 2024 2:24 PM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Dongshu Wang (王东署) <wangdongshu at zzu.edu.cn>; Russell T. Harrison <
> r.t.harrison at ieee.org>; Akira Horose <ahirose at ee.t.u-tokyo.ac.jp>; Hisao
> Ishibuchi <hisao at sustech.edu.cn>; Simon See <ssee at nvidia.com>; Kenji Doya
> <doya at oist.jp>; Robert Kozma <rkozma55 at gmail.com>; Simon See <
> Simon.CW.See at gmail.com>; Yaochu Jin <Yaochu.Jin at surrey.ac.uk>; Xin Yao <
> xiny at sustech.edu.cn>; amdnl at lists.cse.msu.edu; Danilo Mandic <
> d.mandic at imperial.ac.uk>; Irwin King <irwinking at gmail.com>; Jose Principe
> <principe at cnel.ufl.edu>; Marley Vellasco <marley at ele.puc-rio.br>; Ali
> Minai <minaiaa at gmail.com>
> *Subject:* Re: False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> 1. The Newsletter should not be altered after its publication on July
> 16, 2024.
>
> 2. I did not have time to read your previous email as a formal review
> for Issue Vol. 18, No. 3 before its publication either. You changed
> my mind too late.
>
> 3. Please consider submitting an [AI Crisis] Dialogue for Issue Vol.
> 18, No. 4 instead.
>
> Best regards,
>
> -John
>
>
>
> On Wed, Jul 17, 2024 at 4:47 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Dear John,
>
>
>
> Please have Dongshu Wang publish my last reply in your just published
> newsletter. It refutes your claim that post-selection is not a biological
> process. And all the other nonsense claims about the need to publish
> non-optimal solutions.
>
>
>
> The newsletter is online and my last note (attached) can be easily added
> to it and a notification sent to all your subscribers about my last reply.
> Otherwise, I will consider it a dishonesty on your part.
>
>
>
> Best,
>
> Asim
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Tuesday, July 16, 2024 11:53 AM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Russell T. Harrison <r.t.harrison at ieee.org>; Akira Horose <
> ahirose at ee.t.u-tokyo.ac.jp>; Hisao Ishibuchi <hisao at sustech.edu.cn>;
> Simon See <ssee at nvidia.com>; Kenji Doya <doya at oist.jp>; Robert Kozma <
> rkozma55 at gmail.com>; Simon See <Simon.CW.See at gmail.com>; Yaochu Jin <
> Yaochu.Jin at surrey.ac.uk>; Xin Yao <xiny at sustech.edu.cn>;
> amdnl at lists.cse.msu.edu; Danilo Mandic <d.mandic at imperial.ac.uk>; Irwin
> King <irwinking at gmail.com>; Jose Principe <principe at cnel.ufl.edu>; Marley
> Vellasco <marley at ele.puc-rio.br>; Ali Minai <minaiaa at gmail.com>
> *Subject:* Re: False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> Sorry, this response is too late for Vol. 18, No. 3, 2024. You wrote
> that you would not respond anymore.
>
> I saw it just now after publishing No. 3. See CDS TC Newsletter Vol.
> 18, No. 3, 2024
> <https://www.cse.msu.edu/amdtc/amdnl/CDSNL-V18-N3.pdf>
>
> I suggest that you compose the material as a formal [AI Crisis]
> Dialogue and submit it to me (the dialogue initiator) with a CC to EIC
> Dongshu Wang. Otherwise, I will include it in Issue Vol. 18, No. 4, 2024,
> which will appear in Nov. 2024.
>
> Best regards,
>
> -John
>
>
>
> On Mon, Jul 15, 2024 at 12:10 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Dear John,
>
>
>
> This will be my last response to the issues you have raised. Since you are
> posting my responses in your newsletter, please post them without any
> changes. I am going to be selective in my response since I don’t think the
> rest of your arguments matter that much.
>
>
>
> 1)
>
> *Asim Roy wrote*, "In fact, there is plenty of evidence in biology that
> it can create new circuits and reuse old circuits and cells/neurons. Thus,
> throwing out bad solutions happens in biology too."
>
> *John Weng’s response*: *This is irrelevant*, as your mother is not
> inside your skull, but a human programmer is doing that inside the "skull."
>
> *Asim Roy response*: By saying “this is irrelevant,” you are admitting
> that “throwing out bad solutions happens in biology too." You have not
> contested that claim. If throwing out bad solutions happens in biology,
> there is nothing wrong in replicating that process in post-hoc selection of
> good solutions. It is similar to a biological process. Post-hoc selection
> is the main issue in all of your arguments and I think you should apologize
> to all for making the false accusation that the post-hoc selection process
> doesn’t have a biological basis.
>
>
>
> 2)
>
> *Asim Roy wrote*, "I still recall Horace Barlow’s ... note to me on the
> grandmother cell theory: ... though I fear that what I have written will
> not be universally accepted, at least at first!”.
>
> *John Weng’s response*: If you understand DN3, the first model for
> conscious learning that starts from a single cell, you will see how the
> grandmother cell theory is naive.
>
> *Asim Roy response*: The existence of grandmother-type cells will not be
> proven by any mathematical model, least of all by your DN3 model. The
> existence will be proven by further neurophysiological studies. By the way,
> I challenge you to create the kind of abstract cells like the Jennifer
> Aniston cell with your development network DN3. Take a look at the
> concept cell findings (Jennifer Aniston cells). Here’s from Reddy and
> Thorpe (2014)
> <https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2020.00059/full#B6>:
> “concept cells have the *meaning* of a given stimulus in a manner that is
> *invariant* to different representations of that stimulus.” Can you
> replicate that phenomenon in DN3?
>
>
>
> 3)
>
> The following arguments are about such basics of optimization that it would
> be silly to try to respond to them in front of such scholars in the field.
>
>
>
> *Asim Roy:* "he does use an optimization method to weed out bad
> solutions."
>
> *John Weng: *This is false. DN does not weed out bad solutions, since
> it has only one solution.
>
> *Asim’s Response*: Just imagine, he claims he finds a globally optimal
> solution in a complex network without weeding out bad solutions. That is
> almost magical.
>
>
>
> *Asim Roy:* "In optimization, we only report the best solution."
>
> *John Weng:* This is misconduct, if you hide bad-looking data, like
> hiding all other students in your class.
>
> *Asim’s Response*: My god, how do I respond to this!!! That’s what we do
> in the optimization field. No one ever told me or anyone else that
> reporting the best solution is misconduct.
>
>
>
> *Asim Roy:* "There is no requirement to report any non-optimal
> solutions."
>
> *John Weng:* This is not true for scientific papers and business reports.
>
> *Asim’s Response*: Again, how do I respond to that? We do this all the
> time.
>
>
>
> *Asim Roy:* "If someone is doing part of the optimization manually,
> post-hoc, there is nothing wrong with that either."
>
> *John Weng*: This is false because the so-called post-hoc solution did
> not have a test!
>
> *Asim’s Response*: For Imagenet and other competitions, there is always
> an independent test set. When we create our own data, we do
> cross-validation and other kinds of random training and testing. What is he
> talking about?
>
>
>
> John, this is my last response. You can post it in your newsletter, but
> without any changes. I will not respond anymore. I think I have responded
> to your fundamental argument, that post-selection is non-biological. It is
> indeed biological and you have admitted that.
>
>
>
> Thanks,
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Asim Roy | ASU Search <https://search.asu.edu/profile/9973>
>
> Lifeboat Foundation Bios: Professor Asim Roy
> <https://lifeboat.com/ex/bios.asim.roy>
>
>
>
>
>
>
>
>
>
>
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Friday, July 5, 2024 6:23 PM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Russell T. Harrison <r.t.harrison at ieee.org>; Akira Horose <
> ahirose at ee.t.u-tokyo.ac.jp>; Hisao Ishibuchi <hisao at sustech.edu.cn>;
> Simon See <ssee at nvidia.com>; Kenji Doya <doya at oist.jp>; Robert Kozma <
> rkozma55 at gmail.com>; Simon See <Simon.CW.See at gmail.com>; Yaochu Jin <
> Yaochu.Jin at surrey.ac.uk>; Xin Yao <xiny at sustech.edu.cn>;
> amdnl at lists.cse.msu.edu; Danilo Mandic <d.mandic at imperial.ac.uk>; Irwin
> King <irwinking at gmail.com>; Jose Principe <principe at cnel.ufl.edu>; Marley
> Vellasco <marley at ele.puc-rio.br>
> *Subject:* Re: False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> Thank you for your response; it gives everyone on this email list a
> chance to benefit. The subject is very new. I can raise these
> misconducts because we have a holistic solution to the 20
> million-dollar problems.
>
> You wrote, "he does use an optimization method to weed out bad
> solutions." This is false. DN does not weed out bad solutions, since it
> has only one solution.
>
> You wrote, "In optimization, we only report the best solution." This
> is misconduct, if you hide bad-looking data, like hiding all other students
> in your class.
>
> You wrote, "There is no requirement to report any non-optimal
> solutions." This is not true for scientific papers and business reports.
>
> You wrote, "If someone is doing part of the optimization manually,
> post-hoc, there is nothing wrong with that either." This is false because
> the so-called post-hoc solution did not have a test!
>
> You wrote, "In fact, there is plenty of evidence in biology that it
> can create new circuits and reuse old circuits and cells/neurons. Thus,
> throwing out bad solutions happens in biology too." This is irrelevant,
> as your mother is not inside your skull, but a human programmer is doing
> that inside the "skull."
>
> You wrote, "at a higher level, there’s natural selection and survival
> of the fittest. So, building many solutions (networks) and picking the best
> fits well with biology." As I wrote before, this is false, since biology
> has built Adolf Hitler and many German soldiers who acted during the Second
> World War. We report them; we do not hide them.
>
> You wrote, "John calls this process 'cheating' and a 'misdeed'."
> Yes, I still do.
>
> You wrote, "he claims his algorithm gets the globally optimal
> solution, doesn’t get stuck in any local minima." This is true, since we
> do not have a single objective function as you assumed. Such a single
> objective function is a restricted environment or government. Instead,
> the maximum likelihood computation in DN is conducted in a distributed way
> by all neurons, each of them having its own maximum likelihood mechanism
> (an optimal Hebbian mechanism). Read the book: Juyang Weng, Natural and
> Artificial Intelligence, available on Amazon.
>
> You wrote, "If that is true, he should get far better results than the
> folks who are “cheating” through post-selection." Of course; we did, as
> early as 2016. See "Luckiest from Post vs Single DN" in the attached
> file 2024-06-30-IJCNN-Tutorial-1page.pdf. Furthermore, the luckiest
> result from the cheating is only a fitting error on the validation set (not
> a test error), while the single DN's is a test error, because DN does not
> fit the validation set. The latter should not be compared with the former,
> but we compared with them anyway.
>
> You wrote, "My hunch is, his algorithm falls short and can’t compete
> with the other ones." Your hunch is wrong; see above how wrong you are.
> DN is a lot better than even the inflated performance.
>
> You wrote, "And that’s the reason for this outrage against others."
> I am honest. All others should be honest too. Do not cheat like many
> Chinese did during the Great Leap Forward.
>
> You wrote, "I would again urge IEEE to take action against John Weng
> for harassing plenary speakers at this conference and accusing them of
> 'misdeeds.'" I am simply trying to exercise my freedom of speech, driven
> by my care for our community.
>
> Do you all see a "Great Leap Forward in AI" like the "Great Leap
> Forward" in 1958 in China?
>
> Best regards,
>
> -John
>
>
>
> On Fri, Jul 5, 2024 at 9:01 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Dear All,
>
>
>
> Without getting into the details of his DN algorithm, he does use an
> optimization method to weed out bad solutions. In optimization, we only
> report the best solution. There is no requirement to report any non-optimal
> solutions. If someone is doing part of the optimization manually, post-hoc,
> there is nothing wrong with that either. In fact, there is plenty of
> evidence in biology that it can create new circuits and reuse old circuits
> and cells/neurons. Thus, throwing out bad solutions happens in biology too.
> And, of course, at a higher level, there’s natural selection and survival
> of the fittest. So, building many solutions (networks) and picking the best
> fits well with biology. However, John calls this process “cheating” and a
> “misdeed.” He also claims to have a strong background in biology. So he
> should be aware of these processes.
>
>
>
> In addition, he claims his algorithm gets the globally optimal solution,
> doesn’t get stuck in any local minima. If that is true, he should get far
> better results than the folks who are “cheating” through post-selection. He
> should be able to demonstrate his superior solutions through the public
> competitions such as with Imagenet data. My hunch is, his algorithm falls
> short and can’t compete with the other ones. And that’s the reason for this
> outrage against others.
>
>
>
> I would again urge IEEE to take action against John Weng for harassing
> plenary speakers at this conference and accusing them of “misdeeds.”
>
>
>
> Best,
>
> Asim
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Thursday, July 4, 2024 8:14 AM
> *To:* Asim Roy <ASIM.ROY at asu.edu>
> *Cc:* Russell T. Harrison <r.t.harrison at ieee.org>; Akira Horose <
> ahirose at ee.t.u-tokyo.ac.jp>; Hisao Ishibuchi <hisao at sustech.edu.cn>;
> Simon See <ssee at nvidia.com>; Kenji Doya <doya at oist.jp>; Robert Kozma <
> rkozma55 at gmail.com>; Simon See <Simon.CW.See at gmail.com>; Yaochu Jin <
> Yaochu.Jin at surrey.ac.uk>; Xin Yao <xiny at sustech.edu.cn>;
> amdnl at lists.cse.msu.edu; Danilo Mandic <d.mandic at imperial.ac.uk>; Irwin
> King <irwinking at gmail.com>
> *Subject:* Re: False "Great Leap Forward" in AI
>
>
>
> Dear Asim and All,
>
> I am happy that Asim responded, giving us all an opportunity to
> participate interactively in an academic discussion. We can defeat the
> false "Great Leap Forward".
>
> During the banquet of July 3, 2024, I was trying to explain to Asim why
> our Developmental Network (DN) only trains a single network, not multiple
> networks as all other methods do (e.g., neural networks with
> error-backprop, genetic algorithms, and fuzzy sets). (Let me know if there
> are other methods where one network is optimal and therefore is free from
> the local minima problem.)
>
> This single-network property is important because normally every
> developmental network (genome) must succeed in single-network development,
> from inception to birth, to death.
>
> Post-selection: A human programmer trains multiple (n>1) predictors
> based on a fit set F, and then picks the luckiest predictor based on a
> validation set (which is in the possession of the programmer). He commits
> the following two misconducts:
> Misconduct 1: Cheating in the absence of a test (because the test set
> T is absent).
>
> Misconduct 2: Hiding bad-looking data (other less lucky predictors).
>
> A. I told Asim that DN tests its performance from birth to death,
> across the entire life!
>
> B. I told Asim that DN does not hide any data because it trains a
> single brain and reports all its lifetime errors!
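The post-selection protocol described above can be simulated in a few lines (an illustrative sketch with made-up accuracy numbers, not DN and not any real experiment): n predictors of equal true skill differ only by luck, and picking the luckiest one on the validation set reports a score that the distribution's mean does not support.

```python
import random

def train_predictor(seed):
    # Stand-in for training one predictor on the fit set F: every predictor
    # has the same true skill, but its measured accuracy fluctuates by luck.
    rng = random.Random(seed)
    true_skill = 0.70
    val_acc = true_skill + rng.uniform(-0.05, 0.05)   # luck on the validation set
    test_acc = true_skill + rng.uniform(-0.05, 0.05)  # independent draw, centered on true skill
    return val_acc, test_acc

results = [train_predictor(seed) for seed in range(20)]  # n = 20 predictors

# Post-selection: report only the predictor that looked luckiest on validation.
best_val, best_test = max(results, key=lambda r: r[0])

# Distribution: report the mean over all trained predictors.
mean_val = sum(val for val, _ in results) / len(results)

print(f"luckiest validation accuracy: {best_val:.3f}")   # near the top of the luck range
print(f"same predictor, unseen test:  {best_test:.3f}")
print(f"mean validation accuracy:     {mean_val:.3f}")   # close to true skill 0.70
```

Because the maximum of n noisy scores sits at the top of the luck range, the "luckiest" number overstates the skill that an independent test would measure.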
>
> Asim did not read our DN papers that I sent to him, or did not read
> them carefully, especially the proof of the maximum likelihood of DN-1.
> See Weng IJIS 2015,
> https://www.scirp.org/journal/paperinformation?paperid=53728.
>
> At the banquet, I told Asim that the representation of DN is
> "distributed" like the brain's, and that it collectively computes the
> maximum likelihood representation by every neuron using a limited resource
> and a limited amount of life experience. I told him that every brain is
> optimal, including his brain, my brain, and Adolf Hitler's brain.
> However, every brain has a different experience. Asim apparently
> did not understand me and did not continue to ask what I meant by
> "distributed" maximum likelihood representation. Namely, every neuron
> incrementally computes the maximum likelihood representation of its own
> competition zone.
>
> Asim gave an expression about the maximum likelihood implying that
> every nonlinear objective function has many local minima! That suggests a
> lack of understanding of my proof in IJIS 2015.
>
> (1) I told Asim that every (positive) neuron computes its competitors
> automatically (assisted by its dedicated negative neuron), so that every
> (positive) neuron has a different set of (positive) neuronal competitors.
> Because every neuron has a different competition zone, the maximum
> likelihood representation is distributed.
>
> (2) Through the distributed computing by all (limited number of)
> neurons that work together inside the DN, the DN computes the distributed
> maximum likelihood representations. Namely, every (positive) neuron
> computes its maximum likelihood representation incrementally for its unique
> competition zone. This was proven in IJIS 2015, based on the
> dual optimality of Lobe Component Analysis (LCA). Through the proof, you can
> see how LCA converts a highly nonlinear problem for each neuron into a
> linear one, by defining the observation as a
> response-weighted input (i.e., dually optimal Hebbian learning). Yes, with
> this beautifully converted linear problem (inspired by the brain), the
> computation in every neuron reduces to an incremental mean through
> time. Therefore, a highly nonlinear problem of computing lobe components
> becomes a linear one, and we know there is no local-minima problem in
> computing the mean of a time sequence.
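[Editor's note: the claim above, that an incremental mean reaches the same answer as the batch mean with no iterative search, can be illustrated with a minimal sketch. This is not Weng's LCA code; the "response-weighted input" stream here is synthetic data standing in for the quantity his argument describes.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stream standing in for per-neuron "response-weighted inputs".
samples = rng.normal(loc=3.0, scale=1.0, size=(1000, 4))

# Incremental mean: w_n = w_{n-1} + (x_n - w_{n-1}) / n,
# updated one observation at a time, as in online learning.
w = np.zeros(4)
for n, x in enumerate(samples, start=1):
    w += (x - w) / n

# The incremental estimate equals the batch mean (up to float rounding):
# no iterative search over a loss surface, hence no local minima to hide.
assert np.allclose(w, samples.mean(axis=0))
```

The identity holds for any input stream, which is the sense in which the converted per-neuron problem is linear.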
>
> (3) As I presented in several of my IJCNN tutorials, neurons in DN
> start from random weights, but different random weights lead to the same
> network, because the initial weights only change the neuronal resources,
> but not the resulting network.
>
> In summary, the equation that Asim listed is for each neuron, but each
> neuron has a different instance of the expression. There is no search,
> contrary to what Asim implied (without saying so)! This corresponds to a
> holistic solution to the 20-million-dollar problem (i.e., the local-minima
> problem solved by the maximum-likelihood optimality). See
> https://ieeexplore.ieee.org/document/9892445
>
> However, all other learning algorithms have not solved this
> local-minima problem. Therefore, they have to resort to trial and
> error, training many predictors.
>
> Do you have any more questions?
> Best regards,
>
> -John
>
>
>
> On Thu, Jul 4, 2024 at 4:20 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Dear All,
>
>
>
> There’s quite a bit of dishonesty here. John Weng can be accused of the
> same “misconduct” that he is accusing others of. He didn’t quite disclose
> what we discussed at the banquet last night. He is hiding all that.
>
>
>
> His basic argument is that we pick the best solution and report results on
> that basis. In a sense, when you formulate a machine learning problem as an
> optimization problem, that’s essentially what you are trying to do – get
> the best solution and weed out the bad ones. And HE DOES THE SAME IN HIS
> DEVELOPMENT NETWORK. When I asked him how his DN algorithm learns, he said
> it uses the maximum likelihood method, which is an old statistical method
> (Maximum likelihood estimation - Wikipedia:
> https://en.wikipedia.org/wiki/Maximum_likelihood_estimation).
> I quote from Wikipedia:
>
>
>
> The goal of maximum likelihood estimation is to find the values of the
> model parameters that *maximize the likelihood function over the
> parameter space* [6], that is
>
> θ̂ = argmax_{θ ∈ Θ} L_n(θ; y) .
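[Editor's note: a minimal numerical illustration of the quoted definition, added for clarity and attributable to neither correspondent. For Gaussian data with known unit variance, maximizing the likelihood over the parameter space Θ is equivalent to minimizing a sum of squares, and the arg max coincides with the sample mean.]

```python
import numpy as np

y = np.array([2.1, 1.9, 2.4, 2.0, 1.6])

def neg_log_likelihood(theta, y):
    # Gaussian model with unit variance: -log L_n(theta; y), up to a constant.
    return 0.5 * np.sum((y - theta) ** 2)

# Evaluate the likelihood over a grid of candidate parameters Theta
# and take the arg max (here, arg min of the negative log-likelihood).
grid = np.linspace(0.0, 4.0, 4001)
theta_hat = grid[np.argmin([neg_log_likelihood(t, y) for t in grid])]

# For this model the closed-form MLE is the sample mean.
assert abs(theta_hat - y.mean()) < 1e-3
```

Whether an MLE has one maximum or many depends on the model: this Gaussian case is convex with a unique maximizer, whereas general nonlinear models need not be.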
>
>
>
> So, by default, HE ALSO HIDES ALL THE BAD SOLUTIONS AND DOESN’T REPORT
> THEM. He never talks about all of this. He never mentions that I had talked
> about this in particular.
>
>
>
> I would suggest that, based on his dishonest accusations against others
> and, in particular, against one of the plenary speakers here at the
> conference, IEEE take some action against him. This nonsense has been
> going on for a long time and it's time for some action.
>
>
>
> By the way, I am not a member of IEEE. I am expressing my opinion only
> because he has falsely accused me also and I have had enough of it. I have
> added Danilo Mandic and Irwin King to the list.
>
>
>
> Thanks,
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Asim Roy | ASU Search <https://search.asu.edu/profile/9973>
>
> Lifeboat Foundation Bios: Professor Asim Roy
> <https://lifeboat.com/ex/bios.asim.roy>
>
>
>
> *From:* Juyang Weng <juyang.weng at gmail.com>
> *Sent:* Wednesday, July 3, 2024 5:54 PM
> *To:* Russell T. Harrison <r.t.harrison at ieee.org>
> *Cc:* Akira Horose <ahirose at ee.t.u-tokyo.ac.jp>; Hisao Ishibuchi <
> hisao at sustech.edu.cn>; Simon See <ssee at nvidia.com>; Kenji Doya <
> doya at oist.jp>; Robert Kozma <rkozma55 at gmail.com>; Simon See <
> Simon.CW.See at gmail.com>; Yaochu Jin <Yaochu.Jin at surrey.ac.uk>; Xin Yao <
> xiny at sustech.edu.cn>; Asim Roy <ASIM.ROY at asu.edu>; amdnl at lists.cse.msu.edu
> *Subject:* False "Great Leap Forward" in AI
>
>
>
> Dear Asim,
>
> It is my great pleasure to finally have somebody who argued with me
> about this important subject. I have attached the summary of this
> important issue in pdf.
>
> I alleged widespread false data in AI arising from the following two
> misconducts:
> Misconduct 1: Cheating in the absence of a test.
>
> Misconduct 2: Hiding bad-looking data.
>
> The following is a series of events during WCCI 2024 in Yokohama, Japan.
> These examples showed that some active researchers in the WCCI community
> were probably not aware of the severity and urgency of the issue.
> July 1, in public view, Robert Kozma denied Simon See of NVIDIA the
> chance to respond to my question pointing to a false "Great Leap Forward"
> in AI.
> July 1, Kenji Doya suggested something like "let the misconduct go ahead
> without a correction" because the publications are not cited. But he still
> did not know that I alleged that AlphaFold, as well as almost all of
> Google's published deep-learning products, suffers from the same
> Post-Selection misconduct.
> July 1, Asim Roy said to me, "We need to talk," but he did not stay
> around to talk. I had a long debate with him during the banquet last
> night. He seemed to imply that post-selection of a few networks, while
> hiding the performance information of the entire population, is "survival
> of the fittest." He did not seem to agree that all 3 billion humans need
> to be taken into account in human evolution, or at least a large number
> of samples, as in a human census.
> July 3, Yaochu Jin did not let me ask questions after a keynote talk.
> Later he seemed to admit that many people in AI only report the data they
> like.
>
> July 3, Kalyanmoy Deb said that he just wanted to find a solution using
> genetic algorithms, but he did not know that his so-called solution did
> not have a test at all.
>
> July 1, I saw that all the books on display at the Springer table appear
> to suffer from the Post-Selection misconduct.
>
> Do we have a false-data-flooded "Great Leap Forward" in AI? Why?
>
> I welcome all those interested to discuss this important issue.
> Best regards,
> -John Weng
> --
>
> Juyang (John) Weng
>
--
Juyang (John) Weng