[CDSNL] False "Great Leap Forward" in AI
Juyang Weng
juyang.weng at gmail.com
Tue Jul 16 14:59:37 EDT 2024
Dear All,
This is the first time in the newsletter's history that a third issue has
appeared within a single year!
The new issue Vol. 18, No. 3 is now available!
CDS TC Newsletter Vol. 18, No. 3, 2024
<https://www.cse.msu.edu/amdtc/amdnl/CDSNL-V18-N3.pdf>
IEEE CDS NEWSLETTERS
Volume 18, Number 3 ISSN 1550-1914 July 2024
Development of Natural and Artificial Intelligence
Contents
1 [AI Crisis] Dialogue 1: False “Great Leap Forward” in AI 2
2 [AI Crisis] Dialogue 2: Your DN Must Also Have Done Post-Selection 4
3 [AI Crisis] Dialogue 3: Why DN Does Not Do Post-Selection 5
4 [AI Crisis] Dialogue 4: Post-Selection in Biology and Show Comparison 7
5 [AI Crisis] Dialogue 5: Point-to-Point Replies to Dialogue 4 8
6 [AI Crisis] Dialogue 6: Cool Things a Bit and Present Views Openly 11
7 [AI Crisis] Dialogue 7: Let This AI Crisis Dialogue Continue 12
8 [Judicial Crisis] Dialogue Initiation: Do They Represent U.S. Courts? 13
9 The Dawn of Artificial Intelligence: Alan Turing and Early AI Research 18
10 IEEE TCDS Table of Contents 19
--
Juyang (John) Weng
On Tue, Jul 16, 2024 at 2:52 PM Juyang Weng <juyang.weng at gmail.com> wrote:
> Dear Asim,
> Sorry, this response is too late for Vol. 18, No. 3, 2024. You wrote
> that you would not respond anymore.
> I saw it just now after publishing No. 3. See CDS TC Newsletter Vol.
> 18, No. 3, 2024 <https://www.cse.msu.edu/amdtc/amdnl/CDSNL-V18-N3.pdf>
> I suggest that you compose the material as a formal [AI Crisis]
> Dialogue and submit it to me (the dialogue initiator) with a CC to EIC
> Dongshu Wang. Otherwise, I will include it in Vol. 18, No. 4, 2024,
> which will appear in Nov. 2024.
> Best regards,
> -John
>
> On Mon, Jul 15, 2024 at 12:10 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
>> Dear John,
>>
>>
>>
>> This will be my last response to the issues you have raised. Since you
>> are posting my responses in your newsletter, please post them without any
>> changes. I am going to be selective in my response since I don’t think the
>> rest of your arguments matter that much.
>>
>>
>>
>> 1)
>>
>> *Asim Roy wrote*, "In fact, there is plenty of evidence in biology that
>> it can create new circuits and reuse old circuits and cells/neurons. Thus,
>> throwing out bad solutions happens in biology too."
>>
>> *John Weng’s response*: *This is irrelevant*, as your mother is not
>> inside your skull, but a human programmer is doing that inside the "skull."
>>
>> *Asim Roy response*: By saying “this is irrelevant,” you are admitting
>> that “throwing out bad solutions happens in biology too." You have not
>> contested that claim. If throwing out bad solutions happens in biology,
>> there is nothing wrong in replicating that process through post-hoc
>> selection of good solutions. It is similar to a biological process.
>> Post-hoc selection is the main issue in all of your arguments, and I think
>> you should apologize to all for making the false accusation that the
>> post-hoc selection process doesn’t have a biological basis.
>>
>>
>>
>> 2)
>>
>> *Asim Roy wrote*, "I still recall Horace Barlow’s ... note to me on the
>> grandmother cell theory: ... though I fear that what I have written will
>> not be universally accepted, at least at first!”.
>>
>> *John Weng’s response*: If you understand DN3, the first model for
>> conscious learning that starts from a single cell, you will see how the
>> grandmother cell theory is naive.
>>
>> *Asim Roy response*: The existence of grandmother-type cells will not be
>> proven by any mathematical model, least of all by your DN3 model. The
>> existence will be proven by further neurophysiological studies. By the way,
>> I challenge you to create the kind of abstract cells like the Jennifer
>> Aniston cell with your development network DN3. Take a look at the
>> concept cell findings (Jennifer Aniston cells). Here’s from Reddy and
>> Thorpe (2014)
>> <https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2020.00059/full#B6>:
>> concept cells have “*meaning* of a given stimulus in a manner that is
>> *invariant* to different representations of that stimulus.” Can you
>> replicate that phenomenon in DN3?
>>
>>
>>
>> 3)
>>
>> The following arguments concern such basic aspects of optimization that it
>> seems silly to try to respond to them in front of such scholars in the
>> field.
>>
>>
>>
>> *Asim Roy:* "he does use an optimization method to weed out bad
>> solutions."
>>
>> *John Weng: *This is false. DN does not weed out bad solutions, since
>> it has only one solution.
>>
>> *Asim’s Response*: Just imagine, he claims he finds a globally optimal
>> solution in a complex network without weeding out bad solutions. That is
>> almost magical.
>>
>>
>>
>> *Asim Roy:* "In optimization, we only report the best solution."
>>
>> *John Weng:* This is misconduct, if you hide bad-looking data, like
>> hiding all other students in your class.
>>
>> *Asim’s Response*: My god, how do I respond to this!!! That’s what we do
>> in the optimization field. No one ever told me or anyone else that
>> reporting the best solution is misconduct.
>>
>>
>>
>> *Asim Roy:* "There is no requirement to report any non-optimal
>> solutions."
>>
>> *John Weng:* This is not true for scientific papers and business reports.
>>
>> *Asim’s Response*: Again, how do I respond to that? We do this all the
>> time.
>>
>>
>>
>> *Asim Roy:* "If someone is doing part of the optimization manually,
>> post-hoc, there is nothing wrong with that either."
>>
>> *John Weng*: This is false because the so-called post-hoc solution did
>> not have a test!
>>
>> *Asim’s Response*: For ImageNet and other competitions, there is always
>> an independent test set. When we create our own data, we do
>> cross-validation and other kinds of random training and testing. What is he
>> talking about?
>>
>>
>>
>> John, this is my last response. You can post it in your newsletter, but
>> without any changes. I will not respond anymore. I think I have responded
>> to your fundamental argument, that post-selection is non-biological. It is
>> indeed biological and you have admitted that.
>>
>>
>>
>> Thanks,
>>
>> Asim Roy
>>
>> Professor, Information Systems
>>
>> Arizona State University
>>
>> Asim Roy | ASU Search <https://search.asu.edu/profile/9973>
>>
>> Lifeboat Foundation Bios: Professor Asim Roy
>> <https://urldefense.com/v3/__https:/lifeboat.com/ex/bios.asim.roy__;!!IKRxdwAv5BmarQ!aCvWF-PEaRtFT0lr5G-TVd1WSX7BloN_D524nbIUhctg9BC609q63-E91LYTCtXzoEQMZbkc5gnl53le6QZXPE1y$>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *From:* Juyang Weng <juyang.weng at gmail.com>
>> *Sent:* Friday, July 5, 2024 6:23 PM
>> *To:* Asim Roy <ASIM.ROY at asu.edu>
>> *Cc:* Russell T. Harrison <r.t.harrison at ieee.org>; Akira Horose <
>> ahirose at ee.t.u-tokyo.ac.jp>; Hisao Ishibuchi <hisao at sustech.edu.cn>;
>> Simon See <ssee at nvidia.com>; Kenji Doya <doya at oist.jp>; Robert Kozma <
>> rkozma55 at gmail.com>; Simon See <Simon.CW.See at gmail.com>; Yaochu Jin <
>> Yaochu.Jin at surrey.ac.uk>; Xin Yao <xiny at sustech.edu.cn>;
>> amdnl at lists.cse.msu.edu; Danilo Mandic <d.mandic at imperial.ac.uk>; Irwin
>> King <irwinking at gmail.com>; Jose Principe <principe at cnel.ufl.edu>;
>> Marley Vellasco <marley at ele.puc-rio.br>
>> *Subject:* Re: False "Great Leap Forward" in AI
>>
>>
>>
>> Dear Asim,
>>
>> Thank you for your response, so that people on this email list can
>> benefit. The subject is very new. I can raise these misconduct issues
>> because we have a holistic solution to the 20 million-dollar problems.
>>
>> You wrote, "he does use an optimization method to weed out bad
>> solutions." This is false. DN does not weed out bad solutions, since it
>> has only one solution.
>>
>> You wrote, "In optimization, we only report the best solution." This
>> is misconduct, if you hide bad-looking data, like hiding all other students
>> in your class.
>>
>> You wrote, "There is no requirement to report any non-optimal
>> solutions." This is not true for scientific papers and business reports.
>>
>> You wrote, "If someone is doing part of the optimization manually,
>> post-hoc, there is nothing wrong with that either." This is false because
>> the so-called post-hoc solution did not have a test!
>>
>> You wrote, "In fact, there is plenty of evidence in biology that it
>> can create new circuits and reuse old circuits and cells/neurons. Thus,
>> throwing out bad solutions happens in biology too." This is irrelevant,
>> as your mother is not inside your skull, but a human programmer is doing
>> that inside the "skull."
>>
>> You wrote, "at a higher level, there’s natural selection and survival
>> of the fittest. So, building many solutions (networks) and picking the best
>> fits well with biology." As I wrote before, this is false, since biology
>> has built Aldof Hitler and many German soldiers who acted during the Second
>> World War. We report them, not hiding them.
>>
>> You wrote, "John calls this process `cheating' and a `misdeed".”
>> Yes, I still do.
>>
>> You wrote, "he claims his algorithm gets the globally optimal
>> solution, doesn’t get stuck in any local minima." This is true, since we
>> do not have a single objective function as you assumed. Such a single
>> objective function is a restricted environment or government. Instead,
>> the maximum likelihood computation in DN is conducted in a distributed way
>> by all neurons, each of them having its own maximum likelihood mechanism
>> (optimal Hebbain mechanism). Read a book, Juyang Weng, Natural and
>> Artificial Intelligence, available at Amonzon.
>>
>> You wrote, "If that is true, he should get far better results than
>> the folks who are “cheating” through post-selection." Off course, we did
>> as early as 2016. See "Luckiest from Post vs Single DN" in the attached
>> file 2024-06-30-IJCNN-Tutorial-1page.pdf. Furthermore, the luckiest
>> from the cheating is only a fitting error on the validation set (not test),
>> the single DN is a test error because DN does not fit the validation set.
>> The latter should not be compared with the former, but we compared with
>> them anyway.
>>
>> You wrote, "My hunch is, his algorithm falls short and can’t
>> compete with the other ones." Your hunch is wrong. See above as you
>> can see how wrong you are. DN is a lot better than even the false
>> performance.
>>
>> You wrote, "And that’s the reason for this outrage against others."
>> I am honest. All others should be honest too. Do not cheat like many
>> Chinese in the Great Leap Forward.
>>
>> You wrote, "I would again urge IEEE to take action against John
>> Weng for harassing plenary speakers at this conference and accusing them of
>> “misdeeds.” I am simply trying to exercise my freedom of speech driven
>> by my care for our community.
>>
>> Do you all see a "Great Leap Forward in AI" like the "Great Leap
>> Forward" in 1958 in China?
>>
>> Best regards,
>>
>> -John
>>
>>
>>
>> On Fri, Jul 5, 2024 at 9:01 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>>
>> Dear All,
>>
>>
>>
>> Without getting into the details of his DN algorithm, he does use an
>> optimization method to weed out bad solutions. In optimization, we only
>> report the best solution. There is no requirement to report any non-optimal
>> solutions. If someone is doing part of the optimization manually, post-hoc,
>> there is nothing wrong with that either. In fact, there is plenty of
>> evidence in biology that it can create new circuits and reuse old circuits
>> and cells/neurons. Thus, throwing out bad solutions happens in biology too.
>> And, of course, at a higher level, there’s natural selection and survival
>> of the fittest. So, building many solutions (networks) and picking the best
>> fits well with biology. However, John calls this process “cheating” and a
>> “misdeed.” He also claims to have a strong background in biology. So he
>> should be aware of these processes.
>>
>>
>>
>> In addition, he claims his algorithm gets the globally optimal solution,
>> doesn’t get stuck in any local minima. If that is true, he should get far
>> better results than the folks who are “cheating” through post-selection. He
>> should be able to demonstrate his superior solutions through public
>> competitions such as those with ImageNet data. My hunch is, his algorithm falls
>> short and can’t compete with the other ones. And that’s the reason for this
>> outrage against others.
>>
>>
>>
>> I would again urge IEEE to take action against John Weng for harassing
>> plenary speakers at this conference and accusing them of “misdeeds.”
>>
>>
>>
>> Best,
>>
>> Asim
>>
>>
>>
>> *From:* Juyang Weng <juyang.weng at gmail.com>
>> *Sent:* Thursday, July 4, 2024 8:14 AM
>> *To:* Asim Roy <ASIM.ROY at asu.edu>
>> *Cc:* Russell T. Harrison <r.t.harrison at ieee.org>; Akira Horose <
>> ahirose at ee.t.u-tokyo.ac.jp>; Hisao Ishibuchi <hisao at sustech.edu.cn>;
>> Simon See <ssee at nvidia.com>; Kenji Doya <doya at oist.jp>; Robert Kozma <
>> rkozma55 at gmail.com>; Simon See <Simon.CW.See at gmail.com>; Yaochu Jin <
>> Yaochu.Jin at surrey.ac.uk>; Xin Yao <xiny at sustech.edu.cn>;
>> amdnl at lists.cse.msu.edu; Danilo Mandic <d.mandic at imperial.ac.uk>; Irwin
>> King <irwinking at gmail.com>
>> *Subject:* Re: False "Great Leap Forward" in AI
>>
>>
>>
>> Dear Asim and All,
>>
>> I am happy that Asim responded, giving us all an opportunity
>> to participate interactively in an academic discussion. We can defeat the
>> false "Great Leap Forward".
>>
>> During the banquet of July 3, 2024, I was trying to explain to Asim
>> why our Developmental Network (DN) only trains a single network, not
>> multiple networks as all other methods do (e.g., neural networks with
>> error-backprop, genetic algorithms, and fuzzy sets). (Let me know if there
>> are other methods where one network is optimal and therefore is free from
>> the local minima problem.)
>>
>> This single-network property is important because normally every
>> developmental network (genome) must succeed in single-network development,
>> from inception to birth, to death.
>>
>> Post-selection: A human programmer trains multiple (n > 1) predictors
>> based on a fit set F, and then picks the luckiest predictor based on a
>> validation set (which is in the possession of the programmer). He commits
>> the following two misconducts:
>> Misconduct 1: Cheating in the absence of a test (because the test
>> set T is absent).
>>
>> Misconduct 2: Hiding bad-looking data (the other, less lucky predictors).
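>>
>> To make the protocol concrete, here is a minimal, hypothetical sketch of
>> post-selection as described above (scikit-learn names, an invented
>> candidate grid; not anyone's actual experiment). Note that no test set T
>> appears anywhere in it:
>>
>>     # Post-selection: train n > 1 predictors on the fit set F,
>>     # then report only the one that looks best on the validation set V.
>>     import numpy as np
>>     from sklearn.datasets import make_classification
>>     from sklearn.linear_model import LogisticRegression
>>     from sklearn.model_selection import train_test_split
>>
>>     X, y = make_classification(n_samples=600, random_state=0)
>>     X_fit, X_val, y_fit, y_val = train_test_split(
>>         X, y, test_size=0.3, random_state=0)
>>
>>     candidates = [LogisticRegression(C=c, max_iter=1000).fit(X_fit, y_fit)
>>                   for c in (0.01, 0.1, 1.0, 10.0)]
>>     val_scores = [m.score(X_val, y_val) for m in candidates]
>>
>>     best = candidates[int(np.argmax(val_scores))]  # the "luckiest" predictor
>>     print("reported accuracy:", max(val_scores))   # the losers are never reported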
>>
>> A. I told Asim that DN tests its performance from birth to death,
>> across the entire life!
>>
>> B. I told Asim that DN does not hide any data because it trains a
>> single brain and reports all its lifetime errors!
>>
>> Asim did not read our DN papers that I sent to him, or did not read
>> them carefully, especially the proof of the maximum-likelihood optimality
>> of DN-1. See Weng IJIS 2015,
>> https://www.scirp.org/journal/paperinformation?paperid=53728.
>>
>> At the banquet, I told Asim that the representation of DN is
>> "distributed" like the brain's, and that the DN collectively computes the
>> maximum likelihood representation by every neuron, each using a limited
>> resource and a limited amount of life experience. I told him that every
>> brain is optimal, including his brain, my brain, and Adolf Hitler's brain.
>> However, every brain has a different experience. Asim apparently did not
>> understand me and did not continue to ask what I meant by a
>> "distributed" maximum likelihood representation. Namely, every neuron
>> incrementally computes the maximum likelihood representation of its own
>> competition zone.
>>
>> Asim gave an expression for maximum likelihood, implying that
>> every nonlinear objective function has many local minima! That seems to
>> reflect a lack of understanding of my proof in IJIS 2015.
>>
>> (1) I told Asim that every (positive) neuron computes its competitors
>> automatically (assisted by its dedicated negative neuron), so that every
>> (positive) neuron has a different set of (positive) neuronal competitors.
>> Because every neuron has a different competition zone, the maximum
>> likelihood representation is distributed.
>>
>> (2) Through the distributed computing of all the (limited number of)
>> neurons that work together inside the DN, the DN computes the distributed
>> maximum likelihood representations. Namely, every (positive) neuron
>> computes its maximum likelihood representation incrementally for its unique
>> competition zone. This is proven in IJIS 2015, based on the
>> dual-optimality of Lobe Component Analysis (LCA). Through the proof, you
>> can see how LCA converts a highly nonlinear problem for each neuron into a
>> linear problem for each neuron, by defining the observation as a
>> response-weighted input (i.e., dually-optimal Hebbian learning). Yes, with
>> this beautifully converted linear problem (inspired by the brain), neuronal
>> computation becomes computing an incremental mean through time in every
>> neuron. Therefore, a highly nonlinear problem of computing lobe components
>> becomes a linear one. We know that there is no local-minima problem in
>> computing the mean of a time sequence.
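>>
>> To illustrate only the "incremental mean through time" idea in point (2)
>> above, here is a simplified sketch with invented competition and weighting
>> details; it is not the published LCA or DN algorithm. Each winning neuron
>> updates its weight vector as a running mean of response-weighted inputs,
>> which involves no gradient descent and no local-minima search:
>>
>>     import numpy as np
>>
>>     rng = np.random.default_rng(0)
>>     dim, n_neurons, n_steps = 16, 5, 1000
>>
>>     W = rng.standard_normal((n_neurons, dim))  # random initial weights
>>     counts = np.zeros(n_neurons)               # per-neuron update counts
>>
>>     for _ in range(n_steps):
>>         x = rng.standard_normal(dim)
>>         x /= np.linalg.norm(x)
>>         # Each neuron responds; the winner stands in for its competition zone.
>>         responses = W @ x / (np.linalg.norm(W, axis=1) + 1e-12)
>>         k = int(np.argmax(responses))
>>         counts[k] += 1
>>         t = counts[k]
>>         # Incremental mean of response-weighted inputs:
>>         # w_t = ((t - 1) / t) * w_{t-1} + (1 / t) * r_t * x_t
>>         W[k] = ((t - 1.0) / t) * W[k] + (1.0 / t) * responses[k] * x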
>>
>> (3) As I presented in several of my IJCNN tutorials, neurons in DN
>> start from random weights, but different random weights lead to the same
>> network, because the initial weights only change the neuronal resources,
>> but not the resulting network.
>>
>> In summary, the equation that Asim listed is for each neuron, but
>> each neuron has a different instance of the expression. There is
>> no search, contrary to what Asim implied (without saying so)! This
>> corresponds to a holistic solution to the 20 million-dollar problems
>> (i.e., the local minima problem is solved by the maximum-likelihood
>> optimality). See https://ieeexplore.ieee.org/document/9892445
>>
>> However, all other learning algorithms have not solved this local
>> minima problem. Therefore, they have to resort to trial and error by
>> training many predictors.
>>
>> Do you have any more questions?
>> Best regards,
>>
>> -John
>>
>>
>>
>> On Thu, Jul 4, 2024 at 4:20 PM Asim Roy <ASIM.ROY at asu.edu> wrote:
>>
>> Dear All,
>>
>>
>>
>> There’s quite a bit of dishonesty here. John Weng can be accused of the
>> same “misconduct” that he is accusing others of. He didn’t quite disclose
>> what we discussed at the banquet last night. He is hiding all that.
>>
>>
>>
>> His basic argument is that we pick the best solution and report results
>> on that basis. In a sense, when you formulate a machine learning problem as
>> an optimization problem, that’s essentially what you are trying to do – get
>> the best solution and weed out the bad ones. And HE DOES THE SAME IN HIS
>> DEVELOPMENT NETWORK. When I asked him how his DN algorithm learns, he said
>> it uses the maximum likelihood method, which is an old statistical method (Maximum
>> likelihood estimation - Wikipedia
>> <https://urldefense.com/v3/__https:/en.wikipedia.org/wiki/Maximum_likelihood_estimation__;!!IKRxdwAv5BmarQ!aCvWF-PEaRtFT0lr5G-TVd1WSX7BloN_D524nbIUhctg9BC609q63-E91LYTCtXzoEQMZbkc5gnl53le6RquWOgs$>).
>> I quote from Wikipedia:
>>
>>
>>
>> The goal of maximum likelihood estimation is to find the values of the
>> model parameters that *maximize the likelihood function over the
>> parameter space*,[6] that is
>>
>>     \hat{\theta} = \underset{\theta \in \Theta}{\operatorname{arg\,max}}\; \mathcal{L}_{n}(\theta; \mathbf{y}) .
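>>
>> As a standard textbook instance of this definition (not taken from either
>> side of this exchange): for i.i.d. samples y_1, ..., y_n from a Gaussian
>> with unknown mean theta and known variance sigma^2, the log-likelihood is
>> concave in theta, so the unique maximizer is found by setting the
>> derivative to zero:
>>
>>     \frac{\partial}{\partial \theta} \log \mathcal{L}_{n}(\theta; \mathbf{y})
>>         = \frac{1}{\sigma^{2}} \sum_{i=1}^{n} (y_{i} - \theta) = 0
>>     \quad\Longrightarrow\quad
>>     \hat{\theta} = \frac{1}{n} \sum_{i=1}^{n} y_{i} .
>>
>> In such a concave case there is a single closed-form solution and no
>> candidate solutions to discard; whether a given model's likelihood is that
>> well behaved is part of what is being disputed in this thread.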
>>
>>
>>
>> So, by default, HE ALSO HIDES ALL THE BAD SOLUTIONS AND DOESN’T REPORT
>> THEM. He never talks about all of this. He never mentions that I had talked
>> about this in particular.
>>
>>
>>
>> I would suggest, based on his dishonest accusations against others
>> and, in particular, against one of the plenary speakers here at the
>> conference, that IEEE take some action against him. This nonsense has been
>> going on for a long time and it’s time for some action.
>>
>>
>>
>> By the way, I am not a member of IEEE. I am expressing my opinion only
>> because he has falsely accused me also and I have had enough of it. I have
>> added Danilo Mandic and Irwin King to the list.
>>
>>
>>
>> Thanks,
>>
>> Asim Roy
>>
>> Professor, Information Systems
>>
>> Arizona State University
>>
>> Asim Roy | ASU Search <https://search.asu.edu/profile/9973>
>>
>> Lifeboat Foundation Bios: Professor Asim Roy
>> <https://urldefense.com/v3/__https:/lifeboat.com/ex/bios.asim.roy__;!!IKRxdwAv5BmarQ!aCvWF-PEaRtFT0lr5G-TVd1WSX7BloN_D524nbIUhctg9BC609q63-E91LYTCtXzoEQMZbkc5gnl53le6QZXPE1y$>
>>
>>
>>
>> *From:* Juyang Weng <juyang.weng at gmail.com>
>> *Sent:* Wednesday, July 3, 2024 5:54 PM
>> *To:* Russell T. Harrison <r.t.harrison at ieee.org>
>> *Cc:* Akira Horose <ahirose at ee.t.u-tokyo.ac.jp>; Hisao Ishibuchi <
>> hisao at sustech.edu.cn>; Simon See <ssee at nvidia.com>; Kenji Doya <
>> doya at oist.jp>; Robert Kozma <rkozma55 at gmail.com>; Simon See <
>> Simon.CW.See at gmail.com>; Yaochu Jin <Yaochu.Jin at surrey.ac.uk>; Xin Yao <
>> xiny at sustech.edu.cn>; Asim Roy <ASIM.ROY at asu.edu>;
>> amdnl at lists.cse.msu.edu
>> *Subject:* False "Great Leap Forward" in AI
>>
>>
>>
>> Dear Asim,
>>
>> It is my great pleasure to finally have somebody argue with me
>> about this important subject. I have attached a summary of this
>> important issue as a PDF.
>>
>> I allege widespread false data in AI arising from the following two
>> misconducts:
>> Misconduct 1: Cheating in the absence of a test.
>>
>> Misconduct 2: Hiding bad-looking data.
>>
>> The following is a series of events during WCCI 2024 in Yokohama, Japan.
>>
>> These examples showed that some active researchers in the WCCI community
>> were probably not aware of the severity and urgency of the issue.
>>
>> July 1: In public view, Robert Kozma blocked the chance for Simon See
>> of NVIDIA to respond to my question pointing to a false "Great Leap
>> Forward" in AI.
>>
>> July 1: Kenji Doya suggested something like "let the misconduct go ahead
>> without a correction" because the publications are not cited. But he still
>> did not know that I allege that AlphaFold, as well as almost all of
>> Google's published deep-learning products, suffers from the same
>> Post-Selection misconduct.
>>
>> July 1: Asim Roy said to me "We need to talk" but he did not stay
>> around to talk. I had a long debate with him during the banquet last night.
>> He seems to imply that post-selection of a few networks, while hiding the
>> performance information of the entire population, is "survival of the
>> fittest." He did not seem to agree that all 3 billion humans need to be
>> taken into account in human evolution, or at least a large number of
>> samples, as in a human census.
>>
>> July 3: Yaochu Jin did not let me ask questions after a keynote talk.
>> Later he seemed to admit that many people in AI only report the data they
>> like.
>>
>> July 3: Kalyanmoy Deb said that he just wanted to find a solution using
>> genetic algorithms, but did not know that his so-called solution did not
>> have a test at all.
>>
>> July 1: I saw that all the books on display at the Springer table appear
>> to suffer from the Post-Selection misconduct.
>>
>> Do we have a "Great Leap Forward" in AI flooded with false data? Why?
>>
>> I welcome all those interested to discuss this important issue.
>> Best regards,
>> -John Weng
>> --
>>
>> Juyang (John) Weng
>>
>>
>>
>>
>> --
>>
>> Juyang (John) Weng
>>
>>
>>
>>
>> --
>>
>> Juyang (John) Weng
>>
>
>
> --
> Juyang (John) Weng
>
--
Juyang (John) Weng