<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Dear Colleagues:<br>
<br>
Happy New Year!<br>
<br>
BMI is pleased to announce the first Artificial Intelligence Machine
Learning (AIML) Contest in the BMI summer 2016 program. See below.<br>
<br>
<a
href="http://www.brain-mind-institute.org/program-summer-2016.html"><b>The
First Artificial Intelligence Machine Learning (AIML) Contest</b></a><br>
<a
href="http://www.brain-mind-institute.org/program-summer-2016.html"><b>BMI
Summer School and the International Conference on
Brain-Mind 2016</b></a><br>
<p><strong>Important dates</strong>: <br>
Monday March 14, 2016: recommendation of learning engines<br>
Monday April 11, 2016: deadline for advance registration of
contest entries<br>
Monday April 11, 2016: deadline for applications for BMI
course-program admission<br>
Monday April 25, 2016: deadline for late registration of contest
entries <br>
Monday April 25, 2016: deadline for BMI course registration <br>
May 30 - June 17, 2016: distance learning course for the first
three weeks <br>
June 20 - July 8, 2016: distance learning course for the second
three weeks<br>
July 11 - July 29, 2016: distance learning course for the third
three weeks <br>
Aug. 1 - Aug. 14, 2016: workshops (free for all registered
players, distance or on site) <br>
Monday, August 15, 2016: performance runs by contest entries due by
noon<br>
August 20-21, 2016: ICDL 2016: Contest announcements, sponsor
awards, and contest presentations (on site and webcast)</p>
<p align="center"><strong>The 1st AIML Contest</strong> </p>
<p>Terms such as artificial intelligence, machine learning,
robotics, signal processing, control, dynamic systems, data
mining, big data, and brain projects have often had different
emphases, but the related disciplines are converging. The Artificial
Intelligence Machine Learning (AIML) Contest serves as a
converging platform for all related disciplines and beyond. It is
open to, but not limited to, all researchers, practitioners,
students, and investors. The main goal of the Contest is to
promote understanding of natural and artificial intelligence
beyond the currently popular pattern classification. The Contest
aims to address major learning mechanisms in natural and
artificial intelligence, including perception, cognition, behavior,
and motivation, as they occur in cluttered real-world environments.
Attention, segmentation, emergence of spatiotemporal
representations, and incremental scaffolding are part of each
life-long learning stream. </p>
<p>The major characteristics of this contest include:<br>
(1) Inspiration from learning by natural brains, such as
grounding, emergence, natural inputs, incremental learning,
real-time and online operation, attention, motivation, and
abstraction from raw sensorimotor data. <br>
(2) General-purpose learning engines. Learning engines will be
made available to participants, and the Contest is open to
additional learning engines. The providers of learning engines are
free to provide assistance to participants, such as courses,
tutorials, and workshops. <br>
(3) Training-and-testing sensorimotor streams will be provided to
the participants. Each frame of a stream contains a sensory
vector and a motoric vector (a minimal sketch of such a frame
follows this list). Training and testing are mixed in the streams,
so that learning systems can perform scaffolding: simpler skills
learned earlier are automatically selected and used for learning
more complex skills later.<br>
(4) Major AI challenges will be tested, including vision,
audition, language understanding, and autonomous thinking. <br>
(5) The Contest is open to investors, charities, governments, and
industrial supporters who would like to contribute award funds and
provide assistance for their learning engines. </p>
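<p>For concreteness, here is a minimal Python sketch of one time frame
of such a sensorimotor stream, as described in item (3); the class and
field names are illustrative assumptions, not the official data
format.</p>
<pre>
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    x: np.ndarray    # sensory vector (e.g., image pixels, audio samples)
    z: np.ndarray    # motoric vector (action/state concepts)
    z_is_free: bool  # True if Z is marked * (free): a testing frame

def as_stream(frames):
    # Yield (x, z, z_is_free) triples in temporal order i = 0, 1, 2, ...
    for f in frames:
        yield f.x, f.z, f.z_is_free
</pre>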
<p>Rules: Each contest entry is uniquely identified by the
name of the entry system. A system is developed by a team
consisting of one or more members. A person can participate in
one or more teams. Although the format of the supplied streams is
meant for incremental learning, in this first year of the contest
we allow teams to use either framewise-incremental or
block-incremental learning, but the block size must be reported
for the Contest. During block-incremental learning, the system
takes a block of b consecutive frames at a time, updates the
system, and then discards the block. Framewise incremental
learning has a block size of b = 1 frame. Each system can also run
each training stream a few times as practice (epochs). The number
of epochs must also be reported for the Contest. Entries are
submitted via the Internet, so no travel is required. The Contest
will provide a software interface for training-and-testing.
Organizers of the contest are ineligible to be team members of any
entry. </p>
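<p>A minimal Python sketch of the block-incremental protocol described
above, assuming a hypothetical learner object with an update() method
and a stream given as a list of (x, z, z_is_free) triples; the official
Contest Interface may differ.</p>
<pre>
def run_entry(learner, frames, b=1, epochs=1):
    # Block-incremental learning over a training-and-testing stream.
    # b = 1 is framewise incremental learning; b > 1 buffers b consecutive
    # frames, updates the system once, then discards the block, as the
    # rules describe.  Both b and epochs must be reported for the Contest.
    for epoch in range(epochs):        # practice runs over the stream
        block = []
        for x, z, z_is_free in frames:
            block.append((x, z, z_is_free))
            if len(block) == b:
                learner.update(block)  # one update per block of b frames
                block = []             # the block is then discarded
        if block:                      # flush a final partial block
            learner.update(block)
</pre>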
<p>The International Conference on Brain-Mind (ICBM) 2016 will
feature the Contest score announcement, sponsor awards, and team
presentations. <br>
<br>
Criteria of performance, in the following priority of importance (1
is the highest): <br>
(1) average error rates over all test points during epoch e, e =
1, 2, ... (a short sketch follows this list)<br>
(2) the block size is as small as possible to reach a
state-of-the-art error rate.<br>
(3) the number of practice epochs is as small as possible to reach a
state-of-the-art error rate. <br>
(4) the size of the network is as small as possible to reach a
state-of-the-art error rate.<br>
<br>
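</p>
<p>To make Criterion (1) concrete, here is a short Python sketch of how
the average error rate over the test points of one epoch could be
computed; the names and data layout are illustrative assumptions, not
the official scoring code.</p>
<pre>
def average_error_rate(predictions, truths, free_mask):
    # Criterion (1): average error rate over all test points in one epoch.
    # predictions and truths are per-frame motoric vectors (numpy arrays);
    # free_mask[i] is True where frame i was marked * (free), i.e., a test
    # point.  Only those frames are scored.
    errors = total = 0
    for pred, truth, is_free in zip(predictions, truths, free_mask):
        if is_free:
            total += 1
            errors += int((pred != truth).any())  # any wrong component counts
    return errors / total if total else 0.0
</pre>
<p>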
Within each stream, the following five types of substreams (each
contains multiple tasks and subtasks, skills and subskills) will
be trained and tested on, but teams are not told which type each
substream is. It is a violation of the contest rules to manually
browse through a stream to find out which type it is. The
Contest software will record all the training and testing data. <br>
<br>
Type 1: Spatially non-attentive and non-temporal streams: many
components of a sensory frame are related to the next motoric
frame (e.g., the object of interest almost fills the entire image
and the next motoric frame contains the object type).
Non-temporal here means that a single frame is sufficient to
decide the next motor frame. This is similar to monolithic
pattern classification (e.g., image classification). But past
experience is useful for later learning within the same
training-and-testing stream. <br>
<br>
Type 2: Spatially attentive and non-temporal streams: a relatively
small number of components of a sensory frame are related to the
next motoric frames (e.g., the car to be recognized and detected
is in a large cluttered street scene where the next motoric frames
should contain the location, type, and scale of the attended car).
Type 2 is a spatial generalization of Type 1. This is like
object recognition and detection from cluttered dynamic scenes
conducted concurrently (where the next motoric frames provide
desired actions). Each sensory frame is not segmented but
internal automatic segmentation needs to be learned. Namely,
skills to find which image patch is related to the action in the
motoric frame need to be gradually learned from earlier learning
and refined in later learning within the same stream. The early
attention skills can be learned from the motor vector (supervised
learning) and/or through reinforcement learning (pain and sweet
signals in sensory frames). The motoric frames may contain
action-supervision signals and the sensory frames may contain
components for reinforcement signals (rewards or punishment
components like pain receptors and sweet receptors). The
contents in each sensorimotor frame signal what learning modes are
needed. For example, a supplied action in a motoric vector calls
for supervised learning, a supplied pain signal in a sensory
vector calls for reinforcement learning, and the presence of both
calls for a combination of supervised learning and reinforcement
learning. <br>
<br>
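</p>
<p>The learning-mode logic described above for Type 2 can be summarized
in a short Python sketch; the boolean flags are assumed to be extracted
from the frame beforehand, and the names are illustrative.</p>
<pre>
def learning_modes(action_supplied, pain_or_sweet_present):
    # The contents of each sensorimotor frame signal the learning modes:
    # a supplied action in the motoric vector calls for supervised
    # learning, a pain or sweet signal in the sensory vector calls for
    # reinforcement learning, and the presence of both calls for a
    # combination of the two.
    modes = []
    if action_supplied:
        modes.append("supervised")
    if pain_or_sweet_present:
        modes.append("reinforcement")
    return modes  # empty list: a free (test) frame, so predict, not learn
</pre>
<p>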
Type 3: Spatially non-attentive and temporal streams: each motoric
frame is a function of not only the last sensory frame but also an
unknown number of earlier sensory frames. <br>
Each motoric frame corresponds to the temporal state/action. Type
3 is a temporal generalization of Type 1. This is like
recognizing sentences from a TV screen where the TV screen
presents one letter at a time. Again, past experience is useful
for later learning (e.g., progressively learning individual
letters and punctuation marks, individual words, individual
phrases, individual sentences, etc., through a single long stream).<br>
<br>
Type 4: Spatially attentive and temporal streams: each motoric
frame is related to parts of recent sensory frames. Type 4 is the
temporal generalization of Type 2 and the spatial generalization
of Type 3. An example is recognizing and detecting the intent of
a car moving in a cluttered scene. Again, earlier experience is
useful for later learning (e.g., motion direction, motion
patterns, object type, object location, object orientation, etc.).<br>
<br>
Type 5: Generalization that requires a certain amount of autonomous
thinking: the actions in the motoric frame require the system to
invent rules and use such rules on the fly within the same (long)
training-and-testing stream. Type 5 is the thinking generalization
of Type 4. Classical conditioning, instrumental conditioning,
autonomous reasoning, and autonomous planning are examples. <br>
<br>
Practice streams for training-and-testing will be provided by the
Contest early on. For the Contest, each entry is required to run
through a Contest Interface, which records the performance in real
time. The frame rate is around 10 Hz in real time, but each entry
can run slower in virtual time. A GPU is recommended but not
required. Information about the computer architecture should
be provided. Spatial and temporal computational complexities are
considered in Criteria (3) and (4). <br>
<br>
Open-Source Machine Learning Engines available: <br>
(1) Google TensorFlow <br>
(2) MSU Developmental Network (DN) <br>
(3) Submission or recommendation of learning engines for the
contest: open until Monday March 14, 2016<br>
Each supplier or recommender of an engine is free to decide on the
courses and workshops below; such assistance is recommended but
not required. <br>
<br>
Entries for contest: <br>
Advance registration deadline: Monday April 11, 2016.<br>
Registration: $270 per entry. Scores are measured per entry.<br>
Each team can register multiple entries; a team can register
multiple human participants; each participant can register with
multiple teams. <br>
The first-named member of each entry is exempt from the $90
registration fee for three courses/tutorials. <br>
Full-time student players: tuition for courses/tutorials is
waived, other than the $90 registration fee. <br>
Every player who wishes to be officially recognized must register
via the Contest Registration Form. <br>
Course registration deadline: Monday April 25, 2016.<br>
<br>
Contest subject areas: Each entry chooses at least one of the
following four subjects: <br>
(1) vision, <br>
(2) audition (including speech and music recognition), <br>
(3) natural language understanding, <br>
(4) creative machine thinking for one or more of the above three
areas. <br>
Each entry can use one or more of the machine learning engines
provided by the Contest or elsewhere. <br>
Each entry can address one or more challenge areas. <br>
<br>
Each supplier of the Machine Learning Engines is encouraged to
provide courses or tutorials, via BMI or independently. <br>
BMI Courses or Tutorials for Machine Learning Contest engines:<br>
Application for BMI Admission: deadline: Monday April 11, 2016.<br>
Full-time students: tuition waived for courses<br>
Course registration deadline: Monday April 25, 2016.<br>
May 30 - June 17, 2016 (distance learning course for three weeks,
including <a
href="http://www.brain-mind-institute.org/bmi-831.html">BMI 831</a>)
<br>
June 20 - July 8, 2016 (distance learning course for three weeks,
including <a
href="http://www.brain-mind-institute.org/bmi-861.html">BMI 861</a>)
<br>
July 11 - July 29, 2016 (distance learning course for three weeks,
including <a
href="http://www.brain-mind-institute.org/bmi-871.html">BMI 871</a>)
<br>
Aug. 1 - Aug. 14, 2016 workshops (free for all registered
players). <br>
<br>
Contest entries due: noon, Monday, August 15, 2016. <br>
5-day evaluation. <br>
Contest award meeting and Contest presentations (ICDL 2016):
August 20-21, 2016. <br>
<br>
Award amount: $50,000, to be updated with sponsors. </p>
<p>Data: The training-and-testing streams will be provided. Many
machine learning techniques are designed for off-line batch
training and batch testing, and are task specific. They must be
modified to take the official training-and-testing streams for
online training and testing. Each stream consists of a single
sequence of many time frames; each time frame i contains a sensory
frame X[i] and a motoric frame Z[i]. Each motoric frame may
include both training data points and testing data points. If a
motoric frame is marked * (free), it is a testing frame, absent of
training data. Namely, each stream is a synchronized
sensorimotor sequence (X[i], Z[i]), i = 0, 1, 2, ..., n, where X[i]
and Z[i] are the sensory vector (e.g., image) and action vector
(state) at time i, both non-symbolic (numeric vectors) to promote
fully automatic machine learning. Z[i] includes binary components
that represent abstract concepts of a spatiotemporal event (e.g.,
the location concept, type concept, or state concept of a sentence).
X[i] may include specified components as punishments and rewards
for action Z[i-1] or an action a few frames earlier (not delayed
so much that it is confused with earlier actions). There are two
types of Z[i]: supervised and free; free Z[i]’s are motor
vectors for testing. Each Z[i] consists of a number of concept zones
[e.g., Z = (ZT, ZL, ZS), where ZT, ZL, ZS represent the type zone,
location zone, and scale zone, respectively, for the attended
object]. Within each zone, only one neuron fires (taking value 1)
and all other neurons do not fire (taking value 0). Within each
stream, skills learned at early i’s are useful for later learning
at later i’s. </p>
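<p>A minimal Python sketch of how a motoric vector with concept zones
could be decoded and encoded, assuming the illustrative zone layout
below; the actual zone names and sizes would come with the official
streams.</p>
<pre>
import numpy as np

# Illustrative zone layout: Z = (ZT, ZL, ZS) for type, location, scale.
ZONES = {"type": slice(0, 10), "location": slice(10, 30), "scale": slice(30, 35)}
DIM = 35  # total length of Z under this assumed layout

def decode_motor(z):
    # Within each zone exactly one neuron fires (value 1) and the rest
    # are 0, so each concept is recovered with argmax over its zone.
    return {name: int(np.argmax(z[sl])) for name, sl in ZONES.items()}

def encode_motor(concepts):
    # Inverse: build a one-hot-per-zone Z from concept indices per zone.
    z = np.zeros(DIM)
    for name, idx in concepts.items():
        z[ZONES[name].start + idx] = 1.0
    return z
</pre>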
<p>Contest: each entry runs contest software provided by BMI
for training-and-testing. The performance is recorded and
reported by the contest software automatically. </p>
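<p>The entry side of such an interface might take the following
hypothetical shape in Python; the real interface will be specified with
the Contest software.</p>
<pre>
class ContestEntry:
    # Hypothetical per-frame callback an entry might implement.
    def __init__(self, engine):
        self.engine = engine  # any learning engine with predict/learn

    def step(self, x, z, z_is_free):
        # Called once per frame by the contest software.
        prediction = self.engine.predict(x)  # free frames are scored on this
        if not z_is_free:
            self.engine.learn(x, z)          # update on training frames
        return prediction
</pre>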
<p align="center"><strong>BMI Summer School and ICBM 2006</strong> </p>
<p>2016 is the 5th year of the BMI summer school. It is also the
first time that the summer school is run jointly with the AIML
Contest, as part of the educational support for the Contest. </p>
</body>
</html>