[Bmi] The First Artificial Intelligence Machine Learning (AIML) Contest and other BMI 2016 programs
Juyang Weng
weng at cse.msu.edu
Thu Dec 31 18:49:30 EST 2015
Dear Colleagues:
Happy New Year!
BMI is pleased to announce the first Artificial Intelligence Machine
Learning (AIML) Contest in the BMI summer 2016 program. See below.
*The First Artificial Intelligence Machine Learning (AIML) Contest*
<http://www.brain-mind-institute.org/program-summer-2016.html>
*BMI Summer School and the International Conference on Brain-Mind 2016*
<http://www.brain-mind-institute.org/program-summer-2016.html>
*Important dates*:
Monday March 14, 2016: recommendation of learning engines
Monday April 11, 2016: deadline for advance registration of contest entries
Monday April 11, 2016: deadline for application of BMI course-program
admission
Monday April 25, 2016: deadline for late registration of contest entries
Monday April 25, 2016: deadline for BMI course registration
May 30 - June 17, 2016: first three-week distance learning course
June 20 - July 8, 2016: second three-week distance learning course
July 11 - July 29, 2016: third three-week distance learning course
Aug. 1 - Aug. 14, 2016: workshops (free for all registered players,
distance or on site)
Monday, August 15, 2016: performance runs of contest entries due by noon
August 20-21, 2016: ICDL 2016: Contest announcements, sponsor awards,
and contest presentations (on site and webcast)
*The 1st AIML Contest*
Terms such as artificial intelligence, machine learning, robotics,
signal processing, control, dynamic systems, data mining, big data, and
brain projects have often carried different emphases, but the related
disciplines are converging. The Artificial Intelligence Machine Learning
(AIML) Contest serves as a converging platform for all related
disciplines and beyond. It is open to all researchers, practitioners,
students, and investors, but not limited to them. The main goal of the Contest is
to promote understanding of natural and artificial intelligence, beyond
the currently popular pattern classification. The Contest aims to
address major learning mechanisms in natural and artificial
intelligence, including perception, cognition, behavior and motivation
that occur in cluttered real-world environments. Attention,
segmentation, emergence of spatiotemporal representations, and
incremental scaffolding are part of each life-long learning stream.
The major characteristics of this contest include:
(1) Inspiration from learning by natural brains, including grounding,
emergence, natural inputs, incremental learning, real-time online
operation, attention, motivation, and abstraction from raw sensorimotor data.
(2) General-purpose learning engines. Learning engines will be
available to participants, and the list is open to additional learning
engines. Providers of learning engines are free to offer assistance to
participants, such as courses, tutorials, and workshops.
(3) Training-and-testing sensorimotor streams will be provided to the
participants. Each frame of the stream contains a sensory vector and a
motoric vector. Training and testing are mixed in the streams, so that
learning systems can perform scaffolding: simpler skills learned earlier
are automatically selected and used to learn more complex skills later.
(4) Major AI challenges will be tested, including vision, audition,
language understanding, and autonomous thinking.
(5) The Contest is open to investors, charities, governments, and
industrial supporters who would like to contribute award funds and
provide assistance for their learning engines.
Rules: Each contest entry is uniquely identified by the name of the
entry system. A system is developed by a team consisting of one or
more members, and a person can participate in one or more teams.
Although the format of the supplied streams is meant for incremental
learning, in this first year of the contest we allow teams to use either
framewise incremental or block-incremental learning, but the block size
must be reported for the Contest. During block-incremental learning,
the system takes a block of b consecutive frames at a time, updates the
system, and then discards the block. Framewise incremental learning has
a block size of b = 1 frame. Each system may also run each training
stream a few times as practice (epochs); the number of epochs is also
reported for the Contest. Entries are submitted via the Internet, so no
travel is required. The Contest will provide a software interface for
training-and-testing. Organizers of the contest are ineligible to be
team members of any entry.
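To make the block-incremental protocol concrete, here is a minimal
Python sketch; the learner object and its update() method are
hypothetical placeholders, not the official Contest interface:

    # Minimal sketch of block-incremental learning; system.update() is a
    # hypothetical placeholder for the learner's update rule.
    def block_incremental_learning(system, stream, b=1):
        # b = 1 corresponds to framewise incremental learning; the block
        # size b must be reported for the Contest.
        block = []
        for frame in stream:              # frame = (sensory, motoric)
            block.append(frame)
            if len(block) == b:
                system.update(block)      # update on b frames at a time...
                block = []                # ...then discard the block
        if block:
            system.update(block)          # flush a trailing partial block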
The International Conference on Brain-Mind (ICBM) 2016 will feature
Contest score announcements, sponsor awards, and team presentations.
Criteria of performance, in the following priority of importance (1 is
the highest):
(1) the average error rate over all test points during epoch e, e = 1, 2, ...
(a per-epoch sketch follows this list);
(2) the block size is as small as possible while reaching a
state-of-the-art error rate;
(3) the number of practice epochs is as small as possible while reaching
a state-of-the-art error rate;
(4) the size of the network is as small as possible while reaching a
state-of-the-art error rate.
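As an illustration of criterion (1), the average error rate for one
epoch could be computed as below; comparing whole predicted and desired
motor vectors is an assumption here, not the official scoring rule:

    # Sketch of criterion (1): the average error rate over all test
    # points in one epoch; predictions/targets pair up test points only.
    def epoch_error_rate(predictions, targets):
        assert len(predictions) == len(targets) > 0
        errors = sum(1 for p, t in zip(predictions, targets) if p != t)
        return errors / len(targets)      # fraction of wrong test points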
Within each stream, the following five types of substreams (each
contains multiple tasks and subtasks, skills and subskills) will be
trained and tested on, but teams are not told which type a substream
is. It is a violation of the contest rules to manually browse through
the stream to find out what type a substream is. The Contest software
will record all the training and testing data.
Type 1: Spatially non-attentive and non-temporal streams: many
components of a sensory frame are related to the next motoric frame
(e.g., the object of interest almost fills the entire image and the next
motoric frame contains the object type). Non-temporal here means that a
single frame is sufficient to decide the next motor frame. This is
similar to monolithic pattern classification (e.g., image
classification). But past experience is useful for later learning within
the same training-and-testing stream.
Type 2: Spatially attentive and non-temporal streams: a relatively small
number of components of a sensory frame are related to the next motoric
frames (e.g., the car to be recognized and detected is in a large
cluttered street scene where the next motoric frames should contain the
location, type, and scale of the attended car). Type 2 is a spatial
generalization of Type 1. This is like object recognition and detection
from cluttered dynamic scenes conducted concurrently (where the next
motoric frames provide the desired actions). Sensory frames are not
segmented; internal, automatic segmentation needs to be learned.
Namely, the skill of finding which image patch is related to the action
in the motoric frame must be gradually acquired from earlier learning
and refined in later learning within the same stream. Early attention
skills can be learned from the motor vector (supervised learning) and/or
through reinforcement learning (pain and sweet signals in sensory
frames). The motoric frames may contain action-supervision signals, and
the sensory frames may contain components for reinforcement signals
(reward or punishment components, like pain receptors and sweet
receptors). The contents of each sensorimotor frame signal which
learning modes are needed. For example, a supplied action in a motoric
vector calls for supervised learning, a supplied pain signal in a
sensory vector calls for reinforcement learning, and the presence of
both calls for a combination of the two.
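As a hedged sketch of that dispatch (the field names "action", "pain",
and "sweet" are assumptions for illustration, not the official stream
format), in Python:

    # Hypothetical selection of learning modes from frame contents, per
    # the rule above; frames are assumed to be dicts with these keys.
    def learning_modes(motoric_frame, sensory_frame):
        modes = set()
        if motoric_frame.get("action") is not None:
            modes.add("supervised")      # a supplied action supervises
        if sensory_frame.get("pain") or sensory_frame.get("sweet"):
            modes.add("reinforcement")   # punishment/reward signals
        return modes                     # both present: combine the two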
Type 3: Spatially non-attentive and temporal streams: each motoric frame
is a function of not only the last sensory frame but also an unknown
number of earlier sensory frames.
Each motoric frame corresponds to the temporal state/action. Type 3 is
a temporal generalization of Type 1. This is like recognizing sentences
from a TV screen that presents one letter at a time.
Again, past experience is useful for later learning (e.g., progressively
learning individual letters and punctuation marks, individual words,
individual phrases, individual sentences, etc., through a single long
stream).
Type 4: Spatially attentive and temporal streams: each motoric frame is
related to parts of recent sensory frames. Type 4 is the temporal
generalization of Type 2 and the spatial generalization of Type 3. An
example is recognizing and detecting the intent of a car moving in a
cluttered scene. Again, earlier experience is useful for later
learning (e.g., motion direction, motion patterns, object type, object
location, object orientation, etc.).
Type 5: Generalization that requires a certain amount of autonomous
thinking: the actions in the motoric frame require the system to invent
rules and use such rules on the fly within the same (long)
training-and-testing stream. Type 5 is the thinking generalization of
Type 4. Classical conditioning, instrumental conditioning, autonomous
reasoning, and autonomous planning are examples.
Practice streams for training-and-testing will be provided by the
Contest early on. For the Contest, each entry is required to run
through a Contest Interface, which records the performance in real
time. The frame rate is around 10 Hz in real time, but each entry can
run slower in virtual time. A GPU is recommended but not required.
Information about the computer architecture should be provided.
Spatial and temporal computational complexities are considered in
Criteria (3) and (4).
Open-Source Machine Learning Engines available:
(1) Google TensorFlow
(2) MSU Developmental Network (DN)
(3) Submission or recommendation of learning engines for the contest:
open until Monday March 14, 2016
Each supplier or recommender of an engine is free to offer the courses
and workshops below; such assistance is recommended but not required.
Entries for contest:
Advance registration deadline: Monday April 11, 2016.
Registration: $270 per entry. Scores are measured per entry.
Each team can register multiple entries; a team can register multiple
human participants; each participant can register with multiple teams.
The first-named member of each entry is waived the $90 registration fee
for three courses/tutorials.
Full-time student players: tuition for courses/tutorials is waived; only
the $90 registration fee applies.
Every player who is to be officially recognized must register using the
Contest Registration Form.
Course registration deadline: Monday April 25, 2016.
Contest subject areas: Each entry chooses at least one of the following
four subjects:
(1) vision,
(2) audition (including speech and music recognition),
(3) natural language understanding,
(4) creative machine thinking for one or more of the above three areas.
Each entry can use one or more of the machine learning engines provided
by the Contest or elsewhere.
Each entry can address one or more challenge areas.
Each supplier of the Machine Learning Engines is encouraged to provide
courses or tutorials, via BMI or independently.
BMI Courses or Tutorials for Machine Learning Contest engines:
Application for BMI Admission: deadline: Monday April 11, 2016.
Full-time students: tuition for courses is waived
Course registration deadline: Monday April 25, 2016.
May 30 - June 17, 2016 (distance learning course for three weeks,
including BMI 831 <http://www.brain-mind-institute.org/bmi-831.html>)
June 20 - July 8, 2016 (distance learning course for three weeks,
including BMI 861 <http://www.brain-mind-institute.org/bmi-861.html>)
July 11 - July 29, 2016 (distance learning course for three weeks,
including BMI 871 <http://www.brain-mind-institute.org/bmi-871.html>)
Aug. 1 - Aug. 14, 2016 workshops (free for all registered players).
Contest entries due: noon, Monday, August 15, 2016.
5-day evaluation.
Contest award meeting and Contest presentations (ICDL 2016): August
20-21, 2016.
Award amount: $50,000, to be updated with sponsors.
Data: The training-and-testing streams will be provided. Many machine
learning techniques are off-line, batch-training, batch-testing, and
task-specific; they must be modified to take the official
training-and-testing streams for online training and testing. Each
stream consists of a single sequence of many time frames; each time
frame i contains a sensory frame X[i] and a motoric frame Z[i]. Each
motoric frame may include both training data points and testing data
points. If a motoric frame is marked * (free), it is a testing frame,
absent of training data. Namely, each stream is a synchronized
sensorimotor sequence (X[i], Z[i]), i = 0, 1, 2, ..., n, where X[i] and
Z[i] are the sensory vector (e.g., an image) and the action vector
(state) at time i, both non-symbolic (numeric vectors) to promote fully
automatic machine learning. Z[i] includes binary components that
represent abstract concepts of a spatiotemporal event (e.g., the
location concept, type concept, or state concept of a sentence). X[i]
may include designated components serving as punishments and rewards for
action Z[i-1] or an action a few frames earlier (not delayed so much
that it is confused with earlier actions). There are two types of
Z[i]'s, supervised and free; free Z[i]'s are the motor vectors for
testing. Each Z[i] consists of a number of concept zones [e.g.,
Z = (ZT, ZL, ZS), where ZT, ZL, and ZS represent the type zone, location
zone, and scale zone, respectively, for the attended object]. Within
each zone, exactly one neuron fires at value 1 and all other neurons
take value 0. Within each stream, skills learned at early i's are
useful for later learning at later i's.
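A minimal Python sketch of this motoric-frame format follows; the zone
sizes are made up for illustration, and only the
one-firing-neuron-per-zone convention comes from the description above:

    import numpy as np

    # Sketch of a motoric frame Z = (ZT, ZL, ZS) with one-hot concept
    # zones; the zone sizes below are illustrative assumptions.
    N_TYPES, N_LOCS, N_SCALES = 10, 25, 5

    def make_motoric_frame(obj_type, location, scale):
        # Exactly one neuron fires (value 1) within each concept zone.
        zt = np.zeros(N_TYPES);  zt[obj_type] = 1.0
        zl = np.zeros(N_LOCS);   zl[location] = 1.0
        zs = np.zeros(N_SCALES); zs[scale] = 1.0
        return np.concatenate([zt, zl, zs])

    def decode_zones(z):
        # Recover the firing neuron's index within each concept zone.
        zt, zl, zs = np.split(z, [N_TYPES, N_TYPES + N_LOCS])
        return int(zt.argmax()), int(zl.argmax()), int(zs.argmax())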
Contest: each entry runs contest software provided by BMI for
training-and-testing. The performance is recorded and reported by the
contest software automatically.
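A hedged sketch of such an automated training-and-testing run (the
System API and the free-frame marker are assumptions for illustration;
the official Contest Interface may differ):

    # Hypothetical run: supervised motoric frames train the system; free
    # (test) frames are predicted and scored.
    def run_stream(system, stream):
        n_tests, n_errors = 0, 0
        for x, z, is_free in stream:       # (X[i], Z[i], free marker)
            z_pred = system.step(x)        # act on the sensory input
            if is_free:                    # test frame: score prediction
                n_tests += 1
                n_errors += int(tuple(z_pred) != tuple(z))
            else:                          # training frame: supervise
                system.supervise(z)
        return n_errors / max(n_tests, 1)  # average error rate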
*BMI Summer School and ICBM 2016*
2016 is the 5th year of the BMI summer school. It is also the first
time that the summer school is jointly run with the AIML Contest, as
part of the educational support for the Contest.