Plenary Lecture
Intelligent Human Interaction based on Mental Cloning
Professor Hamido Fujita
Director of Intelligent Software Laboratory
Iwate Prefectural University (IPU)
Iwate, Japan
E-mail: HFujita-799@acm.org
Abstract:
This plenary lecture highlights the importance of collective human behavior in intelligent interaction between man and machine. We have investigated different disciplinary views (philosophical, physiological, cultural, physiognomic, and technical) that collectively reflect the behavioral reasoning of human emotional interaction with machines (i.e., computers). Such reasoning is essential for mutually effective engagement between human and machine, based on observing and examining the user from different views that reflect the user's emotional behavior. The system will be sensitive to emotion-related attributes; through integrated conceptual views representing these attributes, we can grasp the emotional transition states of the user's engagement with the system.
Emotion recognition is one of the most important components of emotional intelligence, and it has a direct effect on our ability to make optimal decisions (along with the ability to utilize emotions in making decisions). Any attempt by computer scientists to model human interaction should therefore, at least in part, be founded on an accurate identification of affective states. We suggest that by ignoring the emotional component intrinsic to human decision-making, researchers have been missing valuable information, which can lead to inadequate interactive models.
These concepts are the basis of what we call mental cloning, a concept I introduced through an earlier project. That project established a virtual world re-creating the world of Miyazawa Kenji (the famous Japanese writer, who died in 1933), based on a cognitive model of his personality and inner thinking. The Kenji system is currently being modified and adapted into a health care system, which this plenary lecture introduces.
The objective is to have users (i.e., patients) attending a hospital (or connecting from home via a computer link) complete all first-level diagnostic transactions before the actual health examination. At this level, based on the mental cloning of the medical doctors in that hospital, and on their previous case studies and experience in examining patients, the system practices this diagnosis on the patient as the actual doctor would. All of the doctors' case studies have been collected and categorized in the system according to level and type. The solutions, or scenarios induced by the virtual doctors for the patients, have been abstracted so as to distinguish their central (primary) part from the surrounding (secondary) parts. The system first finds the solution of the central part, and then refines that solution by considering the related secondary parts. Medical doctors' knowledge has been classified into categories. The system is divided into four related parts.
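The two-step refinement described above (solve the central part first, then refine with secondary parts) can be sketched as follows. This is a minimal illustration only: the `Scenario` type, the matching score, and the refinement rule are assumptions for exposition, not the lecture's actual algorithm.

```python
# Illustrative sketch: resolve the central (primary) diagnostic part first,
# then refine the solution using the surrounding (secondary) parts.
# All names and the scoring rule are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Scenario:
    central: str                                  # primary diagnostic finding
    secondary: list = field(default_factory=list) # related secondary findings


def diagnose(symptoms: set, case_base: list) -> Scenario:
    """Step 1: pick the scenario whose central part matches the symptoms.
    Step 2: refine it by keeping only secondary parts consistent with them."""
    best = max(case_base, key=lambda s: int(s.central in symptoms))
    refined = [p for p in best.secondary if p in symptoms]
    return Scenario(central=best.central, secondary=refined)
```

A case base of abstracted doctor scenarios would be queried with the patient's reported symptoms; the returned scenario carries the matched central part plus only the secondary parts that survived refinement.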
The first part creates a hologram (or a virtual 3D face on a display) that reproduces the emotional character of a defined human personality; in this experiment we use an actually employed medical doctor. The system generates an animated face that talks and acts emotionally like the medical doctors themselves, who are currently working in the hospital. These animated characters form the main interface the patient looks through, and through it the virtual medical doctor establishes the engagement needed to extract the current status of the subject patient. This part works together with part 4 of the system; together, parts 1 and 4 represent the mental cloning of the medical doctor.
The second part captures the user's (patient's) emotional engagement states by observing the user's mental transition states (i.e., a trace), which are recorded and analyzed by an Active Appearance Model system through a camera. A high-resolution camera collects images of the user (along with voice, as in part 3). These video frame streams are analyzed through the Active Appearance Model, so that the system tracks the user's mental engagement with the doctor and, accordingly, can estimate the user's appearance state.
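Frame-by-frame emotional-state tracing of this kind can be sketched as below. The Active Appearance Model fitting itself is not described in the lecture, so `fit_aam` is passed in as a hypothetical stand-in, and the emotion labels and thresholds are purely illustrative.

```python
# Illustrative sketch of tracing a user's emotional transition states
# from a stream of video frames. fit_aam() stands in for the real
# Active Appearance Model fitter; labels/thresholds are assumptions.
from typing import Callable


def classify(params: list) -> str:
    """Map AAM shape/appearance parameters to a coarse emotional state."""
    energy = sum(abs(p) for p in params)
    if energy < 1.0:
        return "neutral"
    return "engaged" if params[0] >= 0 else "distressed"


def trace_states(frames: list,
                 fit_aam: Callable[[list], list]) -> list:
    """Record the emotional transition trace over the frame stream."""
    return [classify(fit_aam(frame)) for frame in frames]
```

The resulting trace (e.g., neutral, then engaged, then distressed) is exactly the kind of transition record the system would feed into scenario selection.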
The third part is related to voice reasoning: producing a voice, with emotion, that reflects the context at hand, and recognizing emotion in the voice heard from the observed user. This part also produces the output voice of the virtual medical doctor, speaking the scenarios generated in part 4 with emotion.
Through parts 2 and 3, information is collected by the system (the virtual doctor) to create a cognitive model of the subject patient, and thereby the problem space the system navigates to find the best match, and accordingly the best scenario to use for diagnosis. When the user talks, the facial emotional states, the vocal emotional states, and the words are recognized by the system as information through which it finds the best scenario and the corresponding cognitive model to use for interacting with the user.
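Selecting the best-matching scenario from the observed face state, voice state, and words can be sketched as a simple scoring search over the problem space. The field names and the additive score below are assumptions for illustration, not the lecture's published method.

```python
# Illustrative sketch of scenario selection: score each candidate
# against the patient's observed emotional states and words, and
# return the best match. The scoring scheme is hypothetical.
from dataclasses import dataclass


@dataclass
class Observation:
    face_state: str   # from part 2 (Active Appearance Model)
    voice_state: str  # from part 3 (voice reasoning)
    words: set        # recognized speech


@dataclass
class DiagnosisScenario:
    face_state: str
    voice_state: str
    keywords: set
    script: str


def best_scenario(obs: Observation, scenarios: list) -> DiagnosisScenario:
    """Navigate the problem space for the highest-scoring scenario."""
    def score(s: DiagnosisScenario) -> int:
        return (int(s.face_state == obs.face_state)
                + int(s.voice_state == obs.voice_state)
                + len(s.keywords & obs.words))
    return max(scenarios, key=score)
```

In a full system the score would come from the learned cognitive model rather than exact label matches, but the search structure (observe, score candidates, pick the best scenario) is the same.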
The fourth part synthesizes the scenario that keeps the user and the system actively engaged. This is based on creating a cognitive interaction between the human subject and the system, grounded in transition analysis.
This lecture aims to convey to the audience the need for this kind of metaphoric thinking: making the user's emotional status one of the design views to be integrated with the other parts of the system.
Brief Biography of the Speaker:
Dr. Hamido Fujita is a professor at Iwate Prefectural University (IPU), Iwate, Japan, and the director of the Intelligent Software Laboratory. He received his Ph.D. from Tohoku University, Sendai, Japan, in 1988.
He worked at Tohoku University as a visiting professor in the late eighties, then joined the University of Tokyo, RCAST, as an associate professor from 1990 to 1993, and then moved to Canada as a visiting professor at the University of Montreal, IRO, until 1997.
He then moved back to Japan in 1997 as a member of the committee that established Iwate Prefectural University, and subsequently joined IPU's Faculty of Software and Information Science as a professor and head of the Information System Division. At IPU he directs two laboratories, the Intelligent Software Laboratory and the Cognitive Systems Laboratory. He was also a member of the committee that established IPU's Graduate School of Software Science.
He has directed and led many projects sponsored by the Ministry of Science, Education and Culture of Japan, as well as projects on new software methodologies sponsored internationally and by Japanese companies. He is also the founder of the SOMET organization.
He has published many books and journal papers, and has spoken at many conferences worldwide. He has also given invited talks at many universities in the EU and North America. He has jointly supervised Ph.D. students with Laval University and the University of Technology, Sydney (UTS). He is also a professor at Laval University, Quebec, Canada, supervising graduate students. He was a visiting professor at CRI, University of Paris 1, Sorbonne, in 2003-2004, working with Prof. Colette Rolland. He served as an opponent for Stockholm University, Sweden, and co-supervised students with Prof. Love Ekenberg. He has also worked with the CCS group at UTS, led by Prof. Ernest Edmonds, and co-supervised Ph.D. students there. He has published books with IOS Press and has guest-edited several special issues of the international journal Knowledge-Based Systems (Elsevier), where he has held an editorial role since 2008. He has also guest-edited for Transactions on Internet Research.
He is currently heading a cognitive Miyazawa Kenji project in intelligent HCI; a project on Mental Cloning as an intelligent user interface between human users and computers, supported by MEXT (Ministry of Education, Culture, Sports, Science and Technology); and a SCOPE project on Virtual Doctor Systems, supported by the Ministry of Internal Affairs and Communications of Japan.