Lectures by Demetri Terzopoulos during his stay at CEREMADE as an
invited professor, on September 12, 14, 19, 25, and 28, 2006.
Bio: Demetri Terzopoulos is the Chancellor's Professor of Computer
Science at UCLA. He graduated from McGill University and was awarded
the PhD degree by MIT in 1984. He is a Fellow of the IEEE, a Fellow of
the Royal Society of Canada, and a member of the European Academy of
Sciences. His many awards include an Academy Award for Technical
Achievement (a Technical Oscar) from the Academy of Motion Picture
Arts and Sciences for his pioneering work on physics-based computer
animation. He is one of the most highly cited computer scientists and
engineers in the world, with approximately 300 published research
papers and several volumes, primarily in computer graphics, computer
vision, medical imaging, computer-aided design, and artificial
intelligence/life.
Professor Terzopoulos is one of the best-known scientists in computer vision
and computer graphics, and he ranks among the 100 most-cited scientists across
all domains of science and engineering.
Tuesday, September 12, 2006, 2:30 PM, Room A 711
A Tensor Algebraic Framework for Image Synthesis, Analysis and Recognition
Demetri Terzopoulos
University of California, Los Angeles
We introduce a multilinear (tensor) algebraic framework for image
synthesis, analysis, and recognition. Natural images result from the
multifactor interaction between the imaging process, the illumination,
and the scene geometry. Numerical multilinear algebra provides a
principled approach to disentangling and explicitly representing the
essential factors or modes of image ensembles. Our multilinear image
modeling technique employs a tensor extension of the conventional
matrix singular value decomposition (SVD), known as the N-mode SVD.
This leads us to a multilinear generalization of principal components
analysis (PCA) and a novel multilinear generalization of independent
components analysis (ICA). As example applications, we tackle
currently important problems in computer graphics, computer vision,
and pattern recognition: image-based rendering, specifically the
multilinear synthesis of images of textured surfaces under varying
viewpoint and illumination, and the multilinear analysis and
recognition of facial images under variable face shape, view, and
illumination conditions.
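To make the tensor machinery above concrete, here is a minimal NumPy sketch
of an N-mode SVD: each mode of an image-ensemble tensor is matricized, an
ordinary SVD extracts that mode's orthonormal basis, and projecting onto all
the bases yields a core tensor from which the ensemble can be reconstructed.
The ensemble dimensions (people x views x illuminations x pixels) and all
values are illustrative assumptions, not data or code from the talk.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: the mode-n fibers of T become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold, for a matrix whose row count replaces shape[mode]."""
    full = [M.shape[0]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_product(T, U, mode):
    """n-mode product T x_n U: multiply every mode-n fiber of T by U."""
    return fold(U @ unfold(T, mode), mode, T.shape)

def n_mode_svd(T):
    """N-mode SVD: orthonormal mode matrices U_n and a core tensor Z
    such that T = Z x_1 U_1 x_2 U_2 ... x_N U_N."""
    mode_matrices = []
    for n in range(T.ndim):
        U, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        mode_matrices.append(U)
    Z = T
    for n, U in enumerate(mode_matrices):
        Z = mode_product(Z, U.T, n)           # project mode n onto its basis
    return mode_matrices, Z

# Illustrative image ensemble: 10 people x 5 views x 4 illuminations x 256 pixels.
D = np.random.rand(10, 5, 4, 256)
Us, Z = n_mode_svd(D)

# Reconstruct the ensemble from the core tensor and the mode matrices.
R = Z
for n, U in enumerate(Us):
    R = mode_product(R, U, n)
print(np.allclose(D, R))                      # True up to numerical precision
```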
Thursday, September 14, 2006, 2:30 PM, Room A 711
Biomechanical Modeling and Neuromuscular Control of the Face-Head-Neck
System
Demetri Terzopoulos
University of California, Los Angeles
Facial animation has a lengthy history in computer graphics. To date,
most efforts have concentrated on labor-intensive keyframe or
motion-capture animation schemes. As an alternative, we advocate the
highly automated animation of faces using physics-based and behavioral
animation methods. To this end, we develop a biomechanical model of
the face, which includes synthetic facial soft tissues with embedded
muscle actuators. Despite its sophistication, our facial model can
nonetheless be simulated in real time on a high-end PC. The model
incorporates a motor control layer that automatically coordinates eye
and head movements, as well as muscle contractions to produce natural
expressions. We augment the synthetic face with a perception model
that affords it a visual awareness of its environment, and we provide
a sensorimotor response mechanism that links percepts to meaningful
actions. Unlike the face, the neck has been largely overlooked in the
computer graphics literature, despite its complex anatomical structure
and the important role it plays in balancing the head while generating
the controlled head movements that are essential to so many aspects of
human behavior. We
introduce a biomechanical model of the human head-neck system.
Emulating the relevant anatomy, our model is characterized by
appropriate kinematic redundancy (7 cervical vertebrae coupled by
3-DOF joints) and muscle actuator redundancy (72 neck muscles arranged
in 3 muscle layers). This anatomically consistent biomechanical model
confronts us with a challenging motor control problem, even for the
relatively simple task of balancing the mass of the head in gravity
atop the cervical spine. We develop a neuromuscular control model for
human head animation that emulates the relevant biological motor
control mechanisms. Employing machine learning techniques, the
controller's neural networks are trained offline to efficiently
generate online control signals for the autonomous behavioral
animation of the human head and face.
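As a toy illustration of the physics-based ingredient, and not the talk's
facial or neck model, the sketch below simulates a strand of lumped masses
joined by damped springs, one of which acts as a muscle actuator by
shortening its rest length under an activation signal. All masses,
stiffnesses, and the activation schedule are invented for illustration; the
actual model uses a multilayer tissue lattice with anatomically placed
muscles.

```python
import numpy as np

n = 5                                    # lumped mass points along a tissue strand
x = np.linspace(0.0, 1.0, n)             # positions (1D strand for simplicity)
v = np.zeros(n)                          # velocities
m = 0.01                                 # point mass (assumed)
k, c = 50.0, 0.5                         # spring stiffness and damping (assumed)
rest = np.full(n - 1, 1.0 / (n - 1))     # rest lengths of the n-1 springs
muscle = 0                               # treat the first spring as the muscle
dt = 1e-3

def step(activation):
    """Advance one time step with the muscle contracted by activation in [0, 1]."""
    global x, v
    r = rest.copy()
    r[muscle] *= 1.0 - 0.5 * activation           # contraction shortens the rest length
    f = np.zeros(n)
    for i in range(n - 1):
        stretch = (x[i + 1] - x[i]) - r[i]
        fs = k * stretch + c * (v[i + 1] - v[i])  # spring + damper force
        f[i] += fs                                # pulls point i toward point i+1
        f[i + 1] -= fs
    f[0] = 0.0                                    # point 0 is pinned (attached to bone)
    v += dt * f / m                               # semi-implicit Euler integration
    v[0] = 0.0
    x += dt * v

for t in range(2000):
    step(activation=min(1.0, t * dt))             # ramp the activation from 0 to 1
print(x)                                          # free points pulled toward the pinned end
```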
Tuesday, September 19, 2006, 2:00 PM, as part of the conference MIA06
(Mathematics and Image Analysis 2006)
http://www.ceremade.dauphine.fr/~cohen/mia2006/
Deformable and Functional Models in Medical Image Analysis
The modeling of biological structures and the model-based
interpretation of medical images present many challenging problems. I
will present a
powerful paradigm known as deformable models, which combines
geometry, computational physics, and estimation theory. Deformable
models evolve in response to simulated forces as dictated by the
continuum mechanical principles of flexible materials, expressed
mathematically via variational principles and PDEs. The talk will
focus on several biomedical applications, including image
segmentation using dynamic finite element and topologically adaptive
deformable models, as well as recent work on "deformable
organisms" which aims more fully to automate the segmentation
process by augmenting deformable models with behavioral and cognitive
control mechanisms. I will also discuss the recent trend towards
functional modeling, such as craniofacial models that include the
biomechanical modeling of facial tissues and muscles of facial
expression.
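As a concrete instance of the deformable-model idea, far simpler than the
dynamic finite-element and topologically adaptive models of the talk, the
sketch below evolves a classical closed snake under elasticity and rigidity
terms with the standard semi-implicit update x <- (A + gamma*I)^(-1)
(gamma*x + F_ext(x)). The external force here is a toy potential pulling the
contour toward a circle, standing in for an image-derived force, and all
weights are assumed values.

```python
import numpy as np

def snake_matrix(n, alpha, beta, gamma):
    """(A + gamma*I)^-1 for a closed snake with n points; A encodes the
    elasticity (alpha) and rigidity (beta) terms of the internal energy."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2 * alpha + 6 * beta
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = -alpha - 4 * beta
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = beta
    return np.linalg.inv(A + gamma * np.eye(n))

def evolve(contour, external_force, alpha=0.1, beta=0.05, gamma=1.0, steps=200):
    """Semi-implicit snake iteration: x <- (A + gamma*I)^-1 (gamma*x + F_ext(x))."""
    M = snake_matrix(len(contour), alpha, beta, gamma)
    x = contour.copy()
    for _ in range(steps):
        x = M @ (gamma * x + external_force(x))   # applied to both coordinates at once
    return x

def toward_circle(x, radius=0.5, weight=0.2):
    """Toy external force standing in for an image-derived potential:
    pull every contour point radially toward a circle of the given radius."""
    r = np.linalg.norm(x, axis=1, keepdims=True)
    return weight * (radius - r) * x / np.maximum(r, 1e-9)

theta = np.linspace(0.0, 2.0 * np.pi, 80, endpoint=False)
init = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # start from the unit circle
final = evolve(init, toward_circle)
print(np.linalg.norm(final, axis=1).mean())               # settles near 0.5
```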
Monday, September 25, 2006, 2:00 PM, Room A 709
Artificial Animals and Humans: From Physics to Intelligence
Demetri Terzopoulos
University of California, Los Angeles
The confluence of virtual reality and artificial life, an emerging
discipline that spans the computational and biological sciences, has
yielded synthetic worlds inhabited by realistic, artificial flora and
fauna. Artificial animals are complex synthetic organisms that possess
functional biomechanical bodies, perceptual sensors, and brains with
locomotion, perception, behavior, learning, and cognition centers.
Artificial humans and lower animals are of interest in computer
graphics because they are self-animating graphical characters that can
dramatically advance the state of the art of production animation and
interactive game technologies. More broadly, these biomimetic
autonomous agents in realistic virtual worlds also foster deeper
computationally oriented insights into natural living systems. In
addition, they engender interesting applications in computer vision,
sensor networks, and other domains.
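The layered organization described above can be sketched schematically. The
classes, method names, and the trivial one-dimensional "world" below are
assumptions for illustration only, not the actual system's architecture or
API, but they show the sense-think-act loop that ties the motor, perceptual,
behavioral, and cognitive layers together each simulation tick.

```python
class MotorSystem:
    """Lowest layer: stands in for the musculoskeletal dynamics by integrating
    a commanded velocity (a real model would drive muscle actuators instead)."""
    def actuate(self, body, action, dt):
        body["x"] += action["velocity"] * dt

class PerceptionSystem:
    """Extracts percepts from the world; here, just the signed offset to a target."""
    def sense(self, world, body):
        return {"target_offset": world["target_x"] - body["x"]}

class BehaviorSystem:
    """Reactive layer: maps percepts and current goals to a short-term action."""
    def select_action(self, percepts, goals):
        if "reach_target" in goals:
            speed = 1.5 if percepts["target_offset"] > 0 else -1.5
            return {"velocity": speed}
        return {"velocity": 0.0}

class CognitionSystem:
    """Deliberative layer: maintains goals over longer time scales."""
    def update_goals(self, percepts):
        return ["reach_target"] if abs(percepts["target_offset"]) > 0.5 else []

class ArtificialAnimal:
    """Ties the layers into a sense-think-act loop run once per simulation tick."""
    def __init__(self):
        self.body = {"x": 0.0}
        self.motor, self.perception = MotorSystem(), PerceptionSystem()
        self.behavior, self.cognition = BehaviorSystem(), CognitionSystem()

    def tick(self, world, dt):
        percepts = self.perception.sense(world, self.body)
        goals = self.cognition.update_goals(percepts)
        action = self.behavior.select_action(percepts, goals)
        self.motor.actuate(self.body, action, dt)

animal, world = ArtificialAnimal(), {"target_x": 10.0}
for _ in range(100):
    animal.tick(world, dt=0.1)
print(animal.body["x"])   # the agent has walked to within 0.5 of the target
```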
Thursday, September 28, 2006, 2:00 PM, Room A 709
Virtual Vision: Human Simulation and Visual Sensor Networks
Demetri Terzopoulos
University of California, Los Angeles
Virtual vision is a fledgling paradigm for computer vision research
that exploits computer graphics and realistic virtual worlds. This
lecture will be in two parts. First, we address the challenging
problem of emulating the rich complexity of real pedestrians in urban
environments. Our artificial life approach integrates motor,
perceptual, behavioral, and cognitive components within a
comprehensive model of pedestrians as individuals, yielding
unprecedented fidelity and complexity for fully autonomous multi-human
simulation in a large urban environment. We represent the environment
using hierarchical data structures, which efficiently support the
perceptual queries that influence the behavioral responses of the
autonomous pedestrians and sustain their ability to plan their actions
on local and global scales. Second, we explore the use of this
visually and behaviorally realistic simulator in the development and
testing of visual surveillance systems. Our research would be largely
infeasible in the real world, given the impediments to deploying and
experimenting with an appropriately complex camera sensor network in a
large public space the size of, say, an airport or train station.
In particular, we develop and experiment with surveillance systems in
a virtual train station environment populated by autonomous, lifelike
virtual pedestrians, wherein easily reconfigurable virtual cameras
generate synthetic video feeds that emulate those generated by real
surveillance cameras monitoring richly populated public spaces.
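One ingredient that can be illustrated simply is the spatial indexing behind
the pedestrians' perceptual queries. The sketch below, with assumed names and
parameters standing in for the hierarchical environment model mentioned
above, hashes agents into a uniform grid so that a "who is within r meters of
me?" query inspects only nearby cells rather than every pedestrian in the
station.

```python
import math
import random
from collections import defaultdict

class SpatialGrid:
    """Uniform grid (one level of a hierarchy) hashing agents by cell so that
    proximity queries only inspect cells overlapping the query radius."""
    def __init__(self, cell_size=2.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)        # (ix, iy) -> [(agent_id, x, y), ...]

    def _cell(self, x, y):
        return (int(math.floor(x / self.cell_size)),
                int(math.floor(y / self.cell_size)))

    def insert(self, agent_id, x, y):
        self.cells[self._cell(x, y)].append((agent_id, x, y))

    def neighbors(self, x, y, radius):
        """Return ids of agents within radius of (x, y)."""
        cx, cy = self._cell(x, y)
        reach = int(math.ceil(radius / self.cell_size))
        found = []
        for ix in range(cx - reach, cx + reach + 1):
            for iy in range(cy - reach, cy + reach + 1):
                for agent_id, ax, ay in self.cells.get((ix, iy), []):
                    if (ax - x) ** 2 + (ay - y) ** 2 <= radius ** 2:
                        found.append(agent_id)
        return found

# Populate a toy 100 m x 50 m concourse with pedestrians and query one spot.
random.seed(0)
grid = SpatialGrid(cell_size=2.0)
for pid in range(1000):
    grid.insert(pid, random.uniform(0.0, 100.0), random.uniform(0.0, 50.0))
print(grid.neighbors(50.0, 25.0, radius=3.0))   # pedestrians within 3 m of the point
```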