We are researching novel technologies based on artificial intelligence (AI), such as machine learning and neural networks, for smartly generating, editing, and analyzing visual graphical content such as animations and illustrations. With our technologies, a richer virtual world can emerge, bridged with the real world through smart devices for illumination and fabrication. Various projects are ongoing, categorized below:


Research

Members

Faculty

Graduate (Master's course, 2nd year)

  • Haruka Takahashi
  • Hiroki Takahashi
  • Ho Quan Xiu
  • Kurei Fujiwara
  • Shunsuke Wada

Graduate (Master's course, 1st year)

  • Chanvongnaraz Khampasith
  • Keiichi Ito
  • Naoki Takahashi
  • Toshiki Tamura
  • Tomoyasu Futaba
  • Yamato Koke

Undergraduate

  • Kashiko Aoyama
  • Shintaro Koketsu
  • Tetsuya Shuyama
  • Tsubasa Wakaiki
  • Takehiro Izumi (B3)
  • Wataru Kondo (B3, Nara-Kosen)
  • Eya Khedher (Research student)

Secretary

  • Hiroko Yokoyama

OB/OG

Former Faculty

  • Yuki Endo (University of Tsukuba)
  • Takao Jinno (Osaka Institute of Technology)
  • Tomohiko Mukai (Tokyo Metropolitan University)
  • Toyohisa Kaneko (Honorary Professor)
  • Kimiya Aoki (Chukyo University)
  • Koichi Hirota (The University of Electro-Communications)

Alumni in Academic Positions

  • Takeshi Saitoh (Kyushu Institute of Technology)
  • Tomohiko Mukai (Tokyo Metropolitan University)
  • Yuya Iwakiri (National Institute of Technology, Hiroshima College)
  • Yohei Iwasaki (National Institute of Technology, Kochi College)

Visiting Professors / Research Students

  • Yohei Iwasaki (NIT, Kochi College)
  • Keiichi Sato (NIT, Hakodate College)
  • Janaka Rajapakse (Tainan National University of the Arts)
  • Yi Li (LAAS-CNRS)
  • Xiuzhuo Wang
  • Heekyung Kim
  • Jae-sung Hong

Current Projects

Machine learning for human motions and animations

Style retargeting of gestural motions

This technology enhances the expressiveness of gestures performed by virtual characters (avatars) by synthesizing or retargeting stylized motions in real time.

Visualization and adaptation for motion training (Collaboration with Prof. Mukai at TMU and Prof. Oshita at KIT)

We are developing systems for visualizing the features of sports and other skillful motions and for adaptively generating complicated motions, in collaboration with Tokyo Metropolitan University and Kyushu Institute of Technology.

Development and evaluation of a self-training system for tennis shots with motion feature assessment and visualization (The Visual Computer)
Motion Adaptation with Cascaded Inequality Tasks (SIGGRAPH Conference on Motion, Interaction and Games 2019)

Image Analysis for Color-Illuminated Scenes

Estimation of lighting colors

An image analysis technique was developed for estimating the colors of multiple light sources illuminating a scene from a single image, by introducing color-line theory.

Estimation of Multiple Illuminant Colors Using Color Lines of Single Image
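
As a rough illustration of the color-line idea (not the published algorithm), the Python sketch below clusters pixels, fits a line to each cluster in RGB space with PCA, and reads the dominant direction as a coarse illuminant-color estimate; the clustering scheme and normalization here are assumptions made for brevity.

# Minimal sketch of color-line-based illuminant estimation (illustrative only).
# Assumes pixels of one surface under one light roughly form a line in RGB space
# whose direction correlates with the illuminant color.
import numpy as np

def estimate_illuminant_colors(image, n_clusters=4, seed=0):
    """image: (H, W, 3) float array in [0, 1]; returns one normalized RGB estimate per cluster."""
    pixels = image.reshape(-1, 3)
    rng = np.random.default_rng(seed)
    # Group pixels by chromaticity so each cluster roughly corresponds to one surface.
    chroma = pixels / (pixels.sum(axis=1, keepdims=True) + 1e-6)
    centers = chroma[rng.choice(len(chroma), n_clusters, replace=False)]
    for _ in range(20):
        labels = np.argmin(((chroma[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([chroma[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(n_clusters)])
    estimates = []
    for k in range(n_clusters):
        cluster = pixels[labels == k]
        if len(cluster) < 10:
            continue
        # The dominant PCA direction approximates the cluster's color line.
        _, _, vt = np.linalg.svd(cluster - cluster.mean(axis=0), full_matrices=False)
        direction = np.abs(vt[0])
        estimates.append(direction / (direction.sum() + 1e-6))
    return estimates

# Example with a random stand-in image:
# print(estimate_illuminant_colors(np.random.rand(64, 64, 3)))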

From image to optical signals

A smart image conversion technique is developed using color mapping based on a color enhancement method and perceptual experiments.

Perceptual Color Enhancement for LED Illuminations

Illustration for Fabrication and Artwork

Image conversion for machine embroidery

A smart image conversion technique for machine embroidery of Kanji characters is being developed using state-of-the-art deep neural network (DNN) technologies, aiming to imitate the aesthetic expressions created by skilled craftsmen.


Past Projects

Informatics for Illustrative Images

Style-Based Retrieval of Illustrations

Retrieval, classification, and ranking methodologies are developed based on the drawing styles of illustrative images. Intuitive and automatic style conversion and transformation are also investigated for both raster and vector images.

Keynote speech: Style-Based Content Exploration for Aesthetic Media Informatics (VRCAI 2015)
An Unsupervised Approach for Comparing Styles of Illustrations, CBMI2015 [Best Paper Award]
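
As a toy illustration of style-based ranking (the published methods use richer style descriptors), the sketch below describes each illustration with a coarse color histogram and ranks a gallery by cosine similarity to a query; the feature choice is an assumption made here for brevity.

# Minimal sketch of style-based ranking of illustrations (illustrative only).
import numpy as np

def style_feature(image, bins=4):
    """image: (H, W, 3) float array in [0, 1]. Returns a flattened color histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

def rank_by_style(query, gallery):
    """Return gallery indices sorted by style similarity to the query image."""
    q = style_feature(query)
    feats = np.stack([style_feature(g) for g in gallery])
    sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)

# Example with random stand-ins for illustrations:
# gallery = [np.random.rand(32, 32, 3) for _ in range(5)]
# print(rank_by_style(gallery[0], gallery))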

Perceptual metric for down-scaled illustrations

A new perceptual metric was developed for estimating the image quality of down-scaled illustrative images, which is useful for obtaining icons or pixel art by down-sampling illustrations.


Smart Lighting Controls

Lighting controls for impressive portrait

Image-based monitoring and control with a mobile lighting robot is developed by analyzing lighting conditions from a portrait image.

Shade Analysis on Facial Images for Robotic Lighting

Restoration of lighting colors

An image synthesis technique was developed for authentically restoring the color of indoor scenes illuminated with color lighting systems. HDR images are smartly converted on the basis of a color appearance model of the human visual system.

Restoration of color appearance by combining local adaptations for HDR images, AIC2015
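
A minimal sketch of one ingredient of such processing, a local von Kries-style chromatic adaptation, is shown below; the published work applies a full color appearance model to HDR input, so the filter size and target white here are illustrative assumptions.

# Minimal sketch of local von Kries-style chromatic adaptation (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def restore_local_appearance(image, sigma=25.0, target_white=(1.0, 1.0, 1.0)):
    """image: (H, W, 3) float array. Divides out a locally estimated white point
    per channel and rescales toward the target white (von Kries adaptation)."""
    adapted = np.empty_like(image)
    for c in range(3):
        local_white = gaussian_filter(image[..., c], sigma) + 1e-6
        adapted[..., c] = image[..., c] / local_white * target_white[c]
    return np.clip(adapted, 0.0, 1.0)

# Example:
# out = restore_local_appearance(np.random.rand(64, 64, 3))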

Image-based multiple lighting controls with a Web interface

Smart lighting methodologies were developed with sensor-attached, digitally controllable color LED units. Smart, energy-saving dimming control is implemented by introducing intrinsic image analysis and a Web-based control interface.


Visual Code & Optical Image Communications

Ubiquitous communications via optical spatio-temporal data processing

Pattern recognition techniques for optical variations in space and time are explored using the image sensors of commercial smartphones and handy mobile devices such as tablets and smartwatches. Data communication systems with color LEDs are developed as an application of this optical pattern recognition.
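
As a toy example of such optical pattern recognition (not the actual communication protocol), the sketch below recovers one bit per frame from the brightness of an LED region in a video clip; the region, thresholding, and framing are assumptions.

# Toy sketch of decoding a temporal optical signal from video frames (illustrative
# only; real visible-light communication needs synchronization and error correction).
import numpy as np

def decode_led_bits(frames, region, threshold=None):
    """frames: (T, H, W, 3) float array of video frames.
    region: (y0, y1, x0, x1) crop containing the LED.
    Returns a 0/1 bit per frame based on mean brightness in the region."""
    y0, y1, x0, x1 = region
    brightness = frames[:, y0:y1, x0:x1, :].mean(axis=(1, 2, 3))
    if threshold is None:
        threshold = 0.5 * (brightness.min() + brightness.max())
    return (brightness > threshold).astype(int)

# Example: a blinking synthetic "LED" of 10 frames.
# frames = np.zeros((10, 8, 8, 3)); frames[::2] = 1.0
# print(decode_led_bits(frames, (0, 8, 0, 8)))   # -> alternating 1s and 0s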

Data-Embeddable Texture Synthesis

SmartGraphics 2007

A method of synthesizing texture images was developed for embedding arbitrary data. It introduces smart techniques for generating repetitive texture patterns through feature learning of a sample image. A synthesized image effectively conceals the embedded pattern, and the pattern can be robustly detected from an image photographed with mobile devices such as smartphones and tablets.
Texture Synthesis for Mobile Data Communications, IEEE Computer Graphics and Applications


Humanoid Animations

Smart Skin Deformations

We have developed an efficient method of deforming human skin driven by skeletal motions, by introducing a state-of-the-art sparsity model called nuclear norm minimization.
This project also utilizes the model for interactive classification and synthesis of motion data.
Efficient Dynamic Skinning with Low-Rank Helper Bone Controllers, ACM Transactions on Graphics (SIGGRAPH 2016)
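
A minimal sketch of the low-rank ingredient is given below: singular-value soft-thresholding, the proximal operator of the nuclear norm, applied to a matrix of example deformations. This is only an illustration of the sparsity model, not the controller fitting described in the paper.

# Minimal sketch of singular-value soft-thresholding, the proximal operator of the
# nuclear norm (illustrative only).
import numpy as np

def nuclear_norm_prox(matrix, tau):
    """Soft-threshold the singular values of `matrix` by `tau`.
    Larger tau -> lower-rank approximation."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (u * s_shrunk) @ vt

# Example: compress a matrix of example skin deformations (rows = frames).
# deformations = np.random.rand(200, 60)
# low_rank = nuclear_norm_prox(deformations, tau=1.0)
# print(np.linalg.matrix_rank(low_rank))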

Pose-Timeline for Propagating Motion Edits

Symposium on Computer Animation 2009

A motion editing interface was developed for efficiently and flexibly editing sequences of iterative actions with a few intuitive operations. It visualizes a motion sequence on a summary timeline with editable pose icons, and drag-and-drop operations on the timeline enable intuitive control of temporal properties of the motion such as timing, duration, and coordination.
Pose-Timeline for Propagating Motion Edits, Symposium on Computer Animation (SCA 2009)
keyword: pose-timeline, edit propagation, motion re-timing, motion style transfer

Multilinear Motion Synthesis with Level of Detail Controls

Pacific Graphics 2007

A hybrid algorithm was developed to optimize the reduction size and computational time according to the distance from the camera while maintaining visual quality. It provides a practical tool for creating interactive animations of many characters while ensuring accurate and flexible control at a modest computational cost.
Multilinear motion synthesis with level-of-detail controls, Pacific Graphics 2007
keyword: motion interpolation, multilinear analysis, level-of-detail control

Geostatistical Motion Interpolation

SIGGRAPH 2005

Motion interpolation techniques were developed with statistical prediction of missing data in an arbitrarily definable parametric space. A practical geostatistical technique called universal kriging was introduced for statistically estimating the correlation between the dissimilarity of motions and the distance in the parametric space. It statistically optimizes interpolation kernels for given parameters at each frame, using a pose distance metric to efficiently analyze the correlation.
Geostatistical Motion Interpolation, ACM Transactions on Graphics (SIGGRAPH 2005)
SIGGRAPH demo movie
keyword: motion interpolation, geostatistics, kriging, variogram
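
For illustration, the sketch below performs ordinary kriging of a scalar motion feature over a low-dimensional control parameter with a fixed Gaussian covariance model; the paper's universal kriging with per-frame variogram fitting and pose distance metrics is not reproduced here.

# Compact kriging sketch for interpolating a scalar motion feature over a control
# parameter (illustrative only).
import numpy as np

def gaussian_cov(d, sill=1.0, rng=1.0):
    """Simple Gaussian covariance model; sill and range are assumed constants."""
    return sill * np.exp(-(d / rng) ** 2)

def ordinary_kriging(params, values, query):
    """params: (N, D) sample parameters, values: (N,) sampled feature,
    query: (D,) target parameter. Returns the kriging prediction at `query`."""
    n = len(params)
    d = np.linalg.norm(params[:, None] - params[None, :], axis=-1)
    # Augmented system enforcing that the kriging weights sum to one.
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = gaussian_cov(d)
    a[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gaussian_cov(np.linalg.norm(params - query, axis=-1))
    weights = np.linalg.solve(a, b)[:n]
    return weights @ values

# Example: interpolate a joint angle sampled at a few control parameters.
# params = np.array([[0.0], [0.5], [1.0]]); angles = np.array([10.0, 30.0, 25.0])
# print(ordinary_kriging(params, angles, np.array([0.7])))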

Autonomous, Explorative Motion Generation

IEICE 2004

Explorative synthesis of human motions was developed using hierarchical reinforcement learning for searching plausible poses under end-effector constraints.

Extensive and Efficient Search of Human Movements with Hierarchical Reinforcement Learning, Computer Animation 2002 (CA 2002)

keyword: keyframe animation, hierarchical reinforcement learning

Neural Gait Generator

IEICE 2004

Coupled neural cell models were introduced for generating oscillatory signals to synthesize various gait motions that are self-stabilizing in dynamic environments.

Physiological Gaits Controls with a Neural Pattern Generator, The Journal of Visualization and Computer Animation

keyword: neural oscillator, central pattern generator (CPG)
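
The sketch below integrates a Matsuoka-style two-neuron oscillator, a standard building block of central pattern generators; the parameters, coupling, and output mapping are illustrative choices, not those of the publication.

# Minimal sketch of a Matsuoka-style two-neuron oscillator (illustrative only).
import numpy as np

def matsuoka_oscillator(steps=3000, dt=0.01, tau=0.1, tau_a=1.2,
                        beta=2.5, w_mutual=2.5, tonic=1.0):
    """Integrates two mutually inhibiting neurons with Euler steps;
    returns the oscillatory output signal."""
    x = np.array([0.1, 0.0])   # membrane states (asymmetric start breaks symmetry)
    a = np.zeros(2)            # adaptation (fatigue) states
    outputs = np.zeros(steps)
    for t in range(steps):
        y = np.maximum(x, 0.0)                        # firing rates
        dx = (-x - beta * a - w_mutual * y[::-1] + tonic) / tau
        da = (y - a) / tau_a
        x += dt * dx
        a += dt * da
        outputs[t] = y[0] - y[1]                      # e.g. could drive a hip joint
    return outputs

# Example: the output oscillates and could modulate a gait cycle.
# signal = matsuoka_oscillator(); print(signal[::300])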


Retrieval of Motion Capture Data

Smart Manipulations of Massive Motion Data

SCA 2004

A self-organizing map (SOM) was introduced to visualize massive motion capture data while interactively retrieving motion clips with the map.

Motion Map: Image-based Retrieval and Segmentation of Motion Data, Symposium on Computer Animation (SCA2004)

keyword: motion clip retrieval, self organizing map (SOM)
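
A compact self-organizing-map sketch is shown below: it arranges motion feature vectors on a 2D grid and locates the best-matching cell for a given clip; the descriptors and map size are placeholders, not the features used in the paper.

# Compact self-organizing map (SOM) sketch for arranging motion feature vectors
# on a 2D grid (illustrative only).
import numpy as np

def train_som(features, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """features: (N, D) motion descriptors. Returns grid weights of shape (gy, gx, D)."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    weights = rng.random((gy, gx, features.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gy), np.arange(gx), indexing='ij'), axis=-1)
    for t in range(iters):
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = features[rng.integers(len(features))]
        # Best matching unit and Gaussian neighborhood update.
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (gy, gx))
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

def locate_clip(weights, feature):
    """Return the grid cell whose weight vector best matches a motion feature."""
    return np.unravel_index(np.argmin(((weights - feature) ** 2).sum(-1)), weights.shape[:2])

# Example: map 100 random "motion descriptors" onto an 8x8 grid.
# feats = np.random.rand(100, 16); som = train_som(feats); print(locate_clip(som, feats[0]))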

Generating Concise Rules for Retrieving Human Motions from Large Datasets

Computer Animation and Social Agents 2009

A method for retrieving human motion data was developed with concise retrieval rules based on the spatio-temporal features of motion appearance. It converts a motion clip into a clause-language form that represents geometrical relations between body parts and their temporal relationships. A retrieval rule is then learned from a set of manually classified examples using inductive logic programming.
Generating Concise Rules for Retrieving Human Motions from Large Datasets, Computer Animation and Social Agents (CASA2009)
keyword: motion retrieval, inductive logic programming, appearance feature
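
As a small illustration of the clause-language idea, the sketch below converts a single pose into boolean geometric predicates that rule learning could operate on; the joint names, thresholds, and predicates are invented here and are not those of the paper.

# Small sketch of converting a pose into boolean geometric predicates, the kind of
# clause-language facts a retrieval rule could be learned from (illustrative only).
import numpy as np

def pose_predicates(joints):
    """joints: dict of joint name -> (x, y, z) position with y pointing up.
    Returns a dict of named boolean facts about the pose."""
    j = {k: np.asarray(v, dtype=float) for k, v in joints.items()}
    return {
        'right_hand_above_head': j['right_hand'][1] > j['head'][1],
        'hands_together': np.linalg.norm(j['right_hand'] - j['left_hand']) < 0.15,
        'left_foot_forward': j['left_foot'][2] > j['right_foot'][2] + 0.1,
    }

# Example pose (metres); facts over a whole clip could then feed rule learning:
# pose = {'head': (0, 1.7, 0), 'right_hand': (0.3, 1.9, 0.1),
#         'left_hand': (-0.3, 1.0, 0.0), 'right_foot': (0.1, 0, 0.0),
#         'left_foot': (-0.1, 0, 0.3)}
# print(pose_predicates(pose))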


Behavior Simulations

Psychology-based Crowd Simulation

SCA 2004

A gait simulation was developed based on a psychological model of personal space and group intelligence for very crowded environments.

Psychological Model for Animating Crowded Pedestrians, Computer Animation and Virtual Worlds (Special issue CASA 2005)

keyword: crowded pedestrian, psychological model, personal space, virtual memory, locomotion graph
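
As a toy stand-in for the idea of personal space (not the psychological model of the publication), the sketch below advances pedestrians toward their goals while repelling neighbors who intrude into a personal-space radius.

# Toy crowd step treating personal space as a repulsive zone around each pedestrian
# (illustrative only; a simple social-force-style stand-in).
import numpy as np

def crowd_step(positions, goals, dt=0.1, speed=1.3, personal_space=0.8, push=2.0):
    """positions, goals: (N, 2) arrays. Returns updated positions after one step."""
    to_goal = goals - positions
    desired = speed * to_goal / (np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-6)
    diff = positions[:, None] - positions[None, :]                 # (N, N, 2)
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(positions))  # avoid self term
    # Repulsion grows as neighbours intrude into the personal-space radius.
    intrusion = np.clip(personal_space - dist, 0.0, None)[..., None]
    repulsion = push * (intrusion * diff / dist[..., None]).sum(axis=1)
    return positions + dt * (desired + repulsion)

# Example: two pedestrians walking toward swapped goals.
# pos = np.array([[0.0, 0.0], [3.0, 0.1]]); goals = pos[::-1].copy()
# for _ in range(40): pos = crowd_step(pos, goals)
# print(pos)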

Web-based Behavior Simulation

SIGGRAPH 2003 Web Graphics

A Web-based system for simulating human behaviors was developed with motion capture data and XML-based behavior scenarios. Java-based middleware was developed with financial support from the Mitou Software Project of IPA.

Extensible Task Simulation with Motion Archive, Transactions on Information and Systems of IEICE
Extensible Behavior Simulation with Motion Archive, SIGGRAPH2003 Web Graphics

keyword: web3D, xml, simulation middleware