TECHNOLOGY DESCRIPTION
Character rigging is the process by which animators fit a skeleton to a 3D model so that the model can be posed and animated. Currently, animators must manually define the skeleton's joints, specify how those joints are connected, and define how the model's body parts move with the skeleton. This time-intensive process can take hours or days for a single character. With the rapidly growing demand for animation-ready characters and avatars in games, film, mixed reality, and social media, character rigging is presently a major bottleneck to scaling the creation of animated characters.
The inventors have developed RigNet, software that provides an end-to-end automated method for producing animation rigs from input character models. By automating character rigging, RigNet reduces rigging time from hours or days to minutes. RigNet is based on a deep learning architecture trained on a large and diverse collection of rigged models, including their meshes, skeletons, and corresponding skin weights. Unlike prior-art methods, which simply fit pre-defined template skeletons to 3D models and produce results of insufficient quality, RigNet predicts both a skeleton tailored to the input model and a skinning that matches animator expectations. Finally, because RigNet requires only a 3D model as input, animators do not need to be trained rigging experts, as is the case today.
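To illustrate what a rig (a skeleton plus per-vertex skin weights) is used for downstream, the short sketch below shows standard linear blend skinning on a toy two-joint "arm". This is generic background code written for this brief, not RigNet's implementation or API; the mesh, joint positions, weights, and function names are all illustrative.

# Toy illustration of how a rig (skeleton + skin weights) drives a mesh,
# using standard linear blend skinning. Not RigNet code; all values are
# made up for illustration.
import numpy as np

# A tiny "arm" mesh: 4 vertices along the x-axis.
vertices = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])

# A two-joint skeleton: shoulder at x=0, elbow at x=1.5.
joints = np.array([[0.0, 0, 0], [1.5, 0, 0]])

# Per-vertex skin weights (each row sums to 1): how strongly each joint
# influences each vertex. This is the kind of output an automated
# rigging method must predict.
weights = np.array([[1.0, 0.0],
                    [0.9, 0.1],
                    [0.2, 0.8],
                    [0.0, 1.0]])

def rotation_z(deg):
    """3x3 rotation matrix about the z-axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Pose: keep the shoulder fixed, bend the elbow by 45 degrees.
joint_rotations = [rotation_z(0), rotation_z(45)]

def skin(vertices, joints, weights, rotations):
    """Linear blend skinning: each deformed vertex is the weighted sum of
    where each joint's rigid transform (rotation about that joint) would
    carry it."""
    deformed = np.zeros_like(vertices)
    for j, (pivot, R) in enumerate(zip(joints, rotations)):
        moved = (vertices - pivot) @ R.T + pivot  # rotate about joint j
        deformed += weights[:, [j]] * moved       # blend by skin weight
    return deformed

print(skin(vertices, joints, weights, joint_rotations))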
ADVANTAGES
• Automates character rigging, reducing the time to create animated characters from hours or days to minutes
• Requires no specialized training to use, unlike the current state of the art for character rigging
• Creates high-quality, animation-ready characters, unlike other automated prior-art methods
• Applicable to a wide diversity of character types
APPLICATIONS
• 3D animation and avatars
• Video games
• Film
• Mixed reality
• Social media
ABOUT THE LEAD INVENTOR
Professor Evangelos Kalogerakis' research deals with the development of graphics+vision algorithms and techniques, empowered by AI/ML, that help people easily create and process representations of the 3D visual world. He is particularly interested in algorithms that generate 3D models of objects, scenes, and animations, and that intelligently process 3D scans, geometric data, collections of shapes, images, and video. His research is supported by NSF awards and donations from Adobe. He is currently an Associate Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst (UMass Amherst), where he leads a group of students working on graphics+vision. He joined UMass Amherst in 2012. He was a postdoctoral researcher at Stanford University from 2010 to 2012 and obtained his PhD from the University of Toronto in 2010. He has served on technical paper committees for CVPR, ICCV, SIGGRAPH, SIGGRAPH Asia, Eurographics, and the Symposium on Geometry Processing. He currently serves as an Associate Editor on the editorial boards of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and IEEE Transactions on Visualization & Computer Graphics (TVCG). He co-chaired the Shape Modeling International (SMI) conference in 2018, and was listed by Tsinghua's AMiner academic network as one of the 100 most cited computer graphics scholars in the world between 2010 and 2020.
AVAILABILITY:
Available for Licensing and/or Sponsored Research
DOCKET:
UMA 20-049
INTELLECTUAL PROPERTY STATUS:
Copyright
NON-CONFIDENTIAL INVENTION DISCLOSURE
LEAD INVENTOR:
Evangelos Kalogerakis
CONTACT: