Facial AR Remote: Animating with AR

With the release of ARKit and the iPhone X, paired with Unity, developers have an easy-to-use set of tools to create beautiful and expressive characters. This opened the door to exploring the magic of real-time puppeteering for the upcoming animated short "Windup," directed by Yibing Jiang.
Unity Labs and the team behind “Windup” have come together to see how far we could push Unity’s ability to capture facial animation in real time on a cinematic character. We also enlisted the help of Roja Huchez of Beast House FX for modeling and rigging of the blend shapes to help bring the character expressions to life.
What the team created is Facial AR Remote, a low-overhead way to capture a performance with a connected device and stream it directly into the Unity editor. We found the Remote's workflow useful not just for animation authoring, but also for character and blend shape modeling and rigging, creating a streamlined way to build your own animoji- or memoji-style interactions in Unity. Developers can iterate on the model in the editor without needing to build to the device, removing time-consuming steps from the process.
Why build the Facial AR Remote
We saw an opportunity to build new animation tools for film projects, opening up a future of real-time animation in Unity. There was also a "cool factor" in using AR tools for authoring, and an opportunity to continue to push Unity's real-time rendering. As soon as we had the basics working, with data coming from the phone into the editor, our team and everyone around our desks could not stop having fun puppeteering our character. We saw huge potential for this kind of technology. What started as an experiment soon proved itself both fun and useful, and the project quickly expanded into the current Facial AR Remote and its feature set.
The team set out to expand the project with Unity's goal of democratizing development in mind. We wanted the tools and workflows around AR blend shape animation to be easier to use and more accessible than traditional methods of motion capture. The Facial Remote let us build out tooling for iterating on blend shapes within the editor, without needing to create a new build just to check mesh changes on the phone. This means a user can take a capture of an actor's face and record it in Unity, and that capture can then serve as a fixed point of reference to iterate on and update the character model, or to re-target the animation to another character, without having to redo capture sessions with the actor. We found this workflow very useful for dialing in expressions on our character and refining the individual blend shapes.
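To make the record-and-replay idea concrete, below is a minimal sketch of what recording the stream to disk could look like in C#. The frame layout and the class and method names are hypothetical illustrations, not the package's actual format; the key point is that frames keep their original device timestamps so a capture can be replayed faithfully against an updated rig.

using System.IO;

// Minimal sketch of recording a face-capture stream to disk so it can be
// replayed later against an updated character. The layout below is an
// assumption for illustration; Facial AR Remote defines its own format.
public class StreamRecorder
{
    const int k_BlendShapeCount = 52; // ARKit exposes 52 blend shape coefficients

    BinaryWriter m_Writer;

    public void BeginRecording(string path)
    {
        m_Writer = new BinaryWriter(File.Open(path, FileMode.Create));
    }

    // Called once per frame received from the device.
    public void RecordFrame(double deviceTimestamp, float[] blendShapeWeights)
    {
        m_Writer.Write(deviceTimestamp); // device time, not editor time
        for (var i = 0; i < k_BlendShapeCount; i++)
            m_Writer.Write(blendShapeWeights[i]); // 0..1 coefficient per shape
    }

    public void EndRecording()
    {
        if (m_Writer != null)
        {
            m_Writer.Close();
            m_Writer = null;
        }
    }
}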
How the Facial AR Remote works
The Remote is made up of a client phone app and a stream reader acting as the server in the Unity editor. The client is a light app that makes use of the latest additions to ARKit and sends that data over the network to the Network Stream Source on the Stream Reader GameObject. Using a simple TCP/IP socket and a fixed-size byte stream, we send every frame of blend shape, camera, and head pose data from the device to the editor. The editor then decodes the stream and updates the rigged character in real time. To smooth out jitter due to network latency, the stream reader keeps a tunable buffer of historic frames for when the editor inevitably lags behind the phone. We found this to be a crucial feature for preserving a smooth look on the preview character while staying as close as possible to the real actor's current pose. In poor network conditions, the preview will sometimes drop frames to catch up, but all data is still recorded with the original timestamps from the device.
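As a rough illustration of this design, the sketch below shows an editor-side reader that pulls fixed-size frames off a TCP NetworkStream and keeps a small, tunable buffer. The frame size and field counts are assumptions made for the example, not the Remote's real wire format.

using System.Collections.Generic;
using System.Net.Sockets;

// Sketch of the editor-side reader: pull fixed-size frames off a TCP socket
// and keep a small buffer so playback stays smooth when the editor lags
// behind the phone. Sizes and layout are illustrative assumptions.
public class NetworkStreamSource
{
    // Assumed layout: timestamp + 52 blend shapes + camera pose (7 floats)
    // + head pose (7 floats), all packed into one fixed-size frame.
    const int k_FrameSize = sizeof(double) + (52 + 7 + 7) * sizeof(float);

    readonly Queue<byte[]> m_FrameBuffer = new Queue<byte[]>();

    public int bufferSize = 4; // tunable: larger hides more jitter but adds latency

    public void ReadAvailable(NetworkStream stream)
    {
        while (stream.DataAvailable)
        {
            var frame = new byte[k_FrameSize];
            var read = 0;
            while (read < k_FrameSize) // frames are fixed-size, so read exactly one
            {
                var n = stream.Read(frame, read, k_FrameSize - read);
                if (n == 0)
                    return; // connection closed mid-frame
                read += n;
            }

            m_FrameBuffer.Enqueue(frame);
            while (m_FrameBuffer.Count > bufferSize)
                m_FrameBuffer.Dequeue(); // behind the phone: drop oldest frames to catch up
        }
    }

    public bool TryGetNextFrame(out byte[] frame)
    {
        if (m_FrameBuffer.Count > 0)
        {
            frame = m_FrameBuffer.Dequeue();
            return true;
        }

        frame = null;
        return false;
    }
}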
On the editor side, we use the stream data to drive the character for preview as well as for baking animation clips. Since we save the raw stream from the phone to disk, we can continue to play this data back on a character as we refine the blend shapes. And since the saved data is just a raw stream from the phone, we can even re-target the motion to different characters. Once you have captured a stream you're happy with, you can bake it to an animation clip on a character. This is great because you can then use that clip like any other animation in Unity, driving a character through Mecanim, Timeline, or any of the other ways animation is used.
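Unity animates SkinnedMeshRenderer blend shapes through properties named "blendShape.<shapeName>", so baking comes down to laying one AnimationCurve per shape into a clip. The sketch below shows that idea; the Frame struct and ClipBaker helper are illustrative stand-ins, not the package's actual baking code.

using UnityEngine;
using UnityEditor;

// Sketch of baking recorded frames into a reusable AnimationClip asset.
public static class ClipBaker
{
    // Illustrative stand-in for one frame of recorded stream data.
    public struct Frame
    {
        public float time;      // seconds from the start of the capture
        public float[] weights; // one 0..1 coefficient per blend shape
    }

    public static void Bake(Frame[] frames, string[] shapeNames, string assetPath)
    {
        var clip = new AnimationClip();
        for (var shape = 0; shape < shapeNames.Length; shape++)
        {
            var keys = new Keyframe[frames.Length];
            for (var i = 0; i < frames.Length; i++)
                keys[i] = new Keyframe(frames[i].time, frames[i].weights[shape] * 100f); // renderer weights run 0..100

            // Blend shapes are animated via "blendShape.<name>" properties.
            clip.SetCurve("", typeof(SkinnedMeshRenderer),
                "blendShape." + shapeNames[shape], new AnimationCurve(keys));
        }

        AssetDatabase.CreateAsset(clip, assetPath); // editor-only: saves a .anim asset
    }
}

Once baked, the clip behaves like any hand-authored animation and can be wired into an Animator state or a Timeline track.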
The Windup animation demo
With the Windup rendering tech demo previously completed, the team was able to use those high-quality assets to start our animation exploration. Since we got a baseline up and running rather quickly, we had a lot of time to iterate on the blend shapes using the tools we were developing. Jitter, smoothing, and shape tuning quickly became the major areas of focus for the project. We improved the jitter solve by working out the connection between frame rate and lag in frame processing, and by removing camera movement from the playback. Removing the ability to move the camera focused users on capturing the blend shapes, and let us mount the phone in a stand.
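For the smoothing work, one simple approach, shown here purely as an assumption rather than the package's actual filter, is to exponentially smooth each incoming coefficient toward its newest sample.

using UnityEngine;

// One simple way to damp per-frame jitter: exponentially smooth each blend
// shape coefficient. This is an illustrative filter, not the Remote's own.
public class BlendShapeSmoother : MonoBehaviour
{
    [Range(0f, 1f)]
    public float smoothing = 0.8f; // higher = smoother but laggier

    float[] m_Smoothed;

    public float[] Smooth(float[] latest)
    {
        if (m_Smoothed == null)
            m_Smoothed = (float[])latest.Clone();

        for (var i = 0; i < latest.Length; i++)
            m_Smoothed[i] = Mathf.Lerp(latest[i], m_Smoothed[i], smoothing); // keep 'smoothing' of the old value

        return m_Smoothed;
    }
}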
Understanding the blend shapes and getting the most out of the blend shape anchors in ARKit is what required the most iteration. It is difficult to understand the minutiae of the different shapes from the documentation, and much of the final expression comes from the stylization of the character and how the shapes combine in expected ways. We found that shapes like the eye and cheek squints and the mouth stretch were improved by limiting the influence of the blend shape changes to specific areas of the face. For example, the cheek squint should have little to no effect on the lower eyelid, and the lower eyelid in the squint should have little to no effect on the cheek. It also did not help that we initially missed that the mouthClosed shape is a corrective pose, meant to bring the lips closed while the jawOpen shape is at 100%.
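That corrective relationship is easy to respect in code once you know about it. The hypothetical component below scales the mouthClosed weight by the current jawOpen value before applying it to the Skinned Mesh Renderer; the wiring and shape names are assumptions about a particular rig, not the package's implementation.

using UnityEngine;

// Hypothetical sketch: apply jawOpen and its mouthClosed corrective to a rig.
public class MouthCorrectiveDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;

    int m_JawOpen;
    int m_MouthClosed;

    void Awake()
    {
        var mesh = face.sharedMesh;
        m_JawOpen = mesh.GetBlendShapeIndex("jawOpen");         // names must match your mesh
        m_MouthClosed = mesh.GetBlendShapeIndex("mouthClosed");
    }

    // Coefficients arrive from ARKit in the 0..1 range.
    public void Apply(float jawOpen, float mouthClosed)
    {
        face.SetBlendShapeWeight(m_JawOpen, jawOpen * 100f);

        // The corrective only means anything in proportion to how open the
        // jaw is, since it was sculpted against jawOpen at 100%.
        face.SetBlendShapeWeight(m_MouthClosed, mouthClosed * jawOpen * 100f);
    }
}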
Using information from the Skinned Mesh Renderer to look at the values that made up an expression on any frame, then under- or over-driving those values, really helped us dial in the blend shapes. We could quickly over- or under-drive the current blend shapes and determine whether any of them needed to be modified, and by how much. This helped with one of the hardest things to do: getting the character to hit a key pose correctly, like the way we wanted the little girl to smile. Being able to see which shapes make up a given pose was a big help; in this case, it was the amount of left and right mouth stretch working with the smile shapes that gave the final result. We found it helps to think of the shapes the phone provides as little building blocks, not as face poses a human could make in isolation.
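The over- and under-driving trick can be as simple as a per-shape gain applied before the weights reach the Skinned Mesh Renderer. The component below is a hypothetical sketch of that idea; tuning the gains array in the Inspector while a capture plays back quickly shows which sculpted shapes need rework.

using UnityEngine;

// Hypothetical sketch of over/under-driving captured blend shape values to
// judge whether the sculpted shapes themselves need adjusting.
public class BlendShapeTuner : MonoBehaviour
{
    public SkinnedMeshRenderer face;
    public float[] gains; // one per shape: 1 = as captured, >1 overdrive, <1 underdrive

    public void Apply(float[] coefficients)
    {
        for (var i = 0; i < coefficients.Length; i++)
        {
            var driven = Mathf.Clamp01(coefficients[i] * gains[i]);
            face.SetBlendShapeWeight(i, driven * 100f); // renderer weights run 0..100
        }
    }
}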
At the very end of art production on the demo, we wanted to try an experiment to improve some of the animation on the character. Armed with our collective understanding of the blend shapes from ARKit, we tried modifying the base neutral pose of the character. Because of the little girl's stylization, we suspected that the base pose had the eyes too wide and a little too much smile on the face. This left too small a delta between eyes-wide and the base pose, and too wide a delta between the base pose and eyes-closed. The effect of the squint blend shapes also needed to be better accounted for: it turns out that, for the people we tested on, the squint sits at roughly 60-70% whenever they close their eyes. The change to the neutral pose paid off, and along with all the other work it makes for the expressive and dynamic character you see in the demo.
The future
Combining Facial AR Remote with the rest of the tools in Unity, there is no limit to the amazing animations you can create! Soon anyone will be able to puppeteer digital characters, be it kids acting out and recording their favorite characters and then sharing with friends and family, game streamers adding extra life to their avatars, or professionals and hobbyists finding new avenues for making animated content for broadcast. Get started by downloading Unity 2018 and checking out the setup instructions on . The team and the rest of Unity look forward to the artistic and creative uses of Facial AR Remote our users will create.
