Tutorials: how to use the plugin

Video Statistics and Information

Captions
Breaking news: AR/VR Lab has launched a new version of the MetaHuman SDK plugin with text to speech, audio to lip sync (including streaming), and a chatbot, now working on Unreal Engine 5. My colleague Hadley is going to show each method in action. Stay tuned, it is going to be epic. Let's set up a project. Hadley, can you show us how to do it?

Hi, my name is Hadley. I am a MetaHuman made in MetaHuman Creator and animated by the lip sync plugin that I am going to present. First of all, I create a new project for the demonstration. Going to the Bridge tab, I choose a MetaHuman and add it to the project. Then I check that the MetaHuman SDK plugin is enabled. It is on, so we are ready to start working.

Next on the screen is text to speech. To make our MetaHuman speak, we need an audio file, and here I will show a few ways to generate it. First I'll demonstrate how to get the voice recording from the editor. To do so, we go to the Content Browser, choose the right folder, and by right-clicking select the option to create a text to speech asset. Then I select the TTS engine and the TTS voice, type the text I want my MetaHuman to pronounce, and click the Create button. After that, an audio asset appears in my folder. All done. You can do this not only from the editor but also from Blueprints; let me show you how.

I open the Level Blueprint, go to the MetaHuman SDK subsystem, and select the text to speech method. Then we fill in the parameters in the input tab and set the output. If the operation is successful, we get an audio file generated from the text. It's time we checked it: "Click like and subscribe." Nailed it.

Moving on to audio to lip sync. Now that we have the audio file, we can generate an animation for it, and I will show several ways this can be done. First, let's get a face animation in the editor. In the Content Browser, find the audio asset you would like to use for the animation; in my case, I will go for the one we got in the previous episode. Then, right-clicking on the sound wave, I go to the sound wave actions and select "Create lip sync animation". The "Generate lip sync animation" window pops up, where we specify the following parameters: the skeleton to which the animation will be attached, and the way this animation will be mapped to the character. Here I use the skeleton of our MetaHuman, and as the mapping mode I select MetaHuman. The rest is on the plugin. That is it; let's see what we got.

Now I am going to repeat the result on our MetaHuman. I select my MetaHuman, apply our animation to its face, and add the corresponding audio file. Watch it: "Click the like and subscribe button."

We can also generate the animation at runtime. Before we start, I undo the previous actions and open the Level Blueprint. Here I open the MetaHuman SDK subsystem and select the ATL (audio to lip sync) method. Then I need to set the input parameters and the output; you can see it on the screen. A few more clicks and let's play the result. And it didn't work, because I forgot to select the audio file to generate the animation from. Here we go: "Click the like and subscribe button." Nice.
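The Blueprint steps above amount to a simple two-stage data flow: text goes into a text to speech request that returns an audio asset, and that audio goes into an audio to lip sync (ATL) request that returns a face animation. The C++ below is only an illustrative stand-in for that flow; the types and function names (AudioAsset, FaceAnimation, TextToSpeech, AudioToLipsync, "DemoVoice") are hypothetical and are not the plugin's real API. In the engine, these steps are the plugin's Blueprint nodes on the MetaHuman SDK subsystem.

```cpp
// Minimal sketch of the text -> audio -> face animation flow described above.
// All types and function names here are hypothetical stand-ins, not the
// MetaHuman SDK plugin's actual C++ API.
#include <iostream>
#include <string>

struct AudioAsset    { std::string source; };       // stand-in for a generated sound wave
struct FaceAnimation { std::string description; };  // stand-in for a face animation asset

// Stand-in for the "Text To Speech" operation: text + voice -> audio asset.
AudioAsset TextToSpeech(const std::string& text, const std::string& voice) {
    return {voice + ": " + text};
}

// Stand-in for the "Audio To Lipsync" (ATL) operation:
// audio + mapping mode (e.g. "MetaHuman") -> face animation.
FaceAnimation AudioToLipsync(const AudioAsset& audio, const std::string& mappingMode) {
    return {mappingMode + " animation for [" + audio.source + "]"};
}

int main() {
    // The runtime flow shown in the Level Blueprint, expressed linearly.
    AudioAsset audio   = TextToSpeech("Click like and subscribe", "DemoVoice");
    FaceAnimation anim = AudioToLipsync(audio, "MetaHuman");
    // In the engine you would now apply `anim` to the MetaHuman's face
    // and play the audio on the character's audio component.
    std::cout << anim.description << "\n";
    return 0;
}
```

This same chain of calls is what the combo request, shown later in the video, collapses into a single operation.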
Hadley, what do you have on audio to lip sync streaming? Previously I demonstrated lip sync generation in Blueprints. As you may have noticed, there is a pause between the start of the level and the start of playback while the request is executed. Keep in mind that the length of this pause correlates with the size of the sound: the bigger the audio you use, the longer the pause will be. In order to bypass this limitation, we have developed lip sync streaming technology. It allows you to send an audio file and start getting chunks of animation as they are generated, without waiting for the entire file to be processed. Let me show you how it works. As always, we open the Level Blueprint and start working there.

The first step duplicates the audio to lip sync episode: we apply the same input parameters. Next we need to make the chunks play sequentially, and I'll show you a quick way to do this (a sketch of the same sequencing logic follows the transcript). I take a buffer and pull out the current chunk with index 0. A few clicks and we're ready to play the first chunk as soon as the buffer reports that its data has appeared, so I take this data and run it. After that we move on to the next chunk. To check whether it is ready, we take the expected length and divide it by the chunk size to get the length of the chunk sequence in the buffer; mind that an incomplete chunk is also a chunk. We wait until the end of the first chunk to play the second one.

Let's see what we got. Our application makes a request to get lip sync. While the request is being processed, we subscribe to the buffer so that as soon as it accumulates enough data, it lets us know. After that, we start playing the data sequentially, starting from the first chunk, then switch to the next piece of data and repeat as long as there is data in the buffer. If that's clear, let's see the result: "Click the like and subscribe button."

Is there any chatbot to use? Let's find out. The plugin also allows you to use the brain of a chatbot, and I will now demonstrate how. When we start the level, we add a chat widget to the screen; you can find it in the content of the plugin. After that's done, we assign an event for sending a message to the chat. Then we use the MetaHuman SDK subsystem and make a chatbot request. If the request is successful, we play the result using the text to speech and audio to lip sync operations; how to apply these methods was described in the previous episodes. All done, let's have a look: "Good day to you." Beautiful.

What if I want it all? Hadley has a solution. As you can see from the previous episode, it took us a lot of steps to get it done. In order to optimize the process, let's use the combo request. Here we have parameters for the ATL, TTS, and chatbot operations, and you can choose the mode you need. As always, we start by setting the input parameters; for the demonstration I am going for the chat + TTS + ATL mode for my MetaHuman. In this case there is no need to select a sound file: it will be inserted automatically after the text to speech generation. The same works for the text request, as we get it from the chatbot result. As you can see, instead of using such a huge structure we can go for this single option. All done, let's check what we got: "Salam" (that's hello in Farsi), "how can I help?" Amazing job.

A custom rig is for dessert, and Hadley has the recipe. Now I will show you how our plugin works with a custom rig. As an example, I am going to use a 3D model which ships with the plugin; it is a FACS-based face rig, and you can see the list of blend shapes at the right of the screen. As the MetaHuman SDK plugin works with FACS rigs, we can apply it not only to MetaHumans but also to other 3D heads. A few more steps: we grab our head and play the animation on it. All set, let's see how it works: "Hi, it's really good to hear from you. I hope you're doing well." Brilliant.
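As promised in the streaming section above, here is a minimal sketch of the chunk sequencing it describes: the total number of chunks is the expected audio length divided by the chunk size (rounded up, since an incomplete chunk is also a chunk), and each chunk is played in order as soon as the buffer has it, with the next one starting only after the previous one ends. The numbers, the std::deque buffer, and the printouts are assumptions for illustration only; the real Blueprint uses the plugin's own buffer object and animation playback nodes.

```cpp
// Hedged illustration of the streaming playback idea: drain a buffer of
// animation chunks in order while new chunks keep arriving.
#include <cmath>
#include <deque>
#include <iostream>
#include <string>

int main() {
    // Assumed example values, not taken from the plugin.
    const double expectedLengthSec = 9.5;  // total audio length reported by the request
    const double chunkLengthSec    = 2.0;  // duration covered by one animation chunk

    // Expected length divided by chunk size gives the chunk count;
    // the incomplete trailing chunk still counts, hence the ceil.
    const int totalChunks =
        static_cast<int>(std::ceil(expectedLengthSec / chunkLengthSec));

    std::deque<std::string> buffer;  // chunks are appended here as they are generated
    int nextChunk = 0;               // index of the next chunk to play, starting at 0

    // Simulate chunks arriving one by one while we drain the buffer in order.
    for (int produced = 0; produced < totalChunks; ++produced) {
        buffer.push_back("chunk " + std::to_string(produced));

        // Play whatever the buffer has accumulated, in order; a chunk starts
        // only after the previous one has finished (here it is just printed).
        while (!buffer.empty()) {
            std::cout << "playing " << buffer.front()
                      << " (" << (nextChunk + 1) << "/" << totalChunks << ")\n";
            buffer.pop_front();
            ++nextChunk;
        }
    }
    return 0;
}
```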
Info
Channel: MetaHumanSDK
Views: 41,056
Id: xo474w8-4ac
Length: 19min 15sec (1155 seconds)
Published: Mon Apr 10 2023