Once you have purchased the plugin, you will receive a link to download the zip file. You need to extract the contents of the zip file to a folder on your computer.
To install the plugin, copy the entire folder that matches your Cinema 4D version and paste it into the plugins folder of your Cinema 4D installation. Usually you install it into the plugins folder inside the preferences folder of your Cinema 4D version, or you designate a different location by adding a search path in the preferences settings.
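For orientation (the exact folder names vary per installation): on Windows the preferences plugins folder is typically located at C:\Users\<username>\AppData\Roaming\Maxon\<Cinema 4D version>\plugins, on macOS at ~/Library/Preferences/Maxon/<Cinema 4D version>/plugins. You can also open the preferences folder directly from within Cinema 4D via the "Open Preferences Folder" button in the Preferences dialog.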
There may be only one LipTalk folder system-wide in the plugins folder.
For the newest version of Cinema 4D, 2025, you can simply use the 2024 folder. It was tested with version 2025 and is compatible.
Now you can start Cinema 4D.
Since LipTalk is a tag plugin, it does not appear in the Extensions menu. You can find it in the Object Manager under Tags/Extensions/LipTalk Tags.
LipTalk consists of two plugins: LipTalk and LipTalk Jaw Solver
LipTalk Tag
LipTalk can work with custom characters or directly with characters exported from DAZ Studio.
Custom Characters:
LipTalk controls the Pose Morph tag. To learn how to prepare your custom model for LipTalk, read this.
DAZ Characters:
From DAZ Studio you can export all the needed viseme morphs and other morphs to Cinema 4D.
To learn how to export a DAZ character, go to: How to export from DAZ
There is a compatibility issue between the DAZ to Cinema 4D Bridge and Cinema 4D versions prior to 2023: the bridge fails to export the poses and visemes. If you contact DAZ support, they may be able to help you with an older version of the bridge that works in combination with Cinema 4D R23.
First, create an ordinary Null object and add the LipTalk tag to it. You could also attach the tag to the object that carries the Pose Morph tag, but since we want to protect the original object from any unwanted changes or damage, using a dedicated Null object is the cleaner and safer method.
If you already have a prepared Pose Morph tag with your visemes, you can drag it into the Pose-Morph Tag link of the plugin.
If the Null object with the LipTalk tag does not have any User Data for the visemes, the "Add User-Data" button is available.
Just click it to add the corresponding User Data entries.
These User Data entries are not strictly necessary, but you will need them later if you want to bake the lip syncs and also use them in combination with Motion Clips, so it is best to add them now.
The Add Poses button simply adds empty, correctly named poses for the visemes to the Pose Morph tag, to save a little time.
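If you like to script such setup steps, here is a minimal Cinema 4D Python sketch (run from the Script Manager) that creates the control Null and checks whether the selected character already carries a Pose Morph tag. It is only a convenience sketch: the LipTalk tag itself and the link to the Pose Morph tag are still set up manually as described above, because the plugin's internal IDs are not documented here.

import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    # Create a dedicated control Null so the original geometry stays untouched.
    null = c4d.BaseObject(c4d.Onull)
    null.SetName("LipTalk_Control")
    doc.InsertObject(null)
    # Check whether the selected character object already carries a Pose Morph tag;
    # that is the tag you later drag into the LipTalk Pose-Morph Tag link field.
    character = doc.GetActiveObject()
    if character is not None:
        morph_tag = character.GetTag(c4d.Tposemorph)
        if morph_tag:
            print("Pose Morph tag found on", character.GetName())
        else:
            print("No Pose Morph tag on", character.GetName())
    c4d.EventAdd()

if __name__ == '__main__':
    main()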
To learn how to prepare a custom character for LipTalk, read here.
The latest DAZ to Cinema 4D Bridge seems to have a bug with Cinema 4D versions older than 2023: it fails to export the poses and the special Null object controller. You can attempt to export the character as FBX for older Cinema 4D versions, but the poses will not be exported either. You can try to contact DAZ support and ask for an older, working version of the bridge.
To use custom characters in C4D, you need to follow my tutorial on how to set up Custom Characters.
To animate the visemes or other morphs that you have exported from DAZ Studio, you need to use the special Null object called "CharacterName Morph Controller Group". Since DAZ characters are more complex, they have more Skin tags and more Pose Morph tags. This Null object effectively controls all of the Pose Morph tags and has User Data sliders for each viseme and morph that you have exported.
To animate DAZ characters, you need to add the LipTalk tag to this special Null object, because, as mentioned above, the Pose Morph sliders are controlled via the User Data of this Null object and the special Xpresso expression.
Then search for the Pose Morph tag that sits on the Genesis9.Shape object. This Pose Morph tag controls all other Pose Morph tags of the DAZ character via a second Xpresso expression. Drag and drop it into the LipTalk plugin.
In principle you could delete this special controller Null object, since the Pose Morph sliders of the other objects are controlled via this one Pose Morph tag and the second Xpresso expression. However, if you later want to bake the animation and use Motion Clips, that only works with this controller object.
There is a minor issue with this Xpresso expression: it affects performance and causes lag during playback. To check the animation, you can either render a preview or uncheck Materials in the Options menu of your editor view.
Most of the older DAZ Characters are missing the "EE" Viseme. Newer ones have an additional "Vis" in the names.
Click the "Rename DAZ" Button.
For older characters it renames the "EH" User Data and the corresponding pose in the Pose Morph tag to "EE".
For newer Genesis characters it removes the "Vis" prefix from the names and automatically renames the User Data of the visemes. If the Pose Morph tag is linked to the LipTalk plugin, the pose morphs will also be renamed.
This is an essential step to get LipTalk running with DAZ characters.
From now on, all of the following explanations will work for both custom characters and DAZ characters.
This is the bake feature of LipTalk. You need to specify the start and end frames for the animation. Then press Bake, and LipTalk will generate keyframes for the User Data of the Null object. You can only bake if the correct User Data entries are present. If you have not yet added them, you can do so with the Add User-Data button (2); the required entries will be added with the correct names. DAZ characters already have some User Data from the visemes export.
After the baking process is done, you can disable the Realtime checkbox (5). This will make the plugin use the keyframes to control the Pose-Morph sliders.
You can Undo the bake or simply press Delete. Pressing Delete will delete all keyframes on this track.
It is also possible to bake, for instance, the first transcript, then the second one, and so forth.
You can use the LipTalk tag to make your character's lips move according to the User Data. The User Data entries are created when you press the Add User-Data button in the LipTalk interface; they have the correct names and values for the lip sync. If you have baked the lip sync, the User Data holds the keyframes, and you can choose between Realtime mode (5) and keyframe/Motion Clip mode. Realtime mode animates the lips in real time based on the transcript, or on the text if you are in Text Mode (6), while keyframe/Motion Clip mode uses the baked keyframe animations or Motion Clips.
If you need to create poses/visemes for your custom character, you can use the Add Poses button. This button automatically generates poses with the correct names and assigns them to the Pose Morph tag. This only works if you have linked the Pose Morph tag in the Pose-Morph Tag link field (7). It creates all the necessary poses with the correct naming, so you can start sculpting right away.
If you already have all the poses as separate geometry, you can name each mesh according to the LipTalk visemes and drag them all together into your Pose Morph tag. Cinema 4D will give the poses the correct names and link the meshes, so you don't need to press Add Poses. Alternatively, press Add Poses and drag each mesh manually into the link field in the Advanced group of its pose.
Some DAZ characters older than Genesis 8 may export User Data and morphs with incompatible names like "Vis AA"; the plugin requires "AA" only. You can use the Rename DAZ button to fix this. It will rename the User Data of the DAZ character controller Null object and of the main Pose Morph tag, if linked (7).
For further explanations about all needed poses and morphs go to the section "Custom Characters".
Realtime mode and keyframe/motion-clip mode are two options that you can choose by toggling the Realtime checkbox. In Realtime mode, the plugin uses the transcript or the text input (if you are in text-mode (6)) to create the animation. In keyframe/motion-clip mode, the plugin uses the keyframes that are stored in the user-datas of the Null-Object. You can also make your own motion-clips from the keyframes and save them for later use. The plugin can also work with motions that you have already created.
The Text Mode checkbox allows you to switch between audio/transcript mode and text mode. In text mode, you can see some extra GUI Elements that let you enter some text.
To control the lip movements of your character, you need to link a Pose Morph Tag that contains the blendshapes for each viseme. To do this, simply drag and drop the Pose Morph Tag from the Object Manager into the Pose-Morph-Tag link.
To use audio-data as input, you need to provide a transcript file that was generated by the LipTalkAudioConverter. If you specified a path to it, you can load it by clicking the "Load Transcript" button (9).
If you click the Load Transcript button, the transcript is loaded from the path in the Transcript (8) link field. Every time you want to load or reload a transcript, you have to click this button. If you load a scene containing LipTalk, the transcript is loaded automatically. Loading different transcripts by animating this parameter is currently not available. If you want to use different transcripts, you have to bake each one (1) or use a second LipTalk tag with a different Playback Start and End range (24).
The Advanced Mode checkbox is only available if you converted the audio file in Advanced Mode using LipTalkAudioConverter. Advanced Mode is only recommended if the speech recordings contain long vowels. These transcripts additionally contain all the information about the amplitudes of the audio file and therefore have a much larger file size. If you converted in Advanced Mode, you can use both modes and toggle between the two algorithms in LipTalk. The underlying algorithm in the LipTalk plugin is called the "Frame-Range-Method" and was developed by 3Deoskill. In Normal Mode, the lips are closed immediately after a phoneme is detected, using the Release parameter. In Advanced Mode, the lips remain in position as long as the internal algorithm still associates the amplitude with the phoneme; once the signal falls below a certain level, the Release parameter kicks in and closes the lips. The Threshold (16) parameter can be used to influence this threshold value, similar to a noise gate. The parameter works dynamically and automatically adjusts to the detected original level. Internally, the audio signal is of course smoothed to avoid irregularities.
The Mode menu allows the user to select either the Advanced or the Comic mode. The Advanced mode requires the user to create or export (DAZ) all the necessary visemes for their characters, such as "AA, EE, IY, OW, UW, SH, M, T, F, W". This mode produces the most realistic results. The Comic mode only needs a few viseme poses for cartoon-like characters, such as "AA, EE, OW, M, T". The Custom Characters section explains how to set up these modes.
With the Sync Start value you can determine when your animation should start. If your audio is not in sync, you can change the value here or in the audio settings (21).
The Speed value is only available in Text Mode (6). It is a factor: the higher the value, the slower the animation. The default value is 1.
The Release parameter is very important and controls how quickly a blendshape returns to zero after being activated by a phoneme. It is similar to the release time in audio effects such as reverb or a compressor. For example, when a viseme like "AA" is detected from the transcript, the corresponding blendshape is set to a certain value; the Release parameter then gradually decreases the value to zero over time. The higher the value, the faster the decrease. The Release parameter should be adjusted to the characters' speed of speech: if the speech is slow, the Release parameter should be lower, and vice versa. If the Release parameter is too low, it may interfere with another blendshape, such as "EE", because the Pose Morph tag blends both shapes, "AA" and "EE". It is therefore important to find the optimal value for the Release parameter. This parameter can be animated. In Advanced Mode (10), this parameter has less effect but is still important.
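As a rough illustration of this behaviour (a Python sketch only, not LipTalk's internal code), a release-style decay reduces the slider a little every frame:

# Illustrative sketch of a release-style decay (assumption, not the plugin's actual code).
def apply_release(slider_value, release):
    # The higher the Release value, the faster the slider returns to zero.
    return max(0.0, slider_value - release)

value = 1.0  # e.g. "AA" was just triggered at full strength
for frame in range(5):
    print(frame, round(value, 2))   # 1.0, 0.75, 0.5, 0.25, 0.0
    value = apply_release(value, release=0.25)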
The Limiter is a feature that controls the audio amplitude range, similar to a brickwall limiter in audio effects, and is only available in audio mode. The LipTalkAudioConverter uses the audio amplitude to measure the intensity of a phoneme, which affects the mouth animation. By using a variable percentage for each value, the system creates more realistic and natural animations. For example, a character who speaks louder will have a wider mouth opening than a character who speaks softly. However, some phonemes may have a lower or higher amplitude than others, which can sometimes cause unnatural or inconsistent animations. To avoid this, you can set the Limiter to a value between 0 and 1, which represents the maximum amplitude of the audio. For example, if the lowest phoneme has an amplitude of 0.18 and the highest 0.8, you can set the Limiter to 0.2, which will reduce the amplitude of all phonemes to 0.2 or lower. This makes the animation smoother and more stable. Lower Limiter values mean the signal is reduced more; to compensate for this, you need to increase the Strength (17) parameter to restore the signal level.
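The idea in a nutshell (again a sketch under assumptions, not the plugin's actual code):

# Illustrative sketch of the limiter idea (assumption, not LipTalk's internal code).
def limit_amplitude(amplitude, limiter, strength):
    limited = min(amplitude, limiter)   # brickwall: never exceeds the Limiter value
    return limited * strength           # Strength compensates for the reduced level

for amp in (0.18, 0.45, 0.8):
    print(amp, "->", round(limit_amplitude(amp, limiter=0.2, strength=5.0), 2))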
The Threshold parameter is a type of Noise Gate that filters out unwanted sounds. It only works in Advanced Mode (10). It determines the maximum level of sound that is considered as silence. For example, if your character says a long "AA" for one second, LipTalk measures the sound intensity in intervals of 1000 frames. If the sound intensity is lower than the Threshold, it means the "AA" has ended. Then the Release parameter (14) slowly reduces the blendshape.
The Threshold is important because it helps to adjust for different levels of background noise in the audio files. It is very sensitive and may need to be set in the second or even third decimal place. A 0 dB signal in LipTalk corresponds to the value 1, and the threshold for silence can then be below 0.1. The default value is 0.085.
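For orientation (a standard conversion, not a LipTalk-specific value): a linear level converts to decibels as 20 · log10(value), so the default threshold of 0.085 corresponds to roughly 20 · log10(0.085) ≈ -21 dB below full scale.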
You can adjust the Strength globally or for each viseme individually. These parameters are animatable and work in both audio and text mode. The Strength parameter affects all visemes at once; if you want to change the intensity of a specific viseme, you can lower or raise its value or animate it.
Here you can load a link to the audio file corresponding to your transcript. In Cinema 4D, you can insert the sound by clicking the Add Soundtrack button (22). This is an animatable parameter. Cinema 4D will update the sound automatically if you animate it and there is already a soundtrack.
TIP: If the parameter is not animated, you link another sound file, and the track has already been created, the sound is only refreshed at frame 0. In that case you can remove and re-add the track to avoid jumping back to frame 0 for a refresh.
Here you can mute the soundtrack. This is not animatable.
It shows the name of the soundtrack if the sound is loaded into Cinema 4D.
The Sync Start value lets you control the timing of your audio playback. This value works the same way as the Sync Start (12) for the transcript.
The Add Soundtrack button creates a soundtrack on the Null-Object with the LipTalk tag and shows it in the link field (20)
The Delete button will clear the audio track of the Null-Object
The LipTalk tag has a Playback Start and End option. This lets you control when the tag affects the Pose Morph tag. You can use this to create several Null objects with LipTalk tags for the same Pose Morph tag. For example, if your character has four dialogues with four audio files, you can create four Null objects with LipTalk tags and set the playback range for each one. This way, they won't interfere with each other. The playback ranges must not overlap; otherwise one LipTalk tag blocks the other.
You could also create additional LipTalk tags on the same Null object, but with different Playback Start and End ranges.
Don't forget to also set the Sync Start (12) and the Start (21) for each tag.
Tip: If you use baked Motion Clips, you only need one Null object. You can mix all four baked motions non-linearly.
If you turn on the Text Mode button, the interface elements 25-33 appear. Your character is now no longer controlled by a transcript; instead, you can use text to animate its lips. You can choose the language that suits you best from this menu. Currently, 3Deoskill supports five languages: English, Russian, German, French and Italian. Each language has a large vocabulary of words that you can use. English has around 160,000 words, Russian about 100,000, German a similar number to English, and French and Italian each around 300,000. 3Deoskill is constantly working to improve its dictionaries and add more languages in the future.
You can enter your text in this field. Please avoid special characters like commas, dots, etc.; LipTalk will automatically remove them.
The Convert button converts the text into phonemes. It asks for confirmation beforehand so that you don't accidentally overwrite phoneme parameters you have already created.
The External button opens an external text editor so that you can edit the text more comfortably. It opens your system's standard editor and shows the converted phonemes. To do this, it first saves the content of the phoneme text field to a file in your plugin folder and then loads that file into the editor. Now you can add your parameters and save the file again.
The Paste button takes the phonemes from the special file that was created with the External button and pastes them into the phoneme panel. Only the latest saved state of the file is inserted; if you forget to save the file, the old version will be inserted. Please be careful: this button carries out the action immediately and does not ask you beforehand.
The Word Spacing parameter controls the pause between words. In natural speech we tend to connect words fluently, almost without any pause, but since this is hard to synchronize later with a voice-over, you can set the pause to your taste by increasing this value. It is in milliseconds and is added to the default value of 100 ms.
This panel displays the converted phonemes and inserts a blank space before the first word. This is necessary for the internal algorithm to synchronize the lip movements.
But this field can do more than just display the phonemes: it is also used to enter special parameters that control each of your phonemes. You can read about that below in the topic: Usage of the text converter.
The Parameter Error label shows an error message if you made mistakes in the syntax of the parameters you added to the phonemes. How to add parameters and the parameter syntax are described below.
In this group you can find the Manual Recording settings. Here you can turn on the manual recorder, which blocks the automatic workflow so that you can set manual keyframes, e.g. if you want to animate specific noises that were not detected correctly by the AudioConverter, such as laughter or click noises.
Manual ON lets you record manually and stops the automatic workflow. You can animate this parameter. It only works in the playback range you set in Playback Start/End (24)
You can directly manipulate the Pose Morph tag by adjusting the strength parameters of each viseme. If you animate the visemes in the Pose Morph tag instead, LipTalk will overwrite them in automatic mode. Moreover, you will lose the ability to use motions later, which require the keyframes on the User Data of the Null object.
TIP: To transfer your lip movements to a game engine, you can bake the Pose Morph tag in the dope sheet.
The Record Sliders button creates a keyframe for all visemes at the current frame.
The Reset Sliders button resets all visemes to zero
In this group you will find the settings after baking. This section allows you to control the strengths, even if you have already baked the lips into keyframes or motions. If you want to reuse motions from a library for another character but only need minor strength adjustments, these values give you the opportunity to do so.
This checkbox turns the After Baking feature on.
The strength values. Default value is 1. It is a factor. The baked keyframes will be multiplied by these values. They are all animatable.
With the Record Sliders button you can set keyframes for all strength values at the current frame.
With the Reset Sliders button you simply set all strength values back to their default value.
First type in a text. Do not use special characters such as commas or dots etc. LipTalk will automatically remove them.
Press the Convert button to convert the phrase into phonemes.
The phonemes will appear in the Phonemes text field
Example:
Text input:
Hello welcome to the lip talk plugin
Phoneme output:
hɛləʊ wɛlkəm tu ði lɪp tɔk plugin
As you can see, it converted all words into phonemes except for the word "plugin". This means the word is not in the dictionary.
(You can tell because the converted word is the same as the one entered, at least most of the time. It would still work this way, because LipTalk can also read ordinary letters of the alphabet, but sometimes it doesn't lead to the desired result.)
You can use a special syntax to fix this.
The word plugin is made of two words, plug and in. So simply try to connect them with an underscore '_'.
plug_in
Press again the Convert button
Text input:
Hello welcome to the lip talk plug_in
Phoneme output:
hɛləʊ wɛlkəm tu ði lɪp tɔk plʌɡɪn
Now it works. You can use this method in many cases. You could also write it as two words, but then LipTalk makes a small pause between each word.
There are a lot of words which are not registered in the dictionary files.
You can extend or modify the file. Open the dict file:
Yourplugin/res/dicts/english_converted.json
Search for the word and insert the new version and save the file.
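If you prefer to edit the dictionary with a small script, here is a minimal Python sketch. It assumes the file maps lowercase words to their phoneme strings; check the existing entries for the exact format before editing, and make a backup of the file first.

import json

# Adjust this path to your actual plugin folder (assumption).
path = "Yourplugin/res/dicts/english_converted.json"

with open(path, "r", encoding="utf-8") as f:
    dictionary = json.load(f)

# Add or change an entry; mirror the format of the existing entries.
dictionary["plugin"] = "plʌɡɪn"

with open(path, "w", encoding="utf-8") as f:
    json.dump(dictionary, f, ensure_ascii=False, indent=2)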
3deoskill is trying to support more languages in the future for the text-mode.
------------------------------------------------------------------------------
The Phonemes (27) text panel can accept both phonetic symbols and regular letters (a, b, c, d, e, f, g, ...). This means that if LipTalk lacks a specific dictionary, you can try typing ordinary letters directly into the phoneme output area. However, you may have to do a little trickery here and there; this is just a workaround. 3deoskill will provide more languages for Text Mode in the future.
Text input:
None (You don't have to type in text here, since you won't convert, type directly into the phonemes panel)
Phoneme output (type in):
Halo wi geht es dir
But the Phonemes text panel can do even more.
You can use a special syntax to control the strength and length of each phoneme and also to specify a pause.
t = time ( a value in seconds, it holds the phoneme and will be added to the default time of the phoneme (0.1))
s = strength ( a factor, default is 1)
p = pause (used mostly after a word to enforce a break, a value in seconds which is added to the phonemes length and the small pause after a phoneme or/and word)
Parameter Syntax Example:
(t:0.5,s:1.22,p:1.0)
Parameters must be enclosed in parentheses and follow directly after the phoneme without any space. You can also use just a single parameter. However, if you add more, you must separate them with a comma without a space.
! A Parameter Error (28) indicates that the syntax of the parameters is incorrect. The message will display the error details and the correct syntax. The algorithm will use the default values for the parameters until they are fixed.
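To make the syntax concrete, here is a small illustrative Python sketch (not the plugin's internal parser) that extracts the optional t/s/p values from a phoneme token:

import re

# Illustrative only: parses the documented (t:...,s:...,p:...) suffix of a phoneme token.
PARAM_PATTERN = re.compile(r"^(?P<phoneme>[^()\s]+)(?:\((?P<params>[^)]*)\))?$")

def parse_token(token):
    match = PARAM_PATTERN.match(token)
    if not match:
        raise ValueError("Invalid syntax: " + token)
    # Defaults as described above: no extra time, strength factor 1, no extra pause.
    values = {"t": 0.0, "s": 1.0, "p": 0.0}
    if match.group("params"):
        for pair in match.group("params").split(","):
            key, _, number = pair.partition(":")
            if key not in values:
                raise ValueError("Unknown parameter: " + key)
            values[key] = float(number)
    return match.group("phoneme"), values

print(parse_token("hɛləʊ(t:.3,p:.5)"))   # ('hɛləʊ', {'t': 0.3, 's': 1.0, 'p': 0.5})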
------------------------------------------------------------------------------
Example:
Phoneme output:
hɛləʊ wɛlkəm tu ði lɪp tɔk plʌɡɪn
Modify the Phoneme output:
hɛləʊ(t:.3,p:.5) wɛlkəm tu ði lɪ(s:1.2)p tɔk plʌɡɪn
The internal algorithm holds the ʊ in the word hello 0.3 seconds longer and then makes a break of 0.5 seconds. In the word liptalk it gives the ɪ more strength to emphasize it a bit.
! Please be careful when using the Convert (26) button. If you confirm, this function will erase all the parameters that you have entered or modified.
You can adjust the Speed (13) value in text mode to control the pace of the animation.
Text mode is a useful feature when you don't have any audio files and plan to add a voice-over later. It requires some practice to master, but it gives you the most accurate lip syncs.
The LipTalk Jaw Solver tag is intended to animate the jaw joint of a rigged character, or an object such as a controller that drives the rotation of a jaw joint. It works by linking the Pose Morph tag that holds all the sliders for the different visemes and that is controlled by the LipTalk plugin. For the Solver tag to work, the Pose Morph tag does not necessarily need any actual pose data, only the sliders; you can create them easily by adding a LipTalk tag and clicking the Add Poses button. The LipTalk tag lets you load a transcript or type in text that will be synced with the mouth animation. The LipTalk tag controls the Pose Morph tag, and the Pose Morph tag controls the jaw movement for the Jaw Solver tag. If the Pose Morph tag has no data (poses) but just the sliders, the character simply opens and closes its mouth.
You will achieve the most realism by adding data to the Pose Morph tag for each viseme, as would be the usual approach. For this, the Pose Morph tag has to be added to your character's geometry. This way you only need to adjust the lips slightly, since the jaw joint is already opening and closing the mouth. Sometimes the pose morphs should be set to work "after" deformers.
If a transcript is loaded, the LipTalk tag should be deactivated while the LipTalk Jaw Solver tag stays activated, so that you can sculpt the lips while the jaw opens the mouth whenever a certain slider of the pose morph is active.
For instance, for the viseme "AA" you don't need to sculpt anything on the lips, because the jaw joint already opens the mouth for you. For the viseme "EE" you just need to pull the lips a bit sideways, because the jaw joint opens the mouth for you later. To edit a pose, go to your jaw joint, rotate it to the angle you need, and then sculpt the lips a bit for the "EE" viseme. This method allows you to create each pose quickly and easily.
This is all explained in the Full Video Tutorial, in the LipTalk in Production workshop and in the Custom Characters section.
Basically, it is the same workflow as without the Jaw Joint Solver tag:
Add the Jaw Joint Solver tag to the jaw joint of your character
Create a Pose Morph tag on the character's geometry
Add the sliders for the visemes (they can be empty if you just want to move the jaw joint)
Load a transcript into the LipTalk plugin or type in text and link the Pose Morph tag to it.
Drag the Pose Morph tag also into the Jaw Joint Solver tag
The joint will now be animated.
(optional) If you also want the lips to move, you have to modify the pose morph poses by adding data to them through sculpting.
Here you can drag the Pose Morph tag which is controlled via the LipTalk tag.
The Rotation-Axis sets the direction of the rotation of the joint.
The Strength parameter controls the overall strength of the rotation.
This vector shows the "Initialized" angle.
This "Initialize Angle" button freezes the actual position from that the Jaw Solver Tag starts to work.
The Jaw Solver tag does nothing until it is initialized; this is, so to speak, the default position.
The "Free" Button releases the joint/object to the state before initialization
Here you can adjust the strength of the rotation for each phoneme. The strength values in the LipTalk tag do also affect this but they also affect the strengths of the poses in the Pose Morph Tag.
The Invert checkbox inverts the rotation of the joint/object
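To summarize how these parameters interact, here is a conceptual Python sketch (an assumption about the rough behaviour, not the plugin's actual implementation): the rotation starts from the initialized angle and is driven along the chosen axis by the viseme sliders, weighted by the per-phoneme strengths, the overall Strength and the Invert option.

# Conceptual sketch only (assumption, not the Jaw Solver's actual code).
def jaw_angle(init_angle, axis, strength, invert, sliders, phoneme_strengths):
    # sliders / phoneme_strengths: dicts keyed by viseme name, e.g. "AA", "EE"
    opening = sum(sliders[v] * phoneme_strengths.get(v, 1.0) for v in sliders)
    direction = -1.0 if invert else 1.0
    # Rotate around the chosen axis, starting from the initialized angle.
    return [init_angle[i] + axis[i] * direction * strength * opening for i in range(3)]

print(jaw_angle([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.5, False,
                {"AA": 0.8, "EE": 0.2}, {"AA": 1.0, "EE": 0.4}))   # [0.44, 0.0, 0.0]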