Modify phoneme events to be less hyphenated

Hello
I'm having some problems when use FaceFX parsing Chinese Mandarin: I can achieve good results with the default phoneme mapping, but often the character can't shut up steadily, or the shut up is very slow and soft. However, in my case, (1)many words and phrases should close their lips more simply and quickly after pronouncing them.(2)And the pose mixture of words should be clearer and more explicit, now it is a bit too smooth and relax.
I'm trying to define my own parser and phoneme events using a method similar to the Cartoon analysis actor, but a scaled-decay blend makes the mouth move very fast, and a constant-decay blend makes the deformation very small (because the phoneme durations are very short). What settings should I use so that my custom phoneme events produce a transition blend similar to the default analysis actor? (I find that the default transition maintains sufficient amplitude even when a phoneme is short, and behaves like a constant-duration decay blend when the phoneme is longer.)
I also found that the blend curves in the included Roger file close the mouth a bit more crisply than the default ones. How is this done? There must be some extra effect applied that I'm not aware of, because when I click Reanalyze, it produces a different result than the file's own curves. Even if I only nudge the length of a phoneme slightly, the curve is replaced by the default one, and the default curve's transitions are not as crisp.
Looking forward to some enlightening responses; this has been very confusing for me.

I'm not sure I understand everything that is going on, but I do think some of this can be explained by the Tools->Application Options->Use new coarticulation setting. By default, this setting is on, and as long as your character has a mapping that supports it (with some tongue-only targets), the new coarticulation algorithm attempts to create a smoother animation with less jaw movement. Some phonemes (T, TH, K, G, etc) aren't too picky about the mouth shape but they do require the tongue to be in a particular position. The new coarticulation algorithm skips the influence these targets have on the mouth while preserving their contribution to tongue movement. The goal is to get a less "flappy jaw."
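The idea can be sketched outside of FaceFX. Everything below (the phoneme set, the target names, the weights, and the `Tongue_` naming convention) is invented for illustration; the real algorithm inside FaceFX is more involved:

```python
# Illustrative sketch of "skip the mouth influence of tongue-only phonemes."
# Phoneme/target names and weights are made up for this example.

TONGUE_ONLY_PHONEMES = {"T", "TH", "K", "G", "D", "N"}

def effective_weights(phoneme, mapping):
    """Return the target weights a phoneme contributes to the animation.

    For tongue-only phonemes, drop the mouth-shape targets so the jaw
    stays quieter, but keep the tongue targets so articulation reads.
    """
    weights = dict(mapping[phoneme])
    if phoneme in TONGUE_ONLY_PHONEMES:
        for target in list(weights):
            if not target.startswith("Tongue_"):
                del weights[target]
    return weights

mapping = {
    "T":  {"Open": 0.4, "Wide": 0.2, "Tongue_Up": 0.8},
    "AA": {"Open": 0.9, "Wide": 0.1},
}

print(effective_weights("T", mapping))   # only the tongue target survives
print(effective_weights("AA", mapping))  # vowel weights are untouched
```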

This setting could explain why curves change when you move a phoneme boundary on Roger's content. The setting is applied when you analyze new audio files, or when you modify a phoneme boundary of an existing animation. Changing the setting won't automatically update all of the animations in the actor; you need to move a phoneme boundary or reanalyze them to see the effect.

The legacy coarticulation algorithm is a bit simpler to understand because it just does what the mapping table tells it to do. The new coarticulation algorithm makes some changes behind the scenes, so it can be confusing to work with when you are defining a new mapping.

If you want the mouth to be a bit more active, you might prefer turning off the "Use new coarticulation" option. This can be done at the application level, but you can also specify it on a per-folder or per-file basis using .fxanalysis-config files.
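A per-folder .fxanalysis-config file might then contain something like the fragment below. Note that both the line syntax and the setting name here are assumptions for illustration only; check the FaceFX documentation for the real format before using this:

```
; hypothetical .fxanalysis-config fragment -- the key name and
; syntax are guesses, consult the FaceFX docs for the real ones
use_new_coarticulation=0
```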

As far as using analysis actors to perform coarticulation, it will be very difficult to improve results with this method. We include the cartoon coarticulation as a simple example, and the Massaro-Cohen algorithm is well known in academia and conveniently can be implemented with events, but in general, a good coarticulation algorithm needs to have the phonemes influence each other, and events are by definition independent of each other. It is possible to make curves interact in the Face Graph, but it is very difficult to implement a coarticulation algorithm this way. If you really must write your own, you might as well use Python so you have all options at your disposal.
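To illustrate why event independence is limiting, here is a toy sketch of the kind of neighbor influence a coarticulation algorithm needs. Everything in it (the amplitudes, the single `influence` parameter, the averaging rule) is invented for illustration and is unrelated to FaceFX's actual algorithm:

```python
# Toy coarticulation sketch: each phoneme's mouth-opening amplitude is
# pulled toward its neighbors' average -- exactly the kind of
# cross-phoneme dependency that independent events cannot express.

def coarticulate(amplitudes, influence=0.3):
    """Blend each phoneme's amplitude with the average of its neighbors."""
    result = []
    for i, a in enumerate(amplitudes):
        neighbors = []
        if i > 0:
            neighbors.append(amplitudes[i - 1])
        if i < len(amplitudes) - 1:
            neighbors.append(amplitudes[i + 1])
        context = sum(neighbors) / len(neighbors) if neighbors else a
        result.append((1 - influence) * a + influence * context)
    return result

# A closed consonant between two open vowels opens a little, and the
# vowels close a little, smoothing the whole sequence.
print(coarticulate([0.9, 0.1, 0.9]))
```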

Some final thoughts:

  • If you are getting poor results on certain Chinese characters, it is possible that you can make improvements to the Analysis Languages/ChineseMandarin.dict file. In particular, all possible pronunciations for a character are normally listed, even very obscure ones, which can lead to the wrong pronunciation being picked in some cases. Removing the rare pronunciations can improve results.
  • I'm not sure what you mean by "shutting up the effect" and "crisp", but you can influence how long it takes for FaceFX to ramp into and out of silence. Shortening these variables could make the animation more "crisp" around silences at least. See the ca_leadin and ca_leadout console variables for more info.

Thank you for your reply; I will test it in a few days. I'm sorry that my English is not good, so I use translation software, which keeps me from describing the problem accurately.