How do you change the rules by which Analysis Events are generated?

Hi, I've been learning to use FaceFX recently, and I've noticed that when you drag in an audio file it will automatically generate several curves like Blink and Head Pitch/Roll/Yaw.
Even if I change the Analysis Actor, those curves are still there. I've looked at the Analysis Actor, and it seems like its processing happens after the events are generated.
I mean, in real life people show different personalities when talking. Some have very obvious behaviors, like nodding/shaking their head and strong eyebrow expressions.
Others keep their head and eyebrows still when talking, and even when they do move, it might be at a different pace (compared to the above).
And some might have odd, twitchy head movement while talking (for example, a nerdy psycho).
So, my question is: how do I change the rules by which these events are generated, to suit different character personalities in FaceFX?


Analysis Actors are how we convert Analysis Events into curves like Blink, Eyebrow Raise, etc. You can check the GestureLib.py file in the Scripts directory for more info on how we create the analysis actors.

You can't customize the Analysis Events themselves; those are hard-coded. But nothing stops you from interpreting those Analysis Events differently and coming up with completely different animation curves. Animating procedurally like this with probability is difficult, and the event system itself can be tricky (watch the old event tutorials), but it is certainly possible to create different modes of talking with analysis actors.
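As a rough illustration of that idea (plain Python, not the FaceFX API; the profile names, event fields, and function below are hypothetical), an analysis actor's interpretation step could weight head-motion keys by a per-personality profile and a probability roll per event:

```python
import random

# Hypothetical personality profiles: probability of reacting to an
# analysis event and the amplitude of the resulting head motion.
PROFILES = {
    "expressive": {"react_prob": 0.8, "nod_amplitude": 1.0},
    "reserved":   {"react_prob": 0.2, "nod_amplitude": 0.3},
    "twitchy":    {"react_prob": 0.9, "nod_amplitude": 0.5},
}

def head_pitch_keys(events, profile_name, seed=0):
    """Turn (time, strength) analysis events into (time, value) keys
    for a Head Pitch curve, according to the chosen personality."""
    profile = PROFILES[profile_name]
    rng = random.Random(seed)
    keys = []
    for time, strength in events:
        if rng.random() < profile["react_prob"]:
            keys.append((time, strength * profile["nod_amplitude"]))
    return keys

# Same events, different personalities -> different curves.
events = [(0.4, 0.6), (1.1, 0.9), (2.3, 0.5)]
print(head_pitch_keys(events, "expressive"))
print(head_pitch_keys(events, "reserved"))
```

In an actual analysis actor you would presumably build keys like these into the Head Pitch curve (as the code in GestureLib.py does for the stock gestures) rather than printing them, but the idea of swapping in different profiles is the same.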

If that doesn't answer your question, can you elaborate on how you would change the analysis events if you could?