>>32181
Those are all videos. That would only be possible with a video2video model, and I'm not sure my lora would work there since it was trained for the text2video model. Also, the lora can generate what's depicted in those videos pretty easily with a simple prompt. There's not much going on in them other than belly jiggling, so I'm not sure what you'd be looking for or expecting out of using those, sorry.
>>32201
As far as I'm aware, ComfyUI is currently the only way to run Wan Video locally. I've never used ReForge UI, so my apologies, I can't speak to that; I use ComfyUI for everything. Once you get a grasp on working with nodes, it becomes very easy and you'll really start to appreciate the customization. If you're just starting out with ComfyUI, don't make the mistake of going to CivitAI and downloading overly complex workflows that promise a bunch of nonsense. When trying out a new model, always stick to the smallest workflow possible, usually the one provided by the model developers. If you click on my example videos on my lora page on CivitAI, I believe it will let you copy the workflow I've been using.
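Side note in case it's useful: once you have a workflow you like, you can also queue it from a script instead of clicking through the UI. Here's a minimal Python sketch, assuming you've exported the workflow with "Save (API Format)" (you need to turn on the dev mode option in the ComfyUI settings to get that menu entry) and that ComfyUI is running on the default port 8188. The filename and the node id "6" are made up for the example, so check your own export for the real ones.

import json, urllib.request

# Workflow exported from ComfyUI via "Save (API Format)".
# "wan_t2v_api.json" is just an example filename.
with open("wan_t2v_api.json") as f:
    graph = json.load(f)

# Swap in a new positive prompt. The node id "6" is hypothetical;
# look up the id of your CLIPTextEncode node in the exported JSON.
graph["6"]["inputs"]["text"] = "a simple prompt goes here"

# Queue the job on a default local install.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)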
As for the model downloads:
https://github.com/Wan-Video/Wan2.1
^This is the main Wan Video page, where you can find the official download links from the developers.
If you are interested in downloading GGUF versions for low-VRAM setups, here is the link for those:
https://huggingface.co/city96/Wan2.1-T2V-14B-gguf/tree/main
^If you are using GGUF, you must install the GGUF model loader custom node (city96's ComfyUI-GGUF) via the Comfy Manager to properly load the files. This City96 dude has GGUFs available for most video models, definitely look over his page.
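If you'd rather grab the file from a script than click through the site, here's a quick sketch using the huggingface_hub package. The exact quant filename below is a guess, so check the repo's file list for the real names, and adjust local_dir to wherever your install keeps GGUF unet files (models/unet is where the GGUF loader looks, as far as I know).

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

path = hf_hub_download(
    repo_id="city96/Wan2.1-T2V-14B-gguf",
    filename="wan2.1-t2v-14b-Q4_K_M.gguf",  # example quant name, verify on the repo
    local_dir="ComfyUI/models/unet",        # adjust to your ComfyUI install
)
print("Saved to:", path)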
Keep in mind, the lora was trained with the 14B T2V model. I have not tested it on the I2V or 1.3B models; it might work, it might not. No idea yet.
Hope this all helps, let me know if you have any questions.