Frequently Asked Questions (FAQ) about Flyworks MCP
Q: What is Flyworks MCP? A: The Flyworks MCP (Model Context Protocol) server is a free tool for creating lipsync videos with digital avatars. It integrates with the Flyworks API to animate avatars from audio or text input.
Q: Is Flyworks MCP really free? A: Yes, Flyworks MCP offers a free trial with a limited daily quota, watermarked videos, and a maximum video duration of 45 seconds. For full access, you’ll need to contact Flyworks AI to acquire a token.
Q: What is MCP? A: MCP stands for Model Context Protocol. It’s an open protocol that standardizes how applications provide context to Large Language Models (LLMs), allowing AI models to interact with external data and tools.
Q: What types of avatars can I use with Flyworks MCP? A: Flyworks MCP supports a wide range of digital avatars, including both realistic and cartoon styles.
Q: Can I create a lipsync video from text? A: Yes, Flyworks MCP has text-to-speech functionality, allowing you to create lipsync videos directly from text input.
Q: What are the system requirements for running Flyworks MCP? A: You need Python 3.8+ and the following dependencies: httpx and mcp[cli].
Q: How do I install Flyworks MCP? A: You can install it via Smithery or manually by cloning the repository and installing the dependencies.
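For the manual route, the steps above might look like the following in a terminal. The repository URL is inferred from the project name, and the dependency list is taken from the requirements answer; check the project README for the exact commands.

```shell
# Clone the server repository (URL assumed from the Flyworks-AI/flyworks-mcp project name)
git clone https://github.com/Flyworks-AI/flyworks-mcp.git
cd flyworks-mcp

# Install the documented Python dependencies
pip install httpx "mcp[cli]"
```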
Q: How do I set up the API token? A: You can set the Flyworks API token as an environment variable (FLYWORKS_API_TOKEN) or in a .env file.
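Both options can be set from a terminal; the token value below is a placeholder, not a real credential.

```shell
# Option 1: export the token for the current shell session
export FLYWORKS_API_TOKEN="your_token_here"

# Option 2: persist it in a .env file in the server's working directory
echo 'FLYWORKS_API_TOKEN=your_token_here' > .env
cat .env
```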
Q: How do I integrate Flyworks MCP with Claude or Cursor? A: The documentation provides detailed instructions on how to configure the MCP server settings in Claude and Cursor.
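As a rough sketch, a Claude Desktop entry typically goes in the mcpServers section of its config file. The server name, command, and arguments below are illustrative assumptions; use the values from the project documentation.

```json
{
  "mcpServers": {
    "flyworks": {
      "command": "uvx",
      "args": ["flyworks-mcp"],
      "env": {
        "FLYWORKS_API_TOKEN": "your_token_here"
      }
    }
  }
}
```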
Q: What is async_mode? A: async_mode determines whether the tool returns a task ID immediately (true) or waits for the video to complete and downloads it (false).
Q: What is UBOS and how does it relate to Flyworks MCP? A: UBOS is a full-stack AI Agent Development Platform. Flyworks MCP can be integrated with UBOS to create automated content creation workflows, leveraging UBOS’s AI agent orchestration capabilities.
Q: Can I create avatars from videos or images? A: Yes, the tool supports creating avatars from video URLs, image URLs, local video files, and local image files.
Q: What happens to the video files I upload? A: When using local files, the server automatically uploads them to Flyworks servers for processing.
Q: What should I do if I encounter a spawn uvx ENOENT error? A: This error means the uvx executable could not be found. Confirm its absolute path by running which uvx in your terminal, then update the configuration to use that path.
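The check above can be scripted; command -v is a portable equivalent of which, and an empty result means uvx is not on your PATH.

```shell
# Resolve the absolute path of uvx; empty output means it is not on PATH
UVX_PATH="$(command -v uvx || true)"
if [ -z "$UVX_PATH" ]; then
  echo "uvx not found; install uv or point the MCP config at the full path"
else
  echo "Use this path in your MCP config: $UVX_PATH"
fi
```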
Q: How long does it take to create a lipsync video? A: The processing time varies. Creating an avatar from a video typically takes longer but yields better quality; creating one from an image is faster and better suited to quick testing.
Q: What are the limitations of the free trial token? A: The free trial token has a limited daily quota, adds a watermark to the generated videos, and restricts the duration to 45 seconds.
Flyworks Lipsync Server
Project Details
- Flyworks-AI/flyworks-mcp
- MIT License
- Last Updated: 5/10/2025