
Hugging Face

Learn how to set up Hugging Face Inference Providers in Sypha and get free inference access to a variety of powerful open-source models through the Hugging Face ecosystem.

Hugging Face offers free or low-cost inference for many leading open-source models through its Inference Providers ecosystem. Because many of these models have a generous free tier, this provider is a great fit for prototyping, experimentation, and budget-conscious coding.

Website: https://huggingface.co/

Getting an API Key

  1. Sign Up or Log In: Navigate to Hugging Face and log in to your account.
  2. Go to Settings: Open your profile options and find the Access Tokens page.
  3. Create a New Token: Create a new access token, making sure it has permission to call inference (a fine-grained token with the inference permission enabled, or a token with Read access).
  4. Save the Token: Copy the token text right away and keep it secure.

Supported Models

Sypha works with these top Hugging Face models (which are currently free to use):

  • moonshotai/Kimi-K2-Instruct (Default) - Top-tier reasoning model featuring a 131K context window, great for logic and coding tasks
  • openai/gpt-oss-120b - Powerful 120B open-weight reasoning model capable of handling extreme complexity (131K context)
  • openai/gpt-oss-20b - Fast 20B open-weight model that balances capability and accessibility (131K context)
  • deepseek-ai/DeepSeek-V3-0324 - Highly capable reasoning model (64K context)
  • deepseek-ai/DeepSeek-R1 - DeepSeek's logic-focused model featuring step-by-step logic (64K context)
  • deepseek-ai/DeepSeek-R1-0528 - The most recent version of DeepSeek's reasoning model (64K context)
  • meta-llama/Llama-3.1-8B-Instruct - Fast and highly capable 8B Llama model suited for general queries (128K context)

Configuration in Sypha

  1. Open Sypha Settings: Click the gear icon (⚙️) to open settings in the Sypha panel.
  2. Choose Provider: Find and select "Hugging Face" from the "API Provider" dropdown menu.
  3. Enter Token: Paste the Hugging Face access token you created into the required field.
  4. Select Your Model: Pick the model you want to experiment with from the "Model" dropdown list.

Tips and Notes

  • Free Inference: The list of models above is currently running without cost through Hugging Face Inference Providers.
  • Open Source: Every model is completely open-source, giving you the freedom to move to self-hosted versions whenever you want.
  • Rate Limits: Be aware that the free tiers come with strict rate limits. You can find the exact details in the Hugging Face documentation.
  • Model Availability: Whether a model is online, queued, or responding slowly depends on server capacity at the underlying inference providers, so availability can fluctuate.
