Stop Searching,
Start Building.
The Developer-First Hub for Open-Source AI Workflows
Want to build your own Blueprints?
See our guidelines for building a top-notch Blueprint.
Must-haves
Use of open-source models and tools
README, pyproject.toml, and organized folder structure
Demo app (Streamlit or Gradio) or Jupyter notebook
Config file for easy customization
CLI support
Nice-to-haves
CPU compatibility for most local setups
Google Colab notebook option
PyPI package availability
Dockerfile for the demo app
Diagram of the Blueprint in the README
Setup and guidance docs using MkDocs
Highlighted Building Blocks
Explore the open-source resources behind our Blueprints.
Tools

Any-Agent
Any-Agent is a Python library that provides a single interface to many different agent frameworks.
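A sketch of what that single interface might look like. The names used here (`AnyAgent`, `AgentConfig`, `model_id`, the framework string, `final_output`) follow the project's README as we recall it and should be treated as assumptions to check against the current any-agent docs:

```python
# Illustrative use of any-agent's unified interface (assumed API; verify
# against the project docs). Swapping frameworks means changing one string.
def build_prompt(question: str) -> str:
    """Tiny helper to keep the prompt format in one place."""
    return f"Answer concisely: {question}"

def main():
    # Imported here so the pure helper above works without the package.
    from any_agent import AgentConfig, AnyAgent  # pip install any-agent

    agent = AnyAgent.create(
        "tinyagent",  # or another supported framework name, per the docs
        AgentConfig(model_id="gpt-4.1-nano"),  # model id is illustrative
    )
    trace = agent.run(build_prompt("What is a Blueprint?"))
    print(trace.final_output)

if __name__ == "__main__":
    main()
```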

LiteLLM
LiteLLM is an open-source library that provides a unified API to call and manage multiple large language model providers using a single OpenAI-compatible interface.
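In practice that means swapping providers by changing only the model string; the call shape stays OpenAI-compatible. A hedged sketch (running it requires the relevant API key or a local server):

```python
# Provider-agnostic chat call via LiteLLM: only the model string changes
# between providers. Model names below are illustrative.
def make_messages(user_text: str) -> list[dict]:
    """Build an OpenAI-style chat message list."""
    return [{"role": "user", "content": user_text}]

def ask(model: str, user_text: str) -> str:
    from litellm import completion  # pip install litellm

    response = completion(model=model, messages=make_messages(user_text))
    return response.choices[0].message.content

if __name__ == "__main__":
    # The same call works for e.g. "gpt-4o-mini" or a local
    # "ollama/llama3.2" -- credentials permitting.
    print(ask("gpt-4o-mini", "Say hello in one word."))
```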

Ultralytics
Ultralytics provides cutting-edge computer vision models, including YOLO11, enabling developers to integrate real-time object detection, segmentation, and classification into AI applications with minimal effort.
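A minimal detection sketch with the ultralytics package; the weights file and test image URL are illustrative, and the model downloads on first use:

```python
# YOLO11 object detection in a few lines (ultralytics API).
def keep_confident(detections, threshold=0.5):
    """Keep (label, confidence) pairs at or above the threshold."""
    return [(label, conf) for label, conf in detections if conf >= threshold]

def main():
    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("yolo11n.pt")  # nano detection weights, fetched on first use
    results = model("https://ultralytics.com/images/bus.jpg")
    for result in results:
        pairs = [(result.names[int(b.cls)], float(b.conf)) for b in result.boxes]
        for label, conf in keep_confident(pairs):
            print(label, round(conf, 2))

if __name__ == "__main__":
    main()
```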
Datasets

Common Voice
A multilingual, crowdsourced collection of voice recordings from Mozilla.
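One way to sample it is streaming via the Hugging Face `datasets` library, which avoids downloading a full split up front. The dataset version pinned below is illustrative, and the dataset is gated on the Hub, so you must accept its terms and log in first:

```python
# Stream a Common Voice sample instead of downloading the whole split.
def pick_fields(example: dict, keys=("sentence", "locale")) -> dict:
    """Trim a dataset record down to the fields a demo actually needs."""
    return {k: example[k] for k in keys if k in example}

def main():
    from datasets import load_dataset  # pip install datasets

    voice = load_dataset(
        "mozilla-foundation/common_voice_17_0",  # version is illustrative
        "en",
        split="validation",
        streaming=True,
    )
    sample = next(iter(voice))
    print(pick_fields(sample))

if __name__ == "__main__":
    main()
```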

Alpaca-gpt4
This dataset contains English instruction-following data generated by GPT-4 from Alpaca prompts, intended for fine-tuning LLMs.
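For fine-tuning, each record (instruction, optional input, output) is typically rendered into a single prompt string. The section markers below follow the common Alpaca template, shown here as one possible convention:

```python
# Render an Alpaca-style record into a supervised fine-tuning prompt.
def format_alpaca(example: dict) -> str:
    """Join instruction/input/output fields using the common Alpaca template."""
    if example.get("input"):
        return (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )
```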
Models

OuteTTS-0.1-350M
OuteTTS-0.1-350M is a compact, 350M-parameter text-to-speech synthesis model.

Qwen2.5-3B-Instruct-GGUF
Qwen2.5-3B-Instruct-GGUF is an instruction-tuned model that generates long-form content and is distributed in the GGUF format for efficient local deployment.
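GGUF files can be run locally with llama-cpp-python, which can pull a quantized file straight from the Hub. The quantization filename pattern below is illustrative:

```python
# Run a GGUF quantization of Qwen2.5-3B-Instruct locally via llama.cpp.
def pick_text(chat_completion: dict) -> str:
    """Extract the assistant text from an OpenAI-style response dict."""
    return chat_completion["choices"][0]["message"]["content"]

def main():
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama.from_pretrained(
        repo_id="Qwen/Qwen2.5-3B-Instruct-GGUF",
        filename="*q4_k_m.gguf",  # quantization choice is illustrative
        n_ctx=4096,
    )
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Write a haiku about CPUs."}],
        max_tokens=64,
    )
    print(pick_text(out))

if __name__ == "__main__":
    main()
```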

Kokoro-82M
Kokoro is an open-weight TTS model with 82 million parameters.
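A synthesis sketch based on the usage shown on the Kokoro-82M model card; the class and parameter names (`KPipeline`, `lang_code`, `voice`) and the 24 kHz output rate are assumptions to verify against the card:

```python
# Synthesize speech with Kokoro-82M (API names assumed from the model card).
def segment_path(index: int) -> str:
    """Output filename for the i-th synthesized segment."""
    return f"segment_{index}.wav"

def main():
    import soundfile as sf  # pip install kokoro soundfile
    from kokoro import KPipeline

    pipeline = KPipeline(lang_code="a")  # "a" selects American English
    text = "Kokoro is an open-weight text-to-speech model."
    for i, (graphemes, phonemes, audio) in enumerate(
        pipeline(text, voice="af_heart")  # voice name is illustrative
    ):
        sf.write(segment_path(i), audio, 24000)  # assumes 24 kHz output

if __name__ == "__main__":
    main()
```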