| Focus | Decision | Rationale | Alternatives Considered | Trade-offs |
| --- | --- | --- | --- | --- |
| Federated AI Framework | Used Flower for federated fine-tuning (see the simulation sketch below the table). | Developed by the Flower.ai team, which has the expertise required to create the Blueprint. | Fine-tuning the model locally on centrally gathered data. | Would require gathering huge amounts of data and paying for licenses, which makes it a costly solution; performance can also degrade. |
| Base Model | Fine-tuned Qwen2-0.5B-Instruct (see the LoRA sketch below). | Its smaller size makes federated fine-tuning more accessible for initial experimentation. | Larger or even smaller models from different model series (Qwen, Llama, etc.). | Larger models require more compute; even smaller models may lose expressiveness. |
| Dataset | Used Alpaca-GPT4 for fine-tuning (see the data-loading sketch below). | A well-structured dataset for instruction tuning that is not too large. | Custom datasets. | Alpaca-GPT4 may not cover the edge cases of a specific use case. |
| Simulation vs. Deployment | Simulation mode is the default for federated training (the simulation sketch below shows this mode). | Easier for developers to test without an extensive infrastructure setup. | Direct deployment with Flower's Deployment Engine. | Simulations may not capture all real-world constraints. |
| Training Hardware | Supports both CPU and GPU fine-tuning (see the device-selection sketch below). | Increases accessibility for users with limited compute resources. | GPU-only training for efficiency. | CPU training is significantly slower, especially for larger models. |
| Demo and Evaluation | Provided both a Streamlit app and CLI-based evaluation for interactive testing (see the Streamlit sketch below). | A simple way to validate model responses in real time, whichever interface the user prefers. | Real-world deployment across the globe, which is feasible. | Would require finding partners to set this up or renting instances across the world. |
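
The sketches below make some of these decisions concrete. First, a minimal sketch of how a Flower simulation could wrap the fine-tuning loop in a `NumPyClient`. This is an illustration, not the Blueprint's actual code: the `set_weights`, `get_weights`, `train_one_round`, and `load_model_and_partition` helpers are hypothetical placeholders for the real training logic.

```python
import flwr as fl


class FinetuneClient(fl.client.NumPyClient):
    """One simulated participant holding a private data partition."""

    def __init__(self, model, trainset):
        self.model = model
        self.trainset = trainset

    def fit(self, parameters, config):
        set_weights(self.model, parameters)          # hypothetical helper
        train_one_round(self.model, self.trainset)   # hypothetical helper
        return get_weights(self.model), len(self.trainset), {}


def client_fn(cid: str):
    # Each simulated client loads the model plus its own data partition.
    model, trainset = load_model_and_partition(cid)  # hypothetical helper
    return FinetuneClient(model, trainset).to_client()


# Run everything on one machine: Flower schedules the simulated clients
# and aggregates their updates with plain FedAvg each round.
fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=10,
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=fl.server.strategy.FedAvg(),
)
```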
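
For the base model, a common pattern (assumed here, not confirmed as the Blueprint's exact setup) is to wrap Qwen2-0.5B-Instruct in a PEFT LoRA adapter so that only a small fraction of the weights is trained and exchanged; the LoRA hyperparameters below are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Illustrative LoRA settings: low-rank adapters on the attention
# projections only, so the trainable parameter count stays tiny.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```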
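
Loading Alpaca-GPT4 from the Hugging Face Hub and rendering a record into the standard Alpaca prompt template might look like the following; `vicgalle/alpaca-gpt4` is one commonly used Hub id for this dataset, and the Blueprint may pin a different mirror.

```python
from datasets import load_dataset

dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")


def to_prompt(example):
    # Records with a non-empty "input" field get the three-part template.
    if example["input"]:
        return (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )


print(to_prompt(dataset[0]))
```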
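
CPU support mostly comes down to a device fallback of the following kind: prefer CUDA when it is available, otherwise train (slowly) on the CPU.

```python
import torch
from transformers import AutoModelForCausalLM

# Prefer a GPU when present; otherwise fall back to slower CPU training.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct").to(device)
print(f"Training on {device}")
```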
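
Finally, a minimal sketch of a Streamlit page for interactively probing the fine-tuned model; the checkpoint path is a placeholder, and the Blueprint's actual app is more elaborate. Save it as `app.py` and launch it with `streamlit run app.py`.

```python
import streamlit as st
from transformers import pipeline

st.title("Federated fine-tuning demo")


@st.cache_resource
def load_generator():
    # Placeholder path: point this at your fine-tuned checkpoint.
    return pipeline("text-generation", model="path/to/finetuned-checkpoint")


prompt = st.text_area("Prompt", "Explain federated learning in one sentence.")
if st.button("Generate"):
    generator = load_generator()
    st.write(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```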