
Microsoft expands fine-tuning capabilities in Azure AI Foundry


Microsoft has announced a major update to model fine-tuning in Azure AI Foundry. The environment already supported model customization, but it now gains several improvements, including support for Reinforcement Fine-Tuning (RFT).

RFT is a new method that uses chain-of-thought reasoning and task-specific evaluation to improve model performance in specific application domains.
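As a conceptual illustration of the "task-specific evaluation" part of RFT (not Microsoft's or OpenAI's actual grader implementation), one can think of a grader as a function that scores a candidate answer against both correctness and an organization-specific rule; the rule and weights below are hypothetical:

```python
# Toy sketch of a task-specific grader as used conceptually in RFT.
# Everything here is illustrative: the rule and the weights are invented,
# not part of Azure AI Foundry's or OpenAI's grader API.

def grade_answer(answer: str, expected: str, max_words: int = 50) -> float:
    """Return a reward in [0, 1] combining correctness and a domain rule."""
    correctness = 1.0 if answer.strip().lower() == expected.strip().lower() else 0.0
    # Hypothetical organization-specific rule: answers must stay concise.
    concise = 1.0 if len(answer.split()) <= max_words else 0.0
    return 0.8 * correctness + 0.2 * concise
```

During reinforcement fine-tuning, scores like these would steer the policy update toward answers that satisfy both the factual target and the custom rule.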

OpenAI launched the alpha program for RFT last December. Early testers report a 40% performance improvement over comparable models without fine-tuning. According to Neowin, RFT will soon be available for OpenAI’s o4-mini model in Azure AI Foundry.

RFT is particularly recommended when decision-making rules are highly specific to an organization and cannot easily be captured through static prompts or traditional training data. It allows models to adapt flexibly and dynamically to rules that reflect the complexity of the real world.

RFT is also suitable for scenarios where internal procedures deviate from common industry standards and success depends on following those unique standards. In such cases, RFT can effectively incorporate procedural variations, such as longer deadlines or modified compliance criteria, into the model’s behavior.

Furthermore, RFT is well suited to domains with complex decision-making, where the outcome depends on multiple sub-cases or the weighing of different variables. It helps models generalize in complex situations and ensures more consistent and accurate decisions.

Fine-tuning option coming soon

In addition, Microsoft announced support for Supervised Fine-Tuning (SFT) of OpenAI’s new GPT-4.1-nano model, suitable for applications where cost control is important. This fine-tuning option will be available within a few days.
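SFT works from labeled example conversations supplied as a JSONL file. As a minimal sketch, the snippet below prepares such a file in the chat-style format used by OpenAI-compatible fine-tuning endpoints; the example records themselves are invented:

```python
# Minimal sketch: prepare a supervised fine-tuning (SFT) dataset in the
# chat-style JSONL format accepted by OpenAI-compatible fine-tuning APIs.
# The conversation content is invented for illustration.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account > Reset password."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Where can I download my invoice?"},
        {"role": "assistant", "content": "Go to Billing > Invoices and select Download."},
    ]},
]

# One JSON object per line, as the fine-tuning upload endpoint expects.
with open("training.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The resulting file would then be uploaded and referenced when creating a fine-tuning job for the chosen base model.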

Support will also be added for fine-tuning Meta’s latest Llama 4 Scout model, which has 17 billion active parameters. The model supports a context window of 10 million tokens and is available through Azure’s managed compute. The fine-tuned Llama models are available in Azure AI Foundry and also as components within Azure Machine Learning.