Crawler Report for unsloth.ai

Summary

Website Quality Score: 7.5 (Good)

  • Performance: 9.2
  • SEO: 7.5
  • Security: 8.5
  • Accessibility: 5.0
  • Best Practices: 6.7
  • ⛔ Skipped URLs - 811 skipped URLs found.
  • ⛔ Redirects - 28 redirects found.
  • ⚠️ 575 page(s) do not support Brotli compression.
  • ⚠️ No WebP image found on the website.
  • ⚠️ No AVIF image found on the website.
  • ⚠️ 530 page(s) with skipped heading levels.
  • ⚠️ 15 page(s) with deep DOM (> 30 levels).
  • ⚠️ 395 page(s) without image alt attributes.
  • ⚠️ 575 page(s) without form labels.
  • ⚠️ 575 page(s) without aria labels.
  • ⚠️ 575 page(s) without role attributes.
  • ⚠️ Security - 1725 page(s) with warning(s).
  • ⏩ Loaded robots.txt for domain 'unsloth.ai': status code 301, size 123 B, took 135 ms.
  • ⏩ External URLs - 811 external URL(s) found.
  • ⏩ Performance NOTICE - 1 slow non-media URL(s) found (slower than 3 seconds).
  • ✅ 404 OK - all pages exist; no non-existent pages found.
  • ✅ SSL/TLS certificate is valid until Jun 1 20:40:09 2026 GMT. Issued by 'C = US, O = Google Trust Services, CN = WE1'; subject is 'CN = unsloth.ai'.
  • ✅ HTTP headers - found 24 unique headers.
  • ✅ All 559 unique title(s) are within the allowed 10% duplication threshold; the most duplicated title is at 0%.
  • ✅ All 521 description(s) are within the allowed 10% duplication threshold; the most duplicated description is at 9%.
  • ✅ All pages have quoted attributes.
  • ✅ All pages have inline SVGs smaller than 5120 bytes.
  • ✅ All pages have inline SVGs with less than 5 duplicates.
  • ✅ All pages have either valid inline SVGs or none.
  • ✅ No page has multiple <h1> headings.
  • ✅ All pages have an <h1> heading.
  • ✅ All pages have clickable (interactive) phone numbers.
  • ✅ All pages have valid HTML.
  • ✅ All pages have lang attribute.
  • ✅ DNS IPv4 OK: domain unsloth.ai resolved to 172.67.69.105, 104.26.9.42, 104.26.8.42 (DNS server: 127.0.0.53).
  • ✅ DNS IPv6 OK: domain unsloth.ai resolved to 2606:4700:20::681a:82a, 2606:4700:20::ac43:4569, 2606:4700:20::681a:92a (DNS server: 127.0.0.53).
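Two of the accessibility findings above (skipped heading levels and missing image alt attributes) are straightforward to reproduce locally. The sketch below is an assumption about how such checks might work, built on the standard-library `html.parser`; it is not the crawler's actual implementation.

```python
from html.parser import HTMLParser

class A11yAudit(HTMLParser):
    """Minimal sketch of two of the report's checks:
    skipped heading levels and <img> tags without an alt attribute."""

    def __init__(self):
        super().__init__()
        self.prev_level = 0        # last heading level seen (0 = none yet)
        self.skipped_headings = 0  # e.g. an <h2> followed directly by an <h4>
        self.imgs_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            level = int(tag[1])
            # A jump of more than one level down counts as a skip.
            if self.prev_level and level > self.prev_level + 1:
                self.skipped_headings += 1
            self.prev_level = level
        elif tag == "img" and "alt" not in dict(attrs):
            self.imgs_missing_alt += 1

audit = A11yAudit()
audit.feed("<h1>Docs</h1><h3>Install</h3><img src='a.png'>")
print(audit.skipped_headings, audit.imgs_missing_alt)  # → 1 1
```

An empty `alt=''` is treated as present here, which matches the usual accessibility guidance for decorative images.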

Visited URLs

Found 603 row(s).
URL | Status | Type | Time | Size | Cache
/docs200 HTML218 ms736 kB0 s
/docs/new/studio/chat200 HTML111 ms598 kB0 s
/docs/get-started/unsloth-model-catalog200 HTML155 ms4 MB0 s
/docs/get-started/install/windows-installation200 HTML270 ms983 kB0 s
/docs/basics/finetuning-from-last-checkpoint200 HTML95 ms 510 kB0 s
/docs/new/faster-moe200 HTML155 ms1 MB0 s
/docs/basics/dgx-station200 HTML111 ms627 kB0 s
/docs/models/gpt-oss-how-to-run-and-fine-tune200 HTML131 ms1 MB0 s
/docs/models/tutorials200 HTML131 ms1 MB0 s
/docs/get-started/install/pip-install200 HTML125 ms601 kB0 s
/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning200 HTML102 ms878 kB0 s
/docs/basics/inference-and-deployment200 HTML98 ms 481 kB0 s
/docs/get-started/unsloth-notebooks200 HTML129 ms1 MB0 s
/docs/blog/3x-faster-training-packing200 HTML118 ms713 kB0 s
/docs/blog/500k-context-length-fine-tuning200 HTML158 ms557 kB0 s
/docs/new/studio/export200 HTML111 ms535 kB0 s
/docs/models/minimax-m25200 HTML103 ms889 kB0 s
/docs/get-started/install200 HTML370 ms534 kB0 s
/docs/models/qwen3-coder-next200 HTML282 ms1016 kB0 s
/docs/basics/multi-gpu-training-with-unsloth200 HTML95 ms 468 kB0 s
/docs/fr200 HTML263 ms746 kB0 s
/docs/basics/codex200 HTML104 ms686 kB0 s
/docs/new/embedding-finetuning200 HTML110 ms584 kB0 s
/docs/basics/text-to-speech-tts-fine-tuning200 HTML107 ms656 kB0 s
/docs/new/studio200 HTML100 ms838 kB0 s
/docs/models/glm-5200 HTML103 ms989 kB0 s
/docs/basics/claude-code200 HTML141 ms978 kB0 s
/docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth200 HTML106 ms589 kB0 s
/docs/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth200 HTML109 ms646 kB0 s
/docs/get-started/reinforcement-learning-rl-guide200 HTML287 ms1 MB0 s
/docs/get-started307 Redirect281 ms75 B0 s
/docs/blog/quantization-aware-training-qat200 HTML183 ms582 kB0 s
/docs/get-started/fine-tuning-for-beginners/unsloth-requirements200 HTML93 ms 658 kB0 s
/docs/blog/how-to-fine-tune-llms-with-unsloth-and-docker200 HTML99 ms 667 kB0 s
/docs/basics/unsloth-benchmarks200 HTML112 ms514 kB0 s
/docs/get-started/fine-tuning-for-beginners200 HTML94 ms 480 kB0 s
/docs/basics/tool-calling-guide-for-local-llms200 HTML109 ms937 kB0 s
/docs/basics/unsloth-environment-flags200 HTML400 ms473 kB0 s
/docs/basics/troubleshooting-and-faqs200 HTML300 ms672 kB0 s
/docs/basics/vision-fine-tuning200 HTML142 ms684 kB0 s
/docs/basics/continued-pretraining200 HTML202 ms497 kB0 s
/docs/models/qwen3-how-to-run-and-fine-tune200 HTML156 ms814 kB0 s
/docs/basics/chat-templates200 HTML100 ms655 kB0 s
/docs/new/studio/data-recipe200 HTML269 ms688 kB0 s
/docs/models/nemotron-3200 HTML125 ms977 kB0 s
/docs/get-started/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me200 HTML174 ms545 kB0 s
/docs/zh200 HTML150 ms743 kB0 s
/docs/get-started/fine-tuning-llms-guide200 HTML120 ms744 kB0 s
/docs/jp200 HTML250 ms747 kB0 s
/docs/models/tutorials/devstral-how-to-run-and-fine-tune200 HTML116 ms671 kB0 s
/docs/de200 HTML135 ms744 kB0 s
/docs/zh/xin-zeng/studio/chat200 HTML297 ms603 kB0 s
/docs/new/studio/start200 HTML157 ms1 MB0 s
/docs/jp/xin-ji-neng/studio/chat200 HTML321 ms607 kB0 s
/docs/new/studio/install200 HTML85 ms 705 kB0 s
/docs/fr/nouveau/studio/chat200 HTML347 ms607 kB0 s
/docs/new307 Redirect99 ms 97 B0 s
/docs/models/tutorials/qwen3-next200 HTML99 ms 678 kB0 s
/docs/de/neu/studio/chat200 HTML319 ms604 kB0 s
/docs/models/tutorials/ministral-3200 HTML98 ms 745 kB0 s
/docs/models/tutorials/devstral-2200 HTML129 ms822 kB0 s
/docs/jp/meru/unsloth-model-catalog200 HTML507 ms4 MB0 s
/docs/308 Redirect213 ms111 BNone
/docs/fr/commencer/unsloth-model-catalog200 HTML689 ms4 MB0 s
/docs/models/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tune200 HTML217 ms922 kB0 s
/docs/models/tutorials/qwen-image-2512200 HTML108 ms722 kB0 s
/docs/models/nemotron-3/nemotron-3-super200 HTML106 ms653 kB0 s
/docs/de/los-gehts/unsloth-model-catalog200 HTML558 ms4 MB0 s
/docs/models/tutorials/qwen3-coder-how-to-run-locally200 HTML214 ms928 kB0 s
/docs/zh/kai-shi-shi-yong/unsloth-model-catalog200 HTML654 ms4 MB0 s
/docs/get-started/install/mac200 HTML288 ms490 kB0 s
/docs/get-started/install/updating200 HTML92 ms 476 kB0 s
/docs/get-started/install/docker200 HTML123 ms708 kB0 s
/docs/get-started/install/intel200 HTML105 ms725 kB0 s
/docs/jp/meru/install/windows-installation200 HTML304 ms965 kB0 s
/docs/de/los-gehts/install/windows-installation200 HTML138 ms996 kB0 s
/docs/fr/commencer/install/windows-installation200 HTML352 ms1011 kB0 s
/docs/zh/kai-shi-shi-yong/install/windows-installation200 HTML453 ms951 kB0 s
/docs/get-started/install/amd200 HTML111 ms661 kB0 s
/docs/basics307 Redirect103 ms139 B0 s
/docs/de/grundlagen/finetuning-from-last-checkpoint200 HTML323 ms516 kB0 s
/docs/fr/notions-de-base/finetuning-from-last-checkpoint200 HTML483 ms517 kB0 s
/docs/jp/ji-ben/finetuning-from-last-checkpoint200 HTML327 ms517 kB0 s
/docs/models/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training200 HTML106 ms855 kB0 s
/docs/zh/ji-chu-zhi-shi/finetuning-from-last-checkpoint200 HTML271 ms513 kB0 s
/docs/jp/xin-ji-neng/faster-moe200 HTML126 ms1 MB0 s
/docs/zh/xin-zeng/faster-moe200 HTML125 ms1 MB0 s
/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally200 HTML307 ms809 kB0 s
/docs/de/neu/faster-moe200 HTML387 ms1 MB0 s
/docs/fr/notions-de-base/dgx-station200 HTML88 ms 635 kB0 s
/docs/fr/nouveau/faster-moe200 HTML364 ms1 MB0 s
/docs/jp/ji-ben/dgx-station200 HTML325 ms637 kB0 s
/docs/de/grundlagen/dgx-station200 HTML341 ms634 kB0 s
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune200 HTML300 ms1 MB0 s
/docs/zh/ji-chu-zhi-shi/dgx-station200 HTML488 ms632 kB0 s
/docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning200 HTML186 ms679 kB0 s
/docs/models/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-oss200 HTML114 ms1 MB0 s
/docs/basics/troubleshooting-and-faqs/hugging-face-hub-xet-debugging200 HTML98 ms 465 kB0 s
/docs/models307 Redirect117 ms105 B0 s
/docs/zh/mo-xing/gpt-oss-how-to-run-and-fine-tune200 HTML138 ms1 MB0 s
/docs/get-started/fine-tuning-llms-guide/datasets-guide200 HTML125 ms960 kB0 s
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune200 HTML146 ms1 MB0 s
/docs/models/tutorials/deepseek-ocr-how-to-run-and-fine-tune200 HTML114 ms619 kB0 s
/docs/jp/moderu/gpt-oss-how-to-run-and-fine-tune200 HTML292 ms1 MB0 s
/docs/get-started/reinforcement-learning-rl-guide/tuto…reasoning-model-with-grpo200 HTML189 ms797 kB0 s
/docs/models/tutorials/deepseek-r1-how-to-run-locally200 HTML125 ms810 kB0 s
/docs/zh/mo-xing/tutorials200 HTML456 ms1 MB0 s
/docs/de/modelle/tutorials200 HTML441 ms1 MB0 s
/docs/get-started/reinforcement-learning-rl-guide/visi…forcement-learning-vlm-rl200 HTML89 ms 685 kB0 s
/docs/models/tutorials/how-to-run-llms-with-docker200 HTML306 ms704 kB0 s
/docs/models/tutorials/qwq-32b-how-to-run-effectively200 HTML108 ms922 kB0 s
/docs/get-started/fine-tuning-llms-guide/tutorial-how-…llama-3-and-use-in-ollama200 HTML95 ms 1 MB0 s
/docs/models/tutorials/phi-4-reasoning-how-to-run-and-fine-tune200 HTML121 ms600 kB0 s
/docs/models/tutorials/deepseek-v3-0324-how-to-run-locally200 HTML211 ms1 MB0 s
/docs/fr/modeles/tutorials200 HTML556 ms1 MB0 s
/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally200 HTML98 ms 915 kB0 s
/docs/jp/moderu/tutorials200 HTML349 ms1 MB0 s
/docs/models/tutorials/grok-2200 HTML110 ms674 kB0 s
/docs/models/qwen3-how-to-run-and-fine-tune/qwen3-2507200 HTML106 ms903 kB0 s
/docs/models/tutorials/magistral-how-to-run-and-fine-tune200 HTML138 ms1 MB0 s
/docs/models/tutorials/gemma-3-how-to-run-and-fine-tun…-how-to-run-and-fine-tune200 HTML171 ms845 kB0 s
/docs/models/tutorials/deepseek-ocr-2200 HTML101 ms672 kB0 s
/docs/models/tutorials/functiongemma200 HTML138 ms838 kB0 s
/docs/models/tutorials/llama-4-how-to-run-and-fine-tune200 HTML97 ms 720 kB0 s
/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune200 HTML120 ms760 kB0 s
/docs/get-started/install/conda-install200 HTML87 ms 467 kB0 s
/docs/de/los-gehts/install/pip-install200 HTML269 ms607 kB0 s
/docs/zh/kai-shi-shi-yong/install/pip-install200 HTML327 ms607 kB0 s
/docs/fr/commencer/install/pip-install200 HTML322 ms607 kB0 s
/docs/get-started/reinforcement-learning-rl-guide/advanced-rl-documentation200 HTML98 ms 805 kB0 s
/docs/jp/meru/install/pip-install200 HTML457 ms609 kB0 s
/docs/get-started/reinforcement-learning-rl-guide/adva…po-reinforcement-learning200 HTML97 ms 500 kB0 s
/docs/get-started/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto200 HTML232 ms624 kB0 s
/docs/get-started/reinforcement-learning-rl-guide/grpo-long-context200 HTML140 ms745 kB0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-gu…p8-reinforcement-learning200 HTML368 ms884 kB0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/fp8-reinforcement-learning200 HTML350 ms890 kB0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/fp8-reinforcement-learning200 HTML346 ms887 kB0 s
/docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl200 HTML98 ms 896 kB0 s
/docs/basics/inference-and-deployment/sglang-guide200 HTML103 ms746 kB0 s
/docs/basics/inference-and-deployment/vllm-guide200 HTML113 ms546 kB0 s
/docs/jp/meru/reinforcement-learning-rl-guide/fp8-reinforcement-learning200 HTML382 ms891 kB0 s
/docs/basics/inference-and-deployment/troubleshooting-inference200 HTML107 ms489 kB0 s
/docs/basics/inference-and-deployment/llama-server-and-openai-endpoint200 HTML104 ms577 kB0 s
/docs/basics/inference-and-deployment/saving-to-gguf200 HTML110 ms697 kB0 s
/docs/basics/inference-and-deployment/deploy-llms-phone200 HTML279 ms1 MB0 s
/docs/basics/inference-and-deployment/saving-to-ollama200 HTML151 ms583 kB0 s
/docs/basics/inference-and-deployment/lm-studio200 HTML175 ms694 kB0 s
/docs/basics/inference-and-deployment/unsloth-inference200 HTML111 ms486 kB0 s
/docs/de/grundlagen/inference-and-deployment200 HTML102 ms487 kB0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment200 HTML415 ms487 kB0 s
/docs/jp/ji-ben/inference-and-deployment200 HTML293 ms488 kB0 s
/docs/fr/notions-de-base/inference-and-deployment200 HTML279 ms488 kB0 s
/docs/models/qwen3.5/fine-tune200 HTML107 ms703 kB0 s
/docs/de/los-gehts/unsloth-notebooks200 HTML370 ms1 MB0 s
/docs/jp/meru/unsloth-notebooks200 HTML268 ms1 MB0 s
/docs/zh/kai-shi-shi-yong/unsloth-notebooks200 HTML260 ms1 MB0 s
/docs/fr/commencer/unsloth-notebooks200 HTML342 ms1 MB0 s
/docs/zh/bo-ke/3x-faster-training-packing200 HTML187 ms718 kB0 s
/docs/blog307 Redirect91 ms 139 B0 s
/docs/fr/blog/3x-faster-training-packing200 HTML394 ms724 kB0 s
/docs/de/blog/3x-faster-training-packing200 HTML399 ms722 kB0 s
/docs/jp/burogu/3x-faster-training-packing200 HTML371 ms726 kB0 s
/docs/jp/burogu/500k-context-length-fine-tuning200 HTML338 ms568 kB0 s
/docs/zh/bo-ke/500k-context-length-fine-tuning200 HTML341 ms561 kB0 s
/docs/de/blog/500k-context-length-fine-tuning200 HTML303 ms564 kB0 s
/docs/fr/blog/500k-context-length-fine-tuning200 HTML329 ms566 kB0 s
/docs/zh/xin-zeng/studio/export200 HTML307 ms539 kB0 s
/docs/de/neu/studio/export200 HTML311 ms541 kB0 s
/docs/fr/nouveau/studio/export200 HTML568 ms542 kB0 s
/docs/jp/xin-ji-neng/studio/export200 HTML414 ms542 kB0 s
/docs/zh/mo-xing/minimax-m25200 HTML273 ms894 kB0 s
/docs/jp/moderu/minimax-m25200 HTML326 ms898 kB0 s
/docs/fr/modeles/minimax-m25200 HTML266 ms897 kB0 s
/docs/de/modelle/minimax-m25200 HTML341 ms895 kB0 s
/docs/de/los-gehts/install200 HTML294 ms539 kB0 s
/docs/jp/meru/install200 HTML404 ms539 kB0 s
/docs/get-started/install/google-colab200 HTML310 ms480 kB0 s
/docs/zh/kai-shi-shi-yong/install200 HTML310 ms539 kB0 s
/docs/get-started/install/vs-code200 HTML103 ms628 kB0 s
/docs/fr/commencer/install200 HTML300 ms539 kB0 s
/docs/zh/mo-xing/qwen3-coder-next200 HTML136 ms1021 kB0 s
/docs/de/modelle/qwen3-coder-next200 HTML316 ms1 MB0 s
/docs/jp/moderu/qwen3-coder-next200 HTML399 ms1 MB0 s
/docs/fr/modeles/qwen3-coder-next200 HTML391 ms1 MB0 s
/docs/de/grundlagen/multi-gpu-training-with-unsloth200 HTML300 ms473 kB0 s
/docs/basics/multi-gpu-training-with-unsloth/ddp200 HTML98 ms 587 kB0 s
/docs/zh/ji-chu-zhi-shi/multi-gpu-training-with-unsloth200 HTML275 ms473 kB0 s
/docs/jp/ji-ben/multi-gpu-training-with-unsloth200 HTML336 ms475 kB0 s
/docs/fr/commencer/fine-tuning-for-beginners200 HTML253 ms486 kB0 s
/docs/fr/notions-de-base/multi-gpu-training-with-unsloth200 HTML400 ms474 kB0 s
/docs/fr/nouveau/studio200 HTML102 ms851 kB0 s
/docs/fr/notions-de-base/claude-code200 HTML102 ms990 kB0 s
/docs/fr/notions-de-base/troubleshooting-and-faqs200 HTML368 ms683 kB0 s
/docs/fr/notions-de-base/continued-pretraining200 HTML276 ms504 kB0 s
/docs/fr/modeles/tutorials/devstral-how-to-run-and-fine-tune200 HTML288 ms680 kB0 s
/docs/fr/notions-de-base/tool-calling-guide-for-local-llms200 HTML391 ms947 kB0 s
/docs/fr/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth200 HTML352 ms596 kB0 s
/docs/fr/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth200 HTML91 ms 654 kB0 s
/docs/fr/blog/how-to-fine-tune-llms-with-unsloth-and-docker200 HTML356 ms674 kB0 s
/docs/fr/modeles/qwen3-how-to-run-and-fine-tune200 HTML91 ms 829 kB0 s
/docs/fr/commencer/reinforcement-learning-rl-guide200 HTML344 ms1 MB0 s
/docs/fr/nouveau/studio/data-recipe200 HTML272 ms699 kB0 s
/docs/fr/commencer/fine-tuning-for-beginners/unsloth-requirements200 HTML302 ms666 kB0 s
/docs/fr/modeles/nemotron-3200 HTML330 ms990 kB0 s
/docs/fr/notions-de-base/vision-fine-tuning200 HTML312 ms693 kB0 s
/docs/fr/commencer/fine-tuning-llms-guide200 HTML293 ms758 kB0 s
/docs/fr/nouveau/embedding-finetuning200 HTML285 ms592 kB0 s
/docs/fr/modeles/glm-5200 HTML448 ms999 kB0 s
/docs/fr/notions-de-base/text-to-speech-tts-fine-tuning200 HTML312 ms670 kB0 s
/docs/fr/notions-de-base/unsloth-benchmarks200 HTML343 ms521 kB0 s
/docs/fr/notions-de-base/chat-templates200 HTML296 ms663 kB0 s
/docs/fr/notions-de-base/codex200 HTML263 ms694 kB0 s
/docs/fr/blog/quantization-aware-training-qat200 HTML426 ms590 kB0 s
/docs/fr/commencer307 Redirect305 ms81 B0 s
/docs/fr/notions-de-base/unsloth-environment-flags200 HTML409 ms478 kB0 s
/docs/fr/commencer/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me200 HTML244 ms554 kB0 s
/docs/zh/ji-chu-zhi-shi/codex200 HTML109 ms691 kB0 s
/docs/de/grundlagen/codex200 HTML280 ms693 kB0 s
/docs/de/neu/embedding-finetuning200 HTML105 ms591 kB0 s
/docs/jp/ji-ben/codex200 HTML356 ms696 kB0 s
/docs/jp/xin-ji-neng/embedding-finetuning200 HTML305 ms594 kB0 s
/docs/zh/xin-zeng/embedding-finetuning200 HTML370 ms589 kB0 s
/docs/zh/ji-chu-zhi-shi/text-to-speech-tts-fine-tuning200 HTML435 ms660 kB0 s
/docs/de/grundlagen/text-to-speech-tts-fine-tuning200 HTML352 ms666 kB0 s
/docs/zh/xin-zeng/studio200 HTML96 ms 846 kB0 s
/docs/jp/ji-ben/text-to-speech-tts-fine-tuning200 HTML329 ms673 kB0 s
/docs/de/neu/studio200 HTML153 ms844 kB0 s
/docs/zh/mo-xing/glm-5200 HTML102 ms994 kB0 s
/docs/jp/xin-ji-neng/studio200 HTML395 ms857 kB0 s
/docs/jp/moderu/glm-5200 HTML377 ms1001 kB0 s
/docs/de/modelle/glm-5200 HTML218 ms997 kB0 s
/docs/de/grundlagen/claude-code200 HTML177 ms989 kB0 s
/docs/zh/ji-chu-zhi-shi/claude-code200 HTML118 ms985 kB0 s
/docs/jp/ji-ben/claude-code200 HTML386 ms984 kB0 s
/docs/zh/bo-ke/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth200 HTML366 ms594 kB0 s
/docs/jp/burogu/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth200 HTML325 ms597 kB0 s
/docs/de/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth200 HTML123 ms595 kB0 s
/docs/zh/bo-ke/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth200 HTML271 ms651 kB0 s
/docs/de/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth200 HTML153 ms654 kB0 s
/docs/jp/burogu/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth200 HTML291 ms657 kB0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-guide200 HTML111 ms1 MB0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide200 HTML302 ms1 MB0 s
/docs/jp/meru/reinforcement-learning-rl-guide200 HTML405 ms1 MB0 s
/docs/zh/bo-ke/quantization-aware-training-qat200 HTML282 ms586 kB0 s
/docs/de/blog/quantization-aware-training-qat200 HTML517 ms589 kB0 s
/docs/jp/burogu/quantization-aware-training-qat200 HTML370 ms590 kB0 s
/docs/de/los-gehts/fine-tuning-for-beginners/unsloth-requirements200 HTML372 ms664 kB0 s
/docs/jp/meru/fine-tuning-for-beginners/unsloth-requirements200 HTML332 ms667 kB0 s
/docs/zh/kai-shi-shi-yong/fine-tuning-for-beginners/unsloth-requirements200 HTML295 ms663 kB0 s
/docs/de/blog/how-to-fine-tune-llms-with-unsloth-and-docker200 HTML298 ms673 kB0 s
/docs/zh/bo-ke/how-to-fine-tune-llms-with-unsloth-and-docker200 HTML466 ms672 kB0 s
/docs/jp/burogu/how-to-fine-tune-llms-with-unsloth-and-docker200 HTML396 ms676 kB0 s
/docs/de/grundlagen/unsloth-benchmarks200 HTML281 ms520 kB0 s
/docs/get-started/fine-tuning-llms-guide/what-model-should-i-use200 HTML93 ms 515 kB0 s
/docs/zh/ji-chu-zhi-shi/unsloth-benchmarks200 HTML387 ms520 kB0 s
/docs/jp/ji-ben/unsloth-benchmarks200 HTML375 ms521 kB0 s
/docs/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide200 HTML100 ms942 kB0 s
/docs/zh/kai-shi-shi-yong/fine-tuning-for-beginners200 HTML235 ms485 kB0 s
/docs/jp/meru/fine-tuning-for-beginners200 HTML384 ms486 kB0 s
/docs/de/los-gehts/fine-tuning-for-beginners200 HTML330 ms485 kB0 s
/docs/models/tutorials/nemotron-3307 Redirect310 ms111 B0 s
/docs/zh/ji-chu-zhi-shi/tool-calling-guide-for-local-llms200 HTML463 ms943 kB0 s
/docs/models/qwen3-coder-how-to-run-locally307 Redirect384 ms171 B0 s
/docs/jp/ji-ben/tool-calling-guide-for-local-llms200 HTML345 ms948 kB0 s
/docs/de/grundlagen/tool-calling-guide-for-local-llms200 HTML417 ms945 kB0 s
/docs/de/grundlagen/unsloth-environment-flags200 HTML351 ms478 kB0 s
/docs/jp/ji-ben/unsloth-environment-flags200 HTML333 ms479 kB0 s
/docs/zh/ji-chu-zhi-shi/unsloth-environment-flags200 HTML279 ms477 kB0 s
/docs/zh/ji-chu-zhi-shi/troubleshooting-and-faqs200 HTML91 ms 677 kB0 s
/docs/jp/ji-ben/troubleshooting-and-faqs200 HTML392 ms685 kB0 s
/docs/de/grundlagen/troubleshooting-and-faqs200 HTML332 ms682 kB0 s
/docs/zh/ji-chu-zhi-shi/vision-fine-tuning200 HTML283 ms688 kB0 s
/docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot200 HTML150 ms829 kB0 s
/docs/de/grundlagen/vision-fine-tuning200 HTML302 ms691 kB0 s
/docs/jp/ji-ben/vision-fine-tuning200 HTML298 ms695 kB0 s
/docs/jp/ji-ben/continued-pretraining200 HTML358 ms503 kB0 s
/docs/zh/ji-chu-zhi-shi/continued-pretraining200 HTML315 ms501 kB0 s
/docs/zh/mo-xing/qwen3-how-to-run-and-fine-tune200 HTML89 ms 820 kB0 s
/docs/de/grundlagen/continued-pretraining200 HTML422 ms503 kB0 s
/docs/jp/moderu/qwen3-how-to-run-and-fine-tune200 HTML342 ms833 kB0 s
/docs/de/modelle/qwen3-how-to-run-and-fine-tune200 HTML233 ms824 kB0 s
/docs/de/grundlagen/chat-templates200 HTML330 ms661 kB0 s
/docs/zh/ji-chu-zhi-shi/chat-templates200 HTML348 ms659 kB0 s
/docs/jp/ji-ben/chat-templates200 HTML434 ms664 kB0 s
/docs/jp/xin-ji-neng/studio/data-recipe200 HTML268 ms698 kB0 s
/docs/zh/xin-zeng/studio/data-recipe200 HTML292 ms693 kB0 s
/docs/de/neu/studio/data-recipe200 HTML275 ms697 kB0 s
/docs/zh/mo-xing/nemotron-3200 HTML137 ms984 kB0 s
/docs/jp/moderu/nemotron-3200 HTML356 ms992 kB0 s
/docs/de/modelle/nemotron-3200 HTML191 ms988 kB0 s
/docs/zh/kai-shi-shi-yong/fine-tuning-for-beginners/fa…-fine-tuning-right-for-me200 HTML317 ms546 kB0 s
/docs/de/los-gehts/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me200 HTML364 ms553 kB0 s
/docs/jp/meru/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me200 HTML318 ms556 kB0 s
/docs/zh/kai-shi-shi-yong307 Redirect319 ms81 B0 s
/docs/zh/mo-xing/tutorials/devstral-how-to-run-and-fine-tune200 HTML85 ms 674 kB0 s
/docs/zh/kai-shi-shi-yong/fine-tuning-llms-guide200 HTML272 ms752 kB0 s
/docs/de/los-gehts/fine-tuning-llms-guide200 HTML102 ms758 kB0 s
/docs/jp/meru/fine-tuning-llms-guide200 HTML296 ms763 kB0 s
/docs/jp/meru307 Redirect302 ms81 B0 s
/docs/de/modelle/tutorials/devstral-how-to-run-and-fine-tune200 HTML331 ms681 kB0 s
/docs/de/los-gehts307 Redirect297 ms81 B0 s
/docs/jp/moderu/tutorials/devstral-how-to-run-and-fine-tune200 HTML620 ms687 kB0 s
/docs/zh/xin-zeng/studio/install200 HTML93 ms 711 kB0 s
/docs/zh/xin-zeng/studio/start200 HTML292 ms1 MB0 s
/docs/zh/xin-zeng307 Redirect183 ms113 B0 s
/docs/de/neu/studio/start200 HTML314 ms1 MB0 s
/docs/fr/nouveau/studio/start200 HTML331 ms1 MB0 s
/docs/jp/xin-ji-neng/studio/start200 HTML549 ms1 MB0 s
/docs/jp/xin-ji-neng/studio/install200 HTML263 ms715 kB0 s
/docs/jp/xin-ji-neng307 Redirect444 ms119 B0 s
/docs/de/neu/studio/install200 HTML237 ms712 kB0 s
/docs/fr/nouveau/studio/install200 HTML248 ms713 kB0 s
/docs/fr/nouveau307 Redirect344 ms111 B0 s
/docs/fr/modeles/tutorials/qwen3-next200 HTML312 ms686 kB0 s
/docs/de/modelle/tutorials/qwen3-next200 HTML323 ms685 kB0 s
/docs/jp/moderu/tutorials/qwen3-next200 HTML312 ms688 kB0 s
/docs/zh/mo-xing/tutorials/qwen3-next200 HTML321 ms683 kB0 s
/docs/de/neu307 Redirect280 ms103 B0 s
/docs/jp/moderu/tutorials/ministral-3200 HTML272 ms756 kB0 s
/docs/de/modelle/tutorials/ministral-3200 HTML87 ms 752 kB0 s
/docs/zh/mo-xing/tutorials/ministral-3200 HTML377 ms749 kB0 s
/docs/fr/modeles/tutorials/ministral-3200 HTML370 ms754 kB0 s
/docs/de/modelle/tutorials/devstral-2200 HTML116 ms830 kB0 s
/docs/zh/mo-xing/tutorials/devstral-2200 HTML395 ms827 kB0 s
/docs/jp/moderu/tutorials/devstral-2200 HTML302 ms836 kB0 s
/docs/fr/modeles/tutorials/devstral-2200 HTML353 ms832 kB0 s
/docs/jp/moderu/tutorials/qwen-image-2512200 HTML319 ms735 kB0 s
/docs/jp/moderu/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tune200 HTML361 ms932 kB0 s
/docs/jp/moderu/nemotron-3/nemotron-3-super200 HTML324 ms662 kB0 s
/docs/jp/moderu/tutorials/qwen3-coder-how-to-run-locally200 HTML353 ms942 kB0 s
/docs/fr/modeles/qwen3-how-to-run-and-fine-tune/qwen3-…-how-to-run-and-fine-tune200 HTML328 ms932 kB0 s
/docs/fr/modeles/tutorials/qwen-image-2512200 HTML336 ms735 kB0 s
/docs/fr/modeles/nemotron-3/nemotron-3-super200 HTML158 ms661 kB0 s
/docs/fr/modeles/tutorials/qwen3-coder-how-to-run-locally200 HTML353 ms939 kB0 s
/docs/zh/mo-xing/qwen3-how-to-run-and-fine-tune/qwen3-…-how-to-run-and-fine-tune200 HTML311 ms927 kB0 s
/docs/blog/comfyui200 HTML188 ms642 kB0 s
/docs/zh/mo-xing/tutorials/qwen-image-2512200 HTML86 ms 727 kB0 s
/docs/de/modelle/qwen3-how-to-run-and-fine-tune/qwen3-…-how-to-run-and-fine-tune200 HTML363 ms929 kB0 s
/docs/zh/mo-xing/nemotron-3/nemotron-3-super200 HTML105 ms658 kB0 s
/docs/de/modelle/nemotron-3/nemotron-3-super200 HTML101 ms660 kB0 s
/docs/de/modelle/tutorials/qwen-image-2512200 HTML339 ms731 kB0 s
/docs/de/modelle/tutorials/qwen3-coder-how-to-run-locally200 HTML132 ms937 kB0 s
/docs/zh/mo-xing/tutorials/qwen3-coder-how-to-run-locally200 HTML430 ms934 kB0 s
/docs/fr/commencer/install/mac200 HTML313 ms495 kB0 s
/docs/de/los-gehts/install/mac200 HTML455 ms495 kB0 s
/docs/jp/meru/install/mac200 HTML292 ms496 kB0 s
/docs/zh/kai-shi-shi-yong/install/mac200 HTML267 ms495 kB0 s
/docs/zh/kai-shi-shi-yong/install/updating200 HTML86 ms 481 kB0 s
/docs/fr/commencer/install/updating200 HTML413 ms482 kB0 s
/docs/de/los-gehts/install/updating200 HTML375 ms481 kB0 s
/docs/jp/meru/install/updating200 HTML260 ms482 kB0 s
/docs/fr/commencer/install/docker200 HTML279 ms715 kB0 s
/docs/de/los-gehts/install/docker200 HTML280 ms714 kB0 s
/docs/zh/kai-shi-shi-yong/install/docker200 HTML410 ms713 kB0 s
/docs/jp/meru/install/docker200 HTML361 ms715 kB0 s
/docs/zh/kai-shi-shi-yong/install/intel200 HTML320 ms730 kB0 s
/docs/jp/meru/install/intel200 HTML297 ms734 kB0 s
/docs/fr/commencer/install/intel200 HTML263 ms733 kB0 s
/docs/de/los-gehts/install/intel200 HTML372 ms732 kB0 s
/docs/jp/meru/install/amd200 HTML322 ms669 kB0 s
/docs/de/los-gehts/install/amd200 HTML331 ms667 kB0 s
/docs/fr/commencer/install/amd200 HTML261 ms668 kB0 s
/docs/zh/kai-shi-shi-yong/install/amd200 HTML360 ms666 kB0 s
/docs/de/grundlagen307 Redirect310 ms153 B0 s
/docs/fr/notions-de-base307 Redirect252 ms163 B0 s
/docs/jp/ji-ben307 Redirect248 ms145 B0 s
/docs/jp/moderu/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training200 HTML339 ms872 kB0 s
/docs/zh/mo-xing/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training200 HTML373 ms860 kB0 s
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training200 HTML356 ms867 kB0 s
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training200 HTML301 ms865 kB0 s
/docs/zh/ji-chu-zhi-shi307 Redirect305 ms161 B0 s
/docs/zh/mo-xing/tutorials/deepseek-r1-0528-how-to-run-locally200 HTML105 ms815 kB0 s
/docs/jp/moderu/tutorials/deepseek-r1-0528-how-to-run-locally200 HTML339 ms824 kB0 s
/docs/de/modelle/tutorials/deepseek-r1-0528-how-to-run-locally200 HTML337 ms820 kB0 s
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning200 HTML350 ms691 kB0 s
/docs/fr/modeles/tutorials/deepseek-r1-0528-how-to-run-locally200 HTML548 ms821 kB0 s
/docs/fr/notions-de-base/troubleshooting-and-faqs/hugging-face-hub-xet-debugging200 HTML101 ms470 kB0 s
/docs/fr/commencer/fine-tuning-llms-guide/datasets-guide200 HTML276 ms979 kB0 s
/docs/fr/modeles307 Redirect104 ms113 B0 s
/docs/jp/moderu/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning200 HTML393 ms691 kB0 s
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/tuto…-how-to-fine-tune-gpt-oss200 HTML510 ms1 MB0 s
/docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-…-to-train-gpt-oss-with-rl200 HTML119 ms1 MB0 s
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning200 HTML633 ms688 kB0 s
/docs/zh/mo-xing/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning200 HTML298 ms684 kB0 s
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/tuto…-how-to-fine-tune-gpt-oss200 HTML267 ms1 MB0 s
/docs/zh/mo-xing/gpt-oss-how-to-run-and-fine-tune/tuto…-how-to-fine-tune-gpt-oss200 HTML381 ms1 MB0 s
/docs/jp/moderu/gpt-oss-how-to-run-and-fine-tune/tutor…-how-to-fine-tune-gpt-oss200 HTML338 ms1 MB0 s
/docs/zh/ji-chu-zhi-shi/troubleshooting-and-faqs/hugging-face-hub-xet-debugging200 HTML363 ms470 kB0 s
/docs/de/grundlagen/troubleshooting-and-faqs/hugging-face-hub-xet-debugging200 HTML313 ms470 kB0 s
/docs/zh/mo-xing307 Redirect337 ms113 B0 s
/docs/jp/ji-ben/troubleshooting-and-faqs/hugging-face-hub-xet-debugging200 HTML561 ms471 kB0 s
/docs/zh/kai-shi-shi-yong/fine-tuning-llms-guide/datasets-guide200 HTML350 ms963 kB0 s
/docs/jp/meru/fine-tuning-llms-guide/datasets-guide200 HTML302 ms978 kB0 s
/docs/de/modelle307 Redirect93 ms 113 B0 s
/docs/de/los-gehts/fine-tuning-llms-guide/datasets-guide200 HTML362 ms974 kB0 s
/docs/zh/mo-xing/tutorials/deepseek-ocr-how-to-run-and-fine-tune200 HTML301 ms624 kB0 s
/docs/de/modelle/tutorials/deepseek-ocr-how-to-run-and-fine-tune200 HTML321 ms626 kB0 s
/docs/fr/modeles/tutorials/deepseek-ocr-how-to-run-and-fine-tune200 HTML296 ms627 kB0 s
/docs/jp/moderu/tutorials/deepseek-ocr-how-to-run-and-fine-tune200 HTML283 ms628 kB0 s
/docs/jp/moderu307 Redirect312 ms111 B0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/tut…reasoning-model-with-grpo200 HTML335 ms807 kB0 s
/docs/jp/meru/reinforcement-learning-rl-guide/tutorial…reasoning-model-with-grpo200 HTML438 ms807 kB0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/tut…reasoning-model-with-grpo200 HTML442 ms794 kB0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-gu…reasoning-model-with-grpo200 HTML341 ms788 kB0 s
/docs/fr/modeles/tutorials/deepseek-r1-how-to-run-locally200 HTML333 ms818 kB0 s
/docs/jp/moderu/tutorials/deepseek-r1-how-to-run-locally200 HTML431 ms822 kB0 s
/docs/de/modelle/tutorials/deepseek-r1-how-to-run-locally200 HTML382 ms817 kB0 s
/docs/zh/mo-xing/tutorials/deepseek-r1-how-to-run-locally200 HTML267 ms814 kB0 s
/docs/zh/mo-xing/tutorials/functiongemma | 200 | HTML | 331 ms | 843 kB | 0 s
/docs/zh/mo-xing/tutorials/phi-4-reasoning-how-to-run-and-fine-tune | 200 | HTML | 341 ms | 604 kB | 0 s
/docs/zh/mo-xing/tutorials/llama-4-how-to-run-and-fine-tune | 200 | HTML | 192 ms | 725 kB | 0 s
/docs/zh/mo-xing/tutorials/grok-2 | 200 | HTML | 508 ms | 681 kB | 0 s
/docs/zh/mo-xing/tutorials/kimi-k2-thinking-how-to-run-locally | 200 | HTML | 262 ms | 920 kB | 0 s
/docs/zh/mo-xing/tutorials/deepseek-ocr-2 | 200 | HTML | 291 ms | 677 kB | 0 s
/docs/zh/kai-shi-shi-yong/fine-tuning-llms-guide/tutor…llama-3-and-use-in-ollama | 200 | HTML | 340 ms | 1 MB | 0 s
/docs/zh/mo-xing/tutorials/magistral-how-to-run-and-fine-tune | 200 | HTML | 185 ms | 1 MB | 0 s
/docs/zh/mo-xing/tutorials/deepseek-v3-0324-how-to-run-locally | 200 | HTML | 367 ms | 1 MB | 0 s
/docs/zh/mo-xing/tutorials/gemma-3-how-to-run-and-fine-tune | 200 | HTML | 274 ms | 766 kB | 0 s
/docs/zh/mo-xing/tutorials/gemma-3-how-to-run-and-fine…-how-to-run-and-fine-tune | 200 | HTML | 303 ms | 850 kB | 0 s
/docs/zh/mo-xing/qwen3-how-to-run-and-fine-tune/qwen3-2507 | 200 | HTML | 576 ms | 909 kB | 0 s
/docs/zh/mo-xing/tutorials/how-to-run-llms-with-docker | 200 | HTML | 411 ms | 708 kB | 0 s
/docs/zh/mo-xing/tutorials/qwq-32b-how-to-run-effectively | 200 | HTML | 347 ms | 926 kB | 0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-gu…forcement-learning-vlm-rl | 200 | HTML | 396 ms | 689 kB | 0 s
/docs/de/modelle/tutorials/functiongemma | 200 | HTML | 344 ms | 847 kB | 0 s
/docs/de/modelle/tutorials/kimi-k2-thinking-how-to-run-locally | 200 | HTML | 304 ms | 926 kB | 0 s
/docs/de/modelle/tutorials/gemma-3-how-to-run-and-fine-tune | 200 | HTML | 253 ms | 769 kB | 0 s
/docs/de/modelle/tutorials/llama-4-how-to-run-and-fine-tune | 200 | HTML | 301 ms | 728 kB | 0 s
/docs/de/modelle/tutorials/deepseek-v3-0324-how-to-run-locally | 200 | HTML | 341 ms | 1 MB | 0 s
/docs/de/modelle/tutorials/deepseek-ocr-2 | 200 | HTML | 308 ms | 678 kB | 0 s
/docs/de/modelle/tutorials/grok-2 | 200 | HTML | 436 ms | 683 kB | 0 s
/docs/de/modelle/tutorials/qwq-32b-how-to-run-effectively | 200 | HTML | 346 ms | 933 kB | 0 s
/docs/de/modelle/tutorials/gemma-3-how-to-run-and-fine…-how-to-run-and-fine-tune | 200 | HTML | 411 ms | 857 kB | 0 s
/docs/de/modelle/tutorials/phi-4-reasoning-how-to-run-and-fine-tune | 200 | HTML | 306 ms | 606 kB | 0 s
/docs/de/modelle/tutorials/how-to-run-llms-with-docker | 200 | HTML | 341 ms | 712 kB | 0 s
/docs/de/modelle/tutorials/magistral-how-to-run-and-fine-tune | 200 | HTML | 348 ms | 1 MB | 0 s
/docs/de/modelle/qwen3-how-to-run-and-fine-tune/qwen3-2507 | 200 | HTML | 415 ms | 914 kB | 0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/vis…forcement-learning-vlm-rl | 200 | HTML | 363 ms | 692 kB | 0 s
/docs/de/los-gehts/fine-tuning-llms-guide/tutorial-how…llama-3-and-use-in-ollama | 200 | HTML | 300 ms | 1 MB | 0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/vis…forcement-learning-vlm-rl | 200 | HTML | 416 ms | 694 kB | 0 s
/docs/jp/meru/reinforcement-learning-rl-guide/vision-r…forcement-learning-vlm-rl | 200 | HTML | 383 ms | 694 kB | 0 s
/docs/jp/moderu/tutorials/how-to-run-llms-with-docker | 200 | HTML | 381 ms | 715 kB | 0 s
/docs/fr/modeles/tutorials/how-to-run-llms-with-docker | 200 | HTML | 302 ms | 713 kB | 0 s
/docs/fr/modeles/tutorials/qwq-32b-how-to-run-effectively | 200 | HTML | 272 ms | 936 kB | 0 s
/docs/jp/moderu/tutorials/qwq-32b-how-to-run-effectively | 200 | HTML | 370 ms | 936 kB | 0 s
/docs/jp/meru/fine-tuning-llms-guide/tutorial-how-to-f…llama-3-and-use-in-ollama | 200 | HTML | 315 ms | 1 MB | 0 s
/docs/fr/commencer/fine-tuning-llms-guide/tutorial-how…llama-3-and-use-in-ollama | 200 | HTML | 343 ms | 1 MB | 0 s
/docs/fr/modeles/tutorials/phi-4-reasoning-how-to-run-and-fine-tune | 200 | HTML | 267 ms | 607 kB | 0 s
/docs/jp/moderu/tutorials/phi-4-reasoning-how-to-run-and-fine-tune | 200 | HTML | 327 ms | 609 kB | 0 s
/docs/fr/modeles/tutorials/deepseek-v3-0324-how-to-run-locally | 200 | HTML | 275 ms | 1 MB | 0 s
/docs/jp/moderu/tutorials/deepseek-v3-0324-how-to-run-locally | 200 | HTML | 449 ms | 1 MB | 0 s
/docs/fr/modeles/tutorials/gemma-3-how-to-run-and-fine…-how-to-run-and-fine-tune | 200 | HTML | 375 ms | 857 kB | 0 s
/docs/fr/modeles/tutorials/kimi-k2-thinking-how-to-run-locally | 200 | HTML | 416 ms | 928 kB | 0 s
/docs/fr/modeles/tutorials/llama-4-how-to-run-and-fine-tune | 200 | HTML | 320 ms | 730 kB | 0 s
/docs/fr/modeles/tutorials/gemma-3-how-to-run-and-fine-tune | 200 | HTML | 354 ms | 771 kB | 0 s
/docs/fr/modeles/tutorials/grok-2 | 200 | HTML | 324 ms | 684 kB | 0 s
/docs/fr/modeles/qwen3-how-to-run-and-fine-tune/qwen3-2507 | 200 | HTML | 269 ms | 917 kB | 0 s
/docs/fr/modeles/tutorials/functiongemma | 200 | HTML | 289 ms | 848 kB | 0 s
/docs/fr/modeles/tutorials/deepseek-ocr-2 | 200 | HTML | 352 ms | 679 kB | 0 s
/docs/fr/modeles/tutorials/magistral-how-to-run-and-fine-tune | 200 | HTML | 367 ms | 1 MB | 0 s
/docs/jp/moderu/tutorials/kimi-k2-thinking-how-to-run-locally | 200 | HTML | 453 ms | 935 kB | 0 s
/docs/jp/moderu/tutorials/grok-2 | 200 | HTML | 407 ms | 685 kB | 0 s
/docs/jp/moderu/tutorials/magistral-how-to-run-and-fine-tune | 200 | HTML | 373 ms | 1 MB | 0 s
/docs/jp/moderu/tutorials/deepseek-ocr-2 | 200 | HTML | 388 ms | 680 kB | 0 s
/docs/jp/moderu/tutorials/gemma-3-how-to-run-and-fine-…-how-to-run-and-fine-tune | 200 | HTML | 376 ms | 861 kB | 0 s
/docs/jp/moderu/tutorials/functiongemma | 200 | HTML | 361 ms | 851 kB | 0 s
/docs/jp/moderu/tutorials/gemma-3-how-to-run-and-fine-tune | 200 | HTML | 338 ms | 773 kB | 0 s
/docs/jp/moderu/qwen3-how-to-run-and-fine-tune/qwen3-2507 | 200 | HTML | 337 ms | 920 kB | 0 s
/docs/jp/moderu/tutorials/llama-4-how-to-run-and-fine-tune | 200 | HTML | 314 ms | 731 kB | 0 s
/docs/zh/kai-shi-shi-yong/install/conda-install | 200 | HTML | 292 ms | 471 kB | 0 s
/docs/jp/meru/install/conda-install | 200 | HTML | 284 ms | 473 kB | 0 s
/docs/fr/commencer/install/conda-install | 200 | HTML | 410 ms | 472 kB | 0 s
/docs/de/los-gehts/install/conda-install | 200 | HTML | 304 ms | 472 kB | 0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-documentation | 200 | HTML | 298 ms | 814 kB | 0 s
/docs/jp/meru/reinforcement-learning-rl-guide/advanced-rl-documentation | 200 | HTML | 365 ms | 818 kB | 0 s
/docs/get-started/reinforcement-learning-rl-guide/adva…ation/fp16-vs-bf16-for-rl | 200 | HTML | 356 ms | 530 kB | 0 s
/docs/get-started/reinforcement-learning-rl-guide/adva…ntation/rl-reward-hacking | 200 | HTML | 315 ms | 471 kB | 0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-documentation | 200 | HTML | 306 ms | 814 kB | 0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-gu…advanced-rl-documentation | 200 | HTML | 480 ms | 814 kB | 0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/adv…po-reinforcement-learning | 200 | HTML | 331 ms | 505 kB | 0 s
/docs/jp/meru/reinforcement-learning-rl-guide/advanced…po-reinforcement-learning | 200 | HTML | 318 ms | 506 kB | 0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-gu…po-reinforcement-learning | 200 | HTML | 302 ms | 504 kB | 0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/adv…po-reinforcement-learning | 200 | HTML | 451 ms | 506 kB | 0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto | 200 | HTML | 308 ms | 630 kB | 0 s
/docs/jp/meru/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto | 200 | HTML | 282 ms | 631 kB | 0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-gu…eference-dpo-orpo-and-kto | 200 | HTML | 293 ms | 629 kB | 0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto | 200 | HTML | 338 ms | 629 kB | 0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-guide/grpo-long-context | 200 | HTML | 431 ms | 749 kB | 0 s
/docs/jp/meru/reinforcement-learning-rl-guide/grpo-long-context | 200 | HTML | 378 ms | 757 kB | 0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/grpo-long-context | 200 | HTML | 386 ms | 759 kB | 0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/grpo-long-context | 200 | HTML | 269 ms | 753 kB | 0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-guide/memory-efficient-rl | 200 | HTML | 348 ms | 901 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/sglang-guide | 200 | HTML | 184 ms | 752 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/vllm-guide | 200 | HTML | 355 ms | 551 kB | 0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/memory-efficient-rl | 200 | HTML | 97 ms | 907 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/sglang-guide | 200 | HTML | 294 ms | 755 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/vllm-guide | 200 | HTML | 301 ms | 553 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/vllm-guide | 200 | HTML | 278 ms | 552 kB | 0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/memory-efficient-rl | 200 | HTML | 393 ms | 905 kB | 0 s
/docs/basics/inference-and-deployment/lm-studio/how-to…dio-cli-in-linux-terminal | 200 | HTML | 97 ms | 616 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/sglang-guide | 200 | HTML | 511 ms | 754 kB | 0 s
/docs/jp/meru/reinforcement-learning-rl-guide/memory-efficient-rl | 200 | HTML | 477 ms | 907 kB | 0 s
/docs/basics/inference-and-deployment/saving-to-gguf/speculative-decoding | 200 | HTML | 113 ms | 543 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/sglang-guide | 200 | HTML | 281 ms | 757 kB | 0 s
/docs/basics/inference-and-deployment/vllm-guide/vllm-engine-arguments | 200 | HTML | 127 ms | 532 kB | 0 s
/docs/basics/inference-and-deployment/vllm-guide/lora-hot-swapping-guide | 200 | HTML | 100 ms | 512 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/vllm-guide | 200 | HTML | 317 ms | 553 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/troubleshooting-inference | 200 | HTML | 97 ms | 495 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/troubleshooting-inference | 200 | HTML | 299 ms | 496 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/troubleshooting-inference | 200 | HTML | 366 ms | 494 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/troubleshooting-inference | 200 | HTML | 255 ms | 496 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/llam…erver-and-openai-endpoint | 200 | HTML | 241 ms | 584 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/llama-server-and-openai-endpoint | 200 | HTML | 112 ms | 583 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/llama…erver-and-openai-endpoint | 200 | HTML | 287 ms | 582 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/llama-server-and-openai-endpoint | 200 | HTML | 299 ms | 585 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/saving-to-gguf | 200 | HTML | 247 ms | 704 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/saving-to-gguf | 200 | HTML | 156 ms | 689 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/saving-to-gguf | 200 | HTML | 450 ms | 693 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/saving-to-gguf | 200 | HTML | 350 ms | 705 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/deploy-llms-phone | 200 | HTML | 374 ms | 1 MB | 0 s
/docs/de/grundlagen/inference-and-deployment/deploy-llms-phone | 200 | HTML | 472 ms | 1 MB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/deploy-llms-phone | 200 | HTML | 404 ms | 1 MB | 0 s
/docs/jp/ji-ben/inference-and-deployment/deploy-llms-phone | 200 | HTML | 379 ms | 1 MB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/saving-to-ollama | 200 | HTML | 303 ms | 590 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/saving-to-ollama | 200 | HTML | 327 ms | 589 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/saving-to-ollama | 200 | HTML | 354 ms | 588 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/saving-to-ollama | 200 | HTML | 297 ms | 591 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/lm-studio | 200 | HTML | 309 ms | 702 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/lm-studio | 200 | HTML | 186 ms | 700 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/lm-studio | 200 | HTML | 363 ms | 703 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/lm-studio | 200 | HTML | 328 ms | 703 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/unsloth-inference | 200 | HTML | 298 ms | 492 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/unsloth-inference | 200 | HTML | 303 ms | 492 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/unsloth-inference | 200 | HTML | 285 ms | 491 kB | 0 s
/docs/zh/mo-xing/qwen3.5/fine-tune | 200 | HTML | 110 ms | 711 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/unsloth-inference | 200 | HTML | 339 ms | 491 kB | 0 s
/docs/de/modelle/qwen3.5/fine-tune | 200 | HTML | 134 ms | 713 kB | 0 s
/docs/models/qwen3.5/gguf-benchmarks | 200 | HTML | 113 ms | 941 kB | 0 s
/docs/fr/modeles/qwen3.5/fine-tune | 200 | HTML | 96 ms | 714 kB | 0 s
/docs/jp/moderu/qwen3.5/fine-tune | 200 | HTML | 463 ms | 716 kB | 0 s
/docs/zh/bo-ke | 307 | Redirect | 259 ms | 147 B | 0 s
/docs/fr/blog | 307 | Redirect | 282 ms | 145 B | 0 s
/docs/de/blog | 307 | Redirect | 315 ms | 145 B | 0 s
/docs/de/los-gehts/install/vs-code | 200 | HTML | 128 ms | 635 kB | 0 s
/docs/de/los-gehts/install/google-colab | 200 | HTML | 299 ms | 486 kB | 0 s
/docs/jp/meru/install/vs-code | 200 | HTML | 293 ms | 637 kB | 0 s
/docs/jp/burogu | 307 | Redirect | 672 ms | 149 B | 0 s
/docs/jp/meru/install/google-colab | 200 | HTML | 364 ms | 488 kB | 0 s
/docs/zh/kai-shi-shi-yong/install/google-colab | 200 | HTML | 311 ms | 485 kB | 0 s
/docs/fr/commencer/install/google-colab | 200 | HTML | 266 ms | 487 kB | 0 s
/docs/zh/kai-shi-shi-yong/install/vs-code | 200 | HTML | 304 ms | 633 kB | 0 s
/docs/de/grundlagen/multi-gpu-training-with-unsloth/ddp | 200 | HTML | 273 ms | 593 kB | 0 s
/docs/fr/commencer/install/vs-code | 200 | HTML | 424 ms | 635 kB | 0 s
/docs/zh/ji-chu-zhi-shi/multi-gpu-training-with-unsloth/ddp | 200 | HTML | 289 ms | 591 kB | 0 s
/docs/jp/ji-ben/multi-gpu-training-with-unsloth/ddp | 200 | HTML | 294 ms | 596 kB | 0 s
/docs/fr/commencer/fine-tuning-llms-guide/lora-hyperparameters-guide | 200 | HTML | 283 ms | 958 kB | 0 s
/docs/fr/commencer/fine-tuning-llms-guide/what-model-should-i-use | 200 | HTML | 266 ms | 523 kB | 0 s
/docs/fr/notions-de-base/unsloth-dynamic-2.0-ggufs/uns…c-ggufs-on-aider-polyglot | 200 | HTML | 298 ms | 838 kB | 0 s
/docs/de/los-gehts/fine-tuning-llms-guide/what-model-should-i-use | 200 | HTML | 156 ms | 522 kB | 0 s
/docs/zh/kai-shi-shi-yong/fine-tuning-llms-guide/what-model-should-i-use | 200 | HTML | 238 ms | 520 kB | 0 s
/docs/jp/meru/fine-tuning-llms-guide/what-model-should-i-use | 200 | HTML | 310 ms | 523 kB | 0 s
/docs/de/los-gehts/fine-tuning-llms-guide/lora-hyperparameters-guide | 200 | HTML | 352 ms | 954 kB | 0 s
/docs/zh/kai-shi-shi-yong/fine-tuning-llms-guide/lora-hyperparameters-guide | 200 | HTML | 108 ms | 946 kB | 0 s
/docs/jp/meru/fine-tuning-llms-guide/lora-hyperparameters-guide | 200 | HTML | 451 ms | 960 kB | 0 s
/docs/zh/ji-chu-zhi-shi/unsloth-dynamic-2.0-ggufs/unsl…c-ggufs-on-aider-polyglot | 200 | HTML | 294 ms | 835 kB | 0 s
/docs/de/grundlagen/unsloth-dynamic-2.0-ggufs/unsloth-…c-ggufs-on-aider-polyglot | 200 | HTML | 292 ms | 836 kB | 0 s
/docs/jp/ji-ben/unsloth-dynamic-2.0-ggufs/unsloth-dyna…c-ggufs-on-aider-polyglot | 200 | HTML | 382 ms | 840 kB | 0 s
/docs/jp/burogu/comfyui | 200 | HTML | 355 ms | 653 kB | 0 s
/docs/zh/bo-ke/comfyui | 200 | HTML | 112 ms | 645 kB | 0 s
/docs/fr/blog/comfyui | 200 | HTML | 272 ms | 652 kB | 0 s
/docs/fr/notions-de-base/multi-gpu-training-with-unsloth/ddp | 200 | HTML | 3 s | 596 kB | 0 s
/docs/de/blog/comfyui | 200 | HTML | 293 ms | 650 kB | 0 s
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/gpt-…-to-train-gpt-oss-with-rl | 200 | HTML | 301 ms | 1 MB | 0 s
/docs/jp/moderu/gpt-oss-how-to-run-and-fine-tune/gpt-o…-to-train-gpt-oss-with-rl | 200 | HTML | 316 ms | 1 MB | 0 s
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/gpt-…-to-train-gpt-oss-with-rl | 200 | HTML | 371 ms | 1 MB | 0 s
/docs/zh/mo-xing/gpt-oss-how-to-run-and-fine-tune/gpt-…-to-train-gpt-oss-with-rl | 200 | HTML | 300 ms | 1 MB | 0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/adv…ation/fp16-vs-bf16-for-rl | 200 | HTML | 355 ms | 535 kB | 0 s
/docs/jp/meru/reinforcement-learning-rl-guide/advanced…ation/fp16-vs-bf16-for-rl | 200 | HTML | 310 ms | 536 kB | 0 s
/docs/jp/meru/reinforcement-learning-rl-guide/advanced…ntation/rl-reward-hacking | 200 | HTML | 367 ms | 478 kB | 0 s
/docs/de/los-gehts/reinforcement-learning-rl-guide/adv…ntation/rl-reward-hacking | 200 | HTML | 681 ms | 477 kB | 0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/adv…ation/fp16-vs-bf16-for-rl | 200 | HTML | 311 ms | 536 kB | 0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-gu…ation/fp16-vs-bf16-for-rl | 200 | HTML | 281 ms | 534 kB | 0 s
/docs/fr/commencer/reinforcement-learning-rl-guide/adv…ntation/rl-reward-hacking | 200 | HTML | 346 ms | 478 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/lm-st…dio-cli-in-linux-terminal | 200 | HTML | 233 ms | 622 kB | 0 s
/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-gu…ntation/rl-reward-hacking | 200 | HTML | 550 ms | 475 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/vllm-…e/lora-hot-swapping-guide | 200 | HTML | 266 ms | 517 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/vllm-…ide/vllm-engine-arguments | 200 | HTML | 264 ms | 536 kB | 0 s
/docs/zh/ji-chu-zhi-shi/inference-and-deployment/savin…gguf/speculative-decoding | 200 | HTML | 300 ms | 548 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/lm-s…dio-cli-in-linux-terminal | 200 | HTML | 329 ms | 623 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/savi…gguf/speculative-decoding | 200 | HTML | 288 ms | 549 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/vllm…e/lora-hot-swapping-guide | 200 | HTML | 315 ms | 518 kB | 0 s
/docs/fr/notions-de-base/inference-and-deployment/vllm…ide/vllm-engine-arguments | 200 | HTML | 317 ms | 538 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/vllm-guide/lora-hot-swapping-guide | 200 | HTML | 356 ms | 517 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/saving-to-gguf/speculative-decoding | 200 | HTML | 302 ms | 548 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/vllm-guide/vllm-engine-arguments | 200 | HTML | 316 ms | 538 kB | 0 s
/docs/de/grundlagen/inference-and-deployment/lm-studio…dio-cli-in-linux-terminal | 200 | HTML | 301 ms | 622 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/lm-studio/how…dio-cli-in-linux-terminal | 200 | HTML | 318 ms | 624 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/saving-to-gguf/speculative-decoding | 200 | HTML | 333 ms | 549 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/vllm-guide/vllm-engine-arguments | 200 | HTML | 305 ms | 539 kB | 0 s
/docs/jp/ji-ben/inference-and-deployment/vllm-guide/lora-hot-swapping-guide | 200 | HTML | 269 ms | 518 kB | 0 s
/docs/zh/mo-xing/qwen3.5/gguf-benchmarks | 200 | HTML | 296 ms | 946 kB | 0 s
/docs/de/modelle/qwen3.5/gguf-benchmarks | 200 | HTML | 354 ms | 948 kB | 0 s
/docs/jp/moderu/qwen3.5/gguf-benchmarks | 200 | HTML | 267 ms | 950 kB | 0 s
/docs/fr/modeles/qwen3.5/gguf-benchmarks | 200 | HTML | 479 ms | 949 kB | 0 s

Best practices

Found 11 row(s).
Analysis name | OK | Notice | Warning | Critical
Invalid inline SVGs | 2549 | 0 | 0 | 0
Non-clickable phone numbers | 15 | 0 | 0 | 0
DOM depth (> 30) | 560 | 0 | 15 | 0
Large inline SVGs (> 5120 B) | 2549 | 0 | 0 | 0
Duplicate inline SVGs (> 5 and > 1024 B) | 2549 | 0 | 0 | 0
Heading structure | 620 | 0 | 620 | 0
Title uniqueness (> 10%) | 559 | 0 | 0 | 0
Description uniqueness (> 10%) | 521 | 0 | 0 | 0
Brotli support | 0 | 0 | 575 | 0
WebP support | 0 | 0 | 1 | 0
AVIF support | 0 | 0 | 1 | 0
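
The Brotli row above means 575 responses were not served with Brotli compression. Whether a given response used Brotli can be read off its Content-Encoding header; the sketch below is a hypothetical helper (the function name and the lowercased-headers dict are our assumptions, not part of the crawler), shown only to make the check concrete:

```python
def served_with_brotli(headers: dict) -> bool:
    """True when the response body was Brotli-compressed.

    `headers` is a plain dict of response headers with lowercased
    names; Content-Encoding may list several codings, e.g. "gzip, br".
    """
    encodings = headers.get("content-encoding", "")
    return "br" in [e.strip() for e in encodings.split(",")]

print(served_with_brotli({"content-encoding": "br"}))    # True
print(served_with_brotli({"content-encoding": "gzip"}))  # False
```

Note that a server only sends `Content-Encoding: br` when the request advertised it via `Accept-Encoding: br`, so a crawler must send that request header before drawing conclusions.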

Large inline SVGs

No problems found.


Duplicate inline SVGs

No problems found.


Invalid inline SVGs

No problems found.


Missing quotes on attributes

No problems found.


DOM depth

Severity | Occurs | Detail | Affected URLs (max 5)
warning | 5 | The DOM depth exceeds the warning limit: 30. Found depth: 30. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 5 | The DOM depth exceeds the warning limit: 30. Found depth: 34. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 5 | The DOM depth exceeds the warning limit: 30. Found depth: 36. | URL 1, URL 2, URL 3, URL 4, URL 5
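
A depth check like the one behind this table can be done with nothing more than the standard library. The sketch below is a minimal illustration, not the crawler's actual implementation (the class and function names are ours); it counts how deeply open elements nest while parsing:

```python
from html.parser import HTMLParser

# Void elements never take a closing tag, so they must not add depth.
VOID_TAGS = {"img", "br", "hr", "meta", "link", "input", "source", "wbr"}

class DepthMeter(HTMLParser):
    """Track the deepest nesting level of open elements seen so far."""

    def __init__(self):
        super().__init__()
        self.depth = 0
        self.max_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in VOID_TAGS:
            return
        self.depth += 1
        self.max_depth = max(self.max_depth, self.depth)

    def handle_endtag(self, tag):
        if tag not in VOID_TAGS:
            self.depth = max(0, self.depth - 1)

def dom_depth(html: str) -> int:
    meter = DepthMeter()
    meter.feed(html)
    return meter.max_depth

# A 3-level document: html > body > p
print(dom_depth("<html><body><p>hi</p></body></html>"))  # 3
```

A page would then be flagged when `dom_depth(page_html) > 30`, matching the warning limit quoted in the table.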

Heading structure

Severity | Occurs | Detail | Affected URLs (max 5)
warning | 280 | Heading structure is skipping levels: found an <h4> after an <h2>. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 265 | Heading structure is skipping levels: found an <h3> after an <h1>. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 130 | Heading structure is skipping levels: found an <h4> after an <h1>. | URL 1, URL 2, URL 3, URL 4, URL 5
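
The rule these warnings enforce is that a heading may only go one level deeper than the previous one (h2 after h1 is fine; h4 directly after h2 skips h3). A minimal sketch of such a check, assuming the headings have already been extracted in document order (the function name is ours, not the crawler's):

```python
def heading_skips(headings):
    """Return (previous, current) pairs where the outline jumps more
    than one level deeper, e.g. an <h4> directly after an <h2>.

    `headings` is a list of tag names like ["h1", "h2", "h4"] in
    document order. Moving back up (h3 -> h2) is always allowed.
    """
    skips = []
    prev = None
    for tag in headings:
        level = int(tag[1])
        if prev is not None and level > prev + 1:
            skips.append((f"h{prev}", tag))
        prev = level
    return skips

print(heading_skips(["h1", "h3", "h4"]))  # [('h1', 'h3')]
```

Applied to the pages above, each reported pair ("found an <h4> after an <h2>") corresponds to one tuple this helper would return.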

Non-clickable phone numbers

No problems found.


Title uniqueness

No problems found.


Description uniqueness

No problems found.

Accessibility

Analysis name | OK | Notice | Warning | Critical
Missing image alt attributes | 309 | 0 | 2415 | 0
Missing form labels | 0 | 0 | 5 | 0
Missing aria labels | 399 | 0 | 105 | 0
Missing html lang attribute | 5 | 0 | 0 | 0
Missing roles | 0 | 0 | 8 | 0

Valid HTML

No problems found.


Missing image alt attributes

Severity | Occurs | Detail | Affected URLs (max 5)
warning | 2375 | <img class="block" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 345 | <img class="block rounded-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 80 | <img class="inline" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
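
Each of these findings is an <img> element with no alt attribute at all (an empty alt="" would pass, since it validly marks a decorative image). A minimal sketch of such an audit with the standard html.parser, purely illustrative (class and function names are our own):

```python
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Collect <img> tags that carry no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # alt="" is acceptable (decorative image); only a truly
        # absent attribute counts as a finding.
        if tag == "img" and "alt" not in a:
            self.missing.append(a.get("src", "?"))

def imgs_missing_alt(html: str):
    audit = AltAudit()
    audit.feed(html)
    return audit.missing

print(imgs_missing_alt('<img src="a.png"><img src="b.png" alt="logo">'))  # ['a.png']
```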

Missing form labels

Severity | Occurs | Detail | Affected URLs (max 5)
warning | 575 | <input class="peer max-* grow shrink resize-* leading-* text-* outline-* placeholder:text-* placeholder-* aria-* -* p-*" name="search-input" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
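
The finding above is the site-wide search input, which has neither an aria-label nor a <label for="…"> pointing at it. A minimal sketch of how such a check can work (function and class names are ours; aria-labelledby and wrapping <label> elements are deliberately ignored to keep it short):

```python
from html.parser import HTMLParser

class LabelAudit(HTMLParser):
    """Record input ids and the ids referenced by <label for="...">."""

    def __init__(self):
        super().__init__()
        self.inputs = []          # (id or None, has aria-label)
        self.labelled_ids = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input":
            self.inputs.append((a.get("id"), "aria-label" in a))
        elif tag == "label" and "for" in a:
            self.labelled_ids.add(a["for"])

def unlabelled_inputs(html: str) -> int:
    """Count inputs with neither an aria-label nor a matching label."""
    audit = LabelAudit()
    audit.feed(html)
    return sum(1 for el_id, has_aria in audit.inputs
               if not has_aria and el_id not in audit.labelled_ids)

print(unlabelled_inputs('<label for="q">Search</label><input id="q">'
                        '<input name="search-input">'))  # 1
```

Adding `aria-label="Search"` to the flagged input would clear all 575 occurrences at once, since it is the same component on every page.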

Missing aria labels

Found 95 row(s).
Severity | Occurs | Detail | Affected URLs (max 5)
warning | 27685 | <a class="group/toclink toclink relative transition-* flex flex-* justify-* items-* gap-* circular-* rounded-* straight-* p-* pl-* text-* font-* text-* text-* hover:bg-* hover:text-* contrast-* contrast-* contrast-* contrast-* before:contents[] before:-* before:absolute before:inset-* sidebar-* sidebar-* [&+div_* [&+div_* [&+div_*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 12164 | <a class="underline decoration-* underline-* links-* links-* links-* hover:links-* contrast-* contrast-* links-* hover:links-* hover:links-* transition-* duration-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 8670 | <button class="button group/button inline-* items-* gap-* rounded-* straight-* circular-* border-* hover:border-* disabled:border-* depth-* hover:depth-* focus-* active:depth-* dark:shadow-* not-* contrast-* contrast-* contrast-* contrast-* contrast-* contrast-* hover:depth-* focus-* data-* active:depth-* transition-* grow-* shrink-* truncate max-* align-* disabled:cursor-* disabled:translate-* disabled:shadow-* bg-* border-* contrast-* shadow-* translate-* hover:not-* hover:not-* focus-* focus-* aria-* aria-* contrast-* disabled:text-* disabled:bg-* p-* text-* rounded-* ml-* text-* hover:bg-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 5175 | <a class="font-* text-* links-* links-* contrast-* contrast-* underline-* links-* links-* links-* links-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 4550 | <button class="button group/button inline-* items-* gap-* rounded-* straight-* circular-* border border-* hover:border-* disabled:border-* depth-* hover:depth-* focus-* active:depth-* shadow-* dark:shadow-* not-* contrast-* contrast-* contrast-* contrast-* contrast-* contrast-* hover:depth-* focus-* data-* active:depth-* transition-* grow-* shrink-* truncate max-* align-* disabled:cursor-* disabled:translate-* disabled:shadow-* bg-* depth-* text-* hover:bg-* hover:not-* hover:not-* contrast-* disabled:bg-* disabled:text-* p-* text-* rounded-* px-* absolute top-* right-* z-* self-* justify-* font-* leading-* opacity-* backdrop-* group-* translate-* print:hidden" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 1740 | <a class="relative z-* text-* w-* py-* px-* transition-* motion-* duration-* rounded-* straight-* circular-* sidebar-* hover:bg-* theme-* hover:text-* contrast-* contrast-* contrast-* sidebar-* border-* sidebar-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 1725 | <a class="flex items-* gap-* shrink contrast-* truncate text-* links-* links-* links-* links-* underline-* links-* links-* links-* links-* links-* links-* theme-* hover:theme-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 1290 | <a class="relative z-* text-* w-* py-* px-* transition-* motion-* duration-* rounded-* straight-* circular-* sidebar-* hover:bg-* theme-* hover:text-* contrast-* contrast-* contrast-* sidebar-* border-* sidebar-* subitem sidebar-* opacity-* contrast-* sidebar-* sidebar-* sidebar-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 1150 | <a class="group/headerlogo min-* shrink flex items-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 1005 | <a class="decoration-* underline-* links-* links-* links-* hover:links-* contrast-* contrast-* links-* hover:links-* hover:links-* transition-* duration-* no-* hover:underline text-* tracking-* font-* uppercase flex items-* gap-* contrast-* contrast-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <a class="flex w-* items-* justify-* overflow-* circular-* rounded-* straight-* px-* py-* text-* text-* theme-* theme-* ring-* transition-* bg-* decoration-* ring-* pr-* hover:bg-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <button class="button group/button inline-* items-* gap-* rounded-* straight-* circular-* border-* hover:border-* disabled:border-* depth-* hover:depth-* focus-* active:depth-* dark:shadow-* not-* contrast-* contrast-* contrast-* contrast-* contrast-* contrast-* hover:depth-* focus-* data-* active:depth-* transition-* grow-* shrink-* truncate max-* align-* disabled:cursor-* disabled:translate-* disabled:shadow-* text-* border-* contrast-* shadow-* translate-* hover:not-* hover:not-* focus-* focus-* aria-* aria-* contrast-* disabled:text-* disabled:bg-* p-* text-* px-* group/dropdown -* bg-* lg:max-* max-* md:[&_* [&_* ml-* py-* [&_*" id="radix-_R_9cmj6iv5ubsnpfivb_" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <button class="group/dropdown text-* hover:text-* dark:hover:text-* theme-* theme-* flex gap-* items-*" id="radix-_R_7l36iv5ubsnpfivb_" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <button class="button group/button inline-* items-* gap-* rounded-* straight-* circular-* border-* hover:border-* disabled:border-* depth-* hover:depth-* focus-* active:depth-* dark:shadow-* not-* contrast-* contrast-* contrast-* contrast-* contrast-* contrast-* hover:depth-* focus-* data-* active:depth-* transition-* grow-* shrink-* truncate max-* align-* disabled:cursor-* disabled:translate-* disabled:shadow-* text-* border-* contrast-* shadow-* translate-* hover:not-* hover:not-* focus-* focus-* aria-* aria-* contrast-* disabled:text-* disabled:bg-* p-* text-* px-* group/dropdown -* bg-* lg:max-* max-* md:[&_* flex! site-* hover:site-* focus-* aria-*" id="radix-_R_9l36iv5ubsnpfivb_" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 550 | <a class="group/toclink toclink relative transition-* flex flex-* justify-* items-* gap-* circular-* rounded-* straight-* p-* pl-* text-* text-* contrast-* before:contents[] before:-* before:absolute before:inset-* sidebar-* [&+div_* [&+div_* [&+div_* font-* sidebar-* before:bg-* text-* sidebar-* [html.sidebar-* [html.sidebar-* [html.sidebar-* [html.sidebar-* hover:bg-* hover:text-* hover:before:bg-* hover:sidebar-* contrast-* contrast-* contrast-* contrast-* contrast-* contrast-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 545 | <a class="group text-* p-* flex gap-* flex-* flex-* items-* pr-* border border-* rounded-* circular-* straight-* hover:border-* text-* md:p-* md:text-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 545 | <a class="group text-* p-* flex gap-* flex-* flex-* items-* pl-* border border-* rounded-* circular-* straight-* hover:border-* text-* md:p-* md:text-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 495 | <a class="relative z-* text-* w-* py-* px-* transition-* motion-* duration-* rounded-* straight-* circular-* sidebar-* contrast-* contrast-* contrast-* sidebar-* border-* sidebar-* text-* hover:text-* contrast-* contrast-* hover:bg-* theme-* [html.sidebar-* theme-* tint:font-* contrast-* sidebar-* sidebar-* sidebar-* [html.theme-* [html.theme-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 395 | <a class="link-* absolute inset-* z-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 175 | <a class="group flex flex-* justify-* items-* gap-* ring-* ring-* rounded-* straight-* circular-* px-* py-* transition-* hover:ring-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 160 | <button class="button group/button inline-* items-* gap-* rounded-* straight-* circular-* border border-* hover:border-* disabled:border-* depth-* hover:depth-* focus-* active:depth-* shadow-* dark:shadow-* not-* contrast-* contrast-* contrast-* contrast-* contrast-* contrast-* hover:depth-* focus-* data-* active:depth-* transition-* grow-* shrink-* truncate max-* align-* disabled:cursor-* disabled:translate-* disabled:shadow-* depth-* hover:bg-* hover:not-* hover:not-* contrast-* disabled:bg-* disabled:text-* p-* rounded-* px-* pointer-* z-* my-* bg-* text-* text-* opacity-* focus:opacity-* group-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 135 | <a class="group flex flex-* justify-* items-* gap-* ring-* ring-* rounded-* straight-* circular-* px-* py-* transition-* hover:ring-* mx-* page-* w-* decoration-* max-* print:break-* page-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 75 | <a class="group flex flex-* justify-* items-* gap-* ring-* ring-* rounded-* straight-* circular-* px-* py-* transition-* hover:ring-* mx-* page-* w-* decoration-* max-* print:break-* page-* page-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 40 | <a class="hover:underline" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 40 | <button class="button group/button inline-* items-* gap-* rounded-* straight-* circular-* border border-* hover:border-* disabled:border-* depth-* hover:depth-* focus-* active:depth-* shadow-* dark:shadow-* not-* contrast-* contrast-* contrast-* contrast-* contrast-* contrast-* hover:depth-* focus-* data-* active:depth-* transition-* grow-* shrink-* truncate max-* align-* disabled:cursor-* disabled:translate-* disabled:shadow-* bg-* depth-* text-* hover:bg-* hover:not-* hover:not-* contrast-* disabled:bg-* disabled:text-* p-* text-* rounded-* px-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 40 | <a class="button group/button inline-* items-* gap-* rounded-* straight-* circular-* border border-* hover:border-* disabled:border-* depth-* hover:depth-* focus-* active:depth-* shadow-* dark:shadow-* not-* contrast-* contrast-* contrast-* contrast-* contrast-* contrast-* hover:depth-* focus-* data-* active:depth-* transition-* grow-* shrink-* truncate max-* align-* disabled:cursor-* disabled:translate-* disabled:shadow-* bg-* depth-* text-* hover:bg-* hover:not-* hover:not-* contrast-* disabled:bg-* disabled:text-* p-* text-* rounded-* px-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 15 | <a class="underline decoration-* underline-* links-* links-* links-* hover:links-* contrast-* contrast-* links-* hover:links-* hover:links-* transition-* duration-* flex flex-* items-* gap-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 5 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-fp***" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 4 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-cli-lms-load--lms-server-start" *** > | URL 1, URL 2, URL 3, URL 4
warning | 2 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-windows-setups" *** > | URL 1, URL 2
warning | 2 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-gguf--4-bit" *** > | URL 1, URL 2
warning | 2 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-gui-developer-tab" *** > | URL 1, URL 2
warning | 2 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-cli-import-lms-import" *** > | URL 1, URL 2
warning | 2 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-mac-linux-setups" *** > | URL 1, URL 2
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-gui-onglet-developpeur" *** > | /docs/fr/notions-de-base/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-logging" *** > | /docs/new/studio/start
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-instruction-16-bits" *** > | /docs/fr/commencer/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-gguf--4-wei" *** > | /docs/zh/kai-shi-shi-yong/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-rkarullmnosettoappugawattaraclaude-codenosettoappuwoimasuclaude-codehatminarudesuruanthropicnojentok" *** > | /docs/jp/ji-ben/claude-code
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-guidevelopertabu" *** > | /docs/jp/ji-ben/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-cliinptolms-import" *** > | /docs/jp/ji-ben/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-ben-di" *** > | /docs/zh/ji-chu-zhi-shi/inference-and-de…ment/saving-to-gguf
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-manuelles-speichern" *** > | /docs/de/grundlagen/inference-and-deployment/saving-to-gguf
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-planung" *** > | /docs/de/neu/studio/start
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-ri-zhi-ji-lu" *** > | /docs/zh/xin-zeng/studio/start
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-manual-saving" *** > | /docs/basics/inference-and-deployment/saving-to-gguf
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-zui-shi-hua" *** > | /docs/jp/xin-ji-neng/studio/start
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-ji-chu-4-wei-he-16-wei" *** > | /docs/zh/kai-shi-shi-yong/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-protokollierung" *** > | /docs/de/neu/studio/start
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-manueller-import-ordnerstruktur" *** > | /docs/de/grundlagen/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-sauvegarde-manuelle" *** > | /docs/fr/notions-de-base/inference-and-d…ment/saving-to-gguf
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-lai-zi-hugging-face" *** > | /docs/zh/ji-chu-zhi-shi/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-bsu-4-16bitto" *** > | /docs/jp/meru/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-mac-linux-she-zhi" *** > | /docs/zh/ji-chu-zhi-shi/claude-code
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-shou-dong-bao-cun" *** > | /docs/zh/ji-chu-zhi-shi/inference-and-de…ment/saving-to-gguf
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-zhi-ling-16-wei" *** > | /docs/zh/kai-shi-shi-yong/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-hugging-facekara" *** > | /docs/jp/ji-ben/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-instrukt-16-bit" *** > | /docs/de/los-gehts/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-diao-du" *** > | /docs/zh/xin-zeng/studio/start
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-manual-import-folder-structure" *** > | /docs/basics/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-journalisation" *** > | /docs/fr/nouveau/studio/start
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-localement" *** > | /docs/fr/notions-de-base/inference-and-d…ment/saving-to-gguf
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-base-4-bits-et-16-bits" *** > | /docs/fr/commencer/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-settings.json" *** > | /docs/fr/notions-de-base/claude-code
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-gguf-4bitto" *** > | /docs/jp/meru/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-instruct-16bitto" *** > | /docs/jp/meru/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-shou-dong-dao-ru-wen-jian-jia-jie-gou" *** > | /docs/zh/ji-chu-zhi-shi/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-ming-ling-xing-lms-load--lms-server-start" *** > | /docs/zh/ji-chu-zhi-shi/inference-and-deployment/lm-studio
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-windows-she-zhi" *** > | /docs/zh/ji-chu-zhi-shi/claude-code
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-optimierung" *** > | /docs/de/neu/studio/start
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-basis-4-and-16-bit" *** > | /docs/de/los-gehts/unsloth-model-catalog
warning | 1 | <button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-deno" *** > | /docs/jp/ji-ben/inference-and-deployment/saving-to-gguf
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-you-hua" *** >/docs/zh/xin-zeng/studio/start
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-import-cli-lms-import" *** >/docs/fr/notions-de-base/inference-and-deployment/lm-studio
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-depuis-hugging-face" *** >/docs/fr/notions-de-base/inference-and-deployment/lm-studio
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-from-hugging-face" *** >/docs/basics/inference-and-deployment/lm-studio
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-planification" *** >/docs/fr/nouveau/studio/start
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-gguf--4-bits" *** >/docs/fr/commencer/unsloth-model-catalog
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-importation-manuelle-structure-du-dossier" *** >/docs/fr/notions-de-base/inference-and-deployment/lm-studio
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-lokal" *** >/docs/de/grundlagen/inference-and-deployment/saving-to-gguf
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-instruct-16-bit" *** >/docs/get-started/unsloth-model-catalog
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-inptoforuda" *** >/docs/jp/ji-ben/inference-and-deployment/lm-studio
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-sukejru" *** >/docs/jp/xin-ji-neng/studio/start
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-configurations-mac-linux" *** >/docs/fr/notions-de-base/claude-code
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-optimization" *** >/docs/new/studio/start
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-locally" *** >/docs/basics/inference-and-deployment/saving-to-gguf
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-rkaruni" *** >/docs/jp/ji-ben/inference-and-deployment/saving-to-gguf
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-cli-dao-ru-lms-import" *** >/docs/zh/ji-chu-zhi-shi/inference-and-deployment/lm-studio
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-base-4-and-16-bit" *** >/docs/get-started/unsloth-model-catalog
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-von-hugging-face" *** >/docs/de/grundlagen/inference-and-deployment/lm-studio
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-optimisation" *** >/docs/fr/nouveau/studio/start
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-schedule" *** >/docs/new/studio/start
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-rogu" *** >/docs/jp/xin-ji-neng/studio/start
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-tu-xing-jie-mian-kai-fa-zhe-xuan-xiang-ka" *** >/docs/zh/ji-chu-zhi-shi/inference-and-deployment/lm-studio
warning1<button class="relative inline-* max-* truncate px-* py-* font-* text-* transition-*" id="tab-claudecode.disableloginprompt-true" *** >/docs/jp/ji-ben/claude-code

Missing roles

Severity | Occurs | Detail | Affected URLs (max 5)
warning | 1725 | <nav class="flex flex-* gap-* text-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <nav ***> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <header class="flex flex-* h-* sticky top-* pt-* z-* w-* flex-* shadow-* shadow-* bg-* theme-* [html.sidebar-* theme-* theme-* contrast-* text-* backdrop-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <footer class="border-* border-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <aside class="group/aside order-* hidden max-* pt-* pb-* opacity-* xl:flex overflow-* xl:max-* xl:opacity-* xl:ml-* xl:max-* xl:max-* xl:max-* xl:max-* hydrated:starting:ml-* hydrated:starting:max-* hydrated:starting:opacity-* transition-* duration-* motion-* transition-* basis-* grow-* shrink-* break-* text-* contrast-* sticky lg:top-* lg:max-* lg:site-* lg:site-* lg:site-* lg:site-* lg:[html[style*="> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <header class="max-* page-* mx-* mb-* space-* page-* page-* page-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <main class="relative min-* flex-* max-* py-* break-* @container page-* site-* page-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | <aside class="side-* fixed inset-* z-* left-* max-* group/table-* text-* grow-* shrink-* w-* md:w-* lg:w-* basis-* lg:page-* max-* max-* border-* lg:flex! embed:lg:page-* lg:animate-* lg:sticky lg:mr-* lg:z-* lg:top-* lg:h-* lg:announcement:h-* lg:site-* lg:site-* lg:announcement:site-* lg:site-* lg:site-* lg:site-* lg:not-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
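The landmark check behind these warnings can be sketched in a few lines. This is a minimal illustration (an assumption about the crawler's logic, not its actual implementation): flag `<nav>`, `<header>`, `<footer>`, `<main>`, and `<aside>` elements that carry neither a `role` nor an `aria-label` attribute.

```python
# Minimal sketch of a landmark-role check, using only the standard library.
from html.parser import HTMLParser

LANDMARKS = {"nav", "header", "footer", "main", "aside"}

class MissingRoleChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # landmark tags lacking both role and aria-label

    def handle_starttag(self, tag, attrs):
        if tag in LANDMARKS:
            names = {name for name, _ in attrs}
            if not ({"role", "aria-label"} & names):
                self.missing.append(tag)

checker = MissingRoleChecker()
checker.feed('<header class="x"><nav role="navigation">menu</nav></header>')
print(checker.missing)  # the <header> is flagged, the <nav> is not
```

Running a checker like this locally on a template before deployment catches the class of warning reported above for all 575 pages at once, since they share the same layout components.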

Missing html lang attribute

No problems found.

Security

Header | OK | Notice | Warning | Critical | Recommendation
X-Frame-Options | 0 | 0 | 575 | 0 | X-Frame-Options header is not set. It prevents clickjacking attacks when set to 'deny' or 'sameorigin'.
Feature-Policy | 0 | 0 | 575 | 0 | Feature-Policy header is not set. It allows enabling/disabling browser APIs and features for security. Not important if Permissions-Policy is set.
Permissions-Policy | 0 | 0 | 575 | 0 | Permissions-Policy header is not set. It allows enabling/disabling browser APIs and features for security.
Server | 0 | 575 | 0 | 0 | Server header is set to 'cloudflare'. It is better not to reveal the technologies in use.
Strict-Transport-Security | 575 | 0 | 0 | 0 |
X-XSS-Protection | 575 | 0 | 0 | 0 |
X-Content-Type-Options | 575 | 0 | 0 | 0 |
Referrer-Policy | 575 | 0 | 0 | 0 |
Content-Security-Policy | 575 | 0 | 0 | 0 |

Security headers

Severity | Occurs | Detail | Affected URLs (max 5)
warning | 575 | Feature-Policy header is not set. It allows enabling/disabling browser APIs and features for security. Not important if Permissions-Policy is set. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | Permissions-Policy header is not set. It allows enabling/disabling browser APIs and features for security. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 575 | X-Frame-Options header is not set. It prevents clickjacking attacks when set to 'deny' or 'sameorigin'. | URL 1, URL 2, URL 3, URL 4, URL 5
notice | 575 | Server header is set to 'cloudflare'. It is better not to reveal the technologies in use. | URL 1, URL 2, URL 3, URL 4, URL 5
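The findings above can be triaged with a small helper: given a page's response headers, list which of the recommended security headers are absent. The recommended set below follows this report's table (it is illustrative, not exhaustive), and the example header values are assumptions for demonstration, not captured responses.

```python
# Security headers this report recommends (subset; Feature-Policy omitted
# because it is superseded by Permissions-Policy).
RECOMMENDED = [
    "X-Frame-Options",
    "Permissions-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Referrer-Policy",
    "Content-Security-Policy",
]

def missing_security_headers(headers):
    # HTTP header names are case-insensitive, so compare lowercased.
    present = {name.lower() for name in headers}
    return [h for h in RECOMMENDED if h.lower() not in present]

# Hypothetical response matching the table: HSTS, CSP, etc. are set,
# but X-Frame-Options and Permissions-Policy are not.
observed = {
    "Server": "cloudflare",
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Content-Security-Policy": "default-src 'self'",
}
print(missing_security_headers(observed))  # ['X-Frame-Options', 'Permissions-Policy']
```

Pointing such a check at a handful of representative URLs (for example with `urllib.request`) is a quick way to verify the fix once the missing headers are added at the CDN or origin.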

TOP non-unique titles

Count 🔽 | Title
5 | Google Colab | Unsloth Documentation
5 | Grok 2 | Unsloth Documentation
4 | Unsloth Data Recipes | Unsloth Documentation
2 | GSPO Reinforcement Learning | Unsloth Documentation
2 | FP8 Reinforcement Learning | Unsloth Documentation
2 | RL Reward Hacking | Unsloth Documentation
2 | gpt-oss Reinforcement Learning | Unsloth Documentation
2 | Quantization-Aware Training (QAT) | Unsloth Documentation

TOP non-unique descriptions

Count 🔽 | Description
55 | (empty description)

SEO metadata

Found 200 row(s).
URL 🔼 | Indexing | Title | H1 | Description | Keywords
/docs | Allowed | Unsloth Docs | Unsloth Documentation | 🦥 Unsloth Docs | Unsloth is an open-source framework for running and training models.
/docs/basics/chat-templates | Allowed | Chat Templates | Unsloth Documentation | 💬 Chat Templates | Learn the fundamentals and customization options of chat templates, including Conversational, ChatML, ShareGPT, Alpaca formats, and more!
/docs/basics/claude-code | Allowed | How to Run Local LLMs with Claude Code | Unsloth Documentation | claude How to Run Local LLMs with Claude Code | Guide to use open models with Claude Code on your local device.
/docs/basics/codex | Allowed | How to Run Local LLMs with OpenAI Codex | Unsloth Documentation | openai How to Run Local LLMs with OpenAI Codex | Use open models with OpenAI Codex on your device locally.
/docs/basics/continued-pretraining | Allowed | Continued Pretraining | Unsloth Documentation | ♻️ Continued Pretraining | AKA as Continued Finetuning. Unsloth allows you to continually pretrain so a model can learn a new language.
/docs/basics/dgx-station | Allowed | Fine-Tuning LLMs on NVIDIA DGX Station with Unsloth | Unsloth Documentation | microchip-ai Fine-Tuning LLMs on NVIDIA DGX Station with Unsloth | NVIDIA DGX Station tutorial on how to fine-tune with notebooks from Unsloth.
/docs/basics/finetuning-from-last-checkpoint | Allowed | Finetuning from Last Checkpoint | Unsloth Documentation | 🏁 Finetuning from Last Checkpoint | Checkpointing allows you to save your finetuning progress so you can pause it and then continue.
/docs/basics/inference-and-deployment | Allowed | Inference & Deployment | Unsloth Documentation | 🖥️ Inference & Deployment | Learn how to save your finetuned model so you can run it in your favorite inference engine.
/docs/basics/inference-and-deployment/deploy-llms-phone | Allowed | How to Run and Deploy LLMs on your iOS or Android Phone | Unsloth Documentation | 📱 How to Run and Deploy LLMs on your iOS or Android Phone | Tutorial for fine-tuning your own LLM and deploying it on your Android or iPhone with ExecuTorch.
/docs/basics/inference-and-deployment/llama-server-and-openai-endpoint | Allowed | llama-server & OpenAI endpoint Deployment Guide | Unsloth Documentation | llama-server & OpenAI endpoint Deployment Guide | Deploying via llama-server with an OpenAI compatible endpoint
/docs/basics/inference-and-deployment/lm-studio | Allowed | Deploying models to LM Studio | Unsloth Documentation | Deploying models to LM Studio | Saving models to GGUF so you can run and deploy them to LM Studio
/docs/basics/inference-and-deployment/lm-studio/how-to-install-lm-studio-cli-in-linux-terminal | Allowed | How to install LM Studio CLI in Linux Terminal | Unsloth Documentation | 👾 How to install LM Studio CLI in Linux Terminal | LM Studio CLI installation guide without a UI in a terminal instance.
/docs/basics/inference-and-deployment/saving-to-gguf | Allowed | Saving to GGUF | Unsloth Documentation | Saving to GGUF | Saving models to 16bit for GGUF so you can use it for Ollama, Jan AI, Open WebUI and more!
/docs/basics/inference-and-deployment/saving-to-gguf/speculative-decoding | Allowed | Speculative Decoding | Unsloth Documentation | Speculative Decoding | Speculative Decoding with llama-server, llama.cpp, vLLM and more for 2x faster inference
/docs/basics/inference-and-deployment/saving-to-ollama | Allowed | Saving to Ollama | Unsloth Documentation | Saving to Ollama |
/docs/basics/inference-and-deployment/sglang-guide | Allowed | SGLang Deployment & Inference Guide | Unsloth Documentation | SGLang Deployment & Inference Guide | Guide on saving and deploying LLMs to SGLang for serving LLMs in production
/docs/basics/inference-and-deployment/troubleshooting-inference | Allowed | Troubleshooting Inference | Unsloth Documentation | Troubleshooting Inference | If you're experiencing issues when running or saving your model.
/docs/basics/inference-and-deployment/unsloth-inference | Allowed | Unsloth Inference | Unsloth Documentation | Unsloth Inference | Learn how to run your finetuned model with Unsloth's faster inference.
/docs/basics/inference-and-deployment/vllm-guide | Allowed | vLLM Deployment & Inference Guide | Unsloth Documentation | vLLM Deployment & Inference Guide | Guide on saving and deploying LLMs to vLLM for serving LLMs in production
/docs/basics/inference-and-deployment/vllm-guide/lora-hot-swapping-guide | Allowed | LoRA Hot Swapping Guide | Unsloth Documentation | LoRA Hot Swapping Guide |
/docs/basics/inference-and-deployment/vllm-guide/vllm-engine-arguments | Allowed | vLLM Engine Arguments | Unsloth Documentation | vLLM Engine Arguments |
/docs/basics/multi-gpu-training-with-unsloth | Allowed | Multi-GPU Fine-tuning with Unsloth | Unsloth Documentation | rectangle-history Multi-GPU Fine-tuning with Unsloth | Learn how to fine-tune LLMs on multiple GPUs and parallelism with Unsloth.
/docs/basics/multi-gpu-training-with-unsloth/ddp | Allowed | Multi-GPU Fine-tuning with Distributed Data Parallel (DDP) | Unsloth Documentation | Multi-GPU Fine-tuning with Distributed Data Parallel (DDP) | Learn how to use the Unsloth CLI to train on multiple GPUs with Distributed Data Parallel (DDP)!
/docs/basics/text-to-speech-tts-fine-tuning | Allowed | Text-to-Speech (TTS) Fine-tuning Guide | Unsloth Documentation | 🔊 Text-to-Speech (TTS) Fine-tuning Guide | Learn how to to fine-tune TTS & STT voice models with Unsloth.
/docs/basics/tool-calling-guide-for-local-llms | Allowed | Tool Calling Guide for Local LLMs | Unsloth Documentation | screwdriver-wrench Tool Calling Guide for Local LLMs |
/docs/basics/troubleshooting-and-faqs | Allowed | Troubleshooting & FAQs | Unsloth Documentation | ⚠️ Troubleshooting & FAQs | Tips to solve issues, and frequently asked questions.
/docs/basics/troubleshooting-and-faqs/hugging-face-hub-xet-debugging | Allowed | Hugging Face Hub, XET debugging | Unsloth Documentation | Hugging Face Hub, XET debugging | Debugging, troubleshooting stalled, stuck downloads and slow downloads
/docs/basics/unsloth-benchmarks | Allowed | Unsloth Benchmarks | Unsloth Documentation | 📊 Unsloth Benchmarks | Unsloth recorded benchmarks on NVIDIA GPUs.
/docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot | Allowed | Unsloth Dynamic GGUFs on Aider Polyglot | Unsloth Documentation | 🦥 Unsloth Dynamic GGUFs on Aider Polyglot | Performance of Unsloth Dynamic GGUFs on Aider Polyglot Benchmarks
/docs/basics/unsloth-environment-flags | Allowed | Unsloth Environment Flags | Unsloth Documentation | 🛠️ Unsloth Environment Flags | Advanced flags which might be useful if you see breaking finetunes, or you want to turn stuff off.
/docs/basics/vision-fine-tuning | Allowed | Vision Fine-tuning | Unsloth Documentation | 👁️ Vision Fine-tuning | Learn how to fine-tune vision/multimodal LLMs with Unsloth
/docs/blog/3x-faster-training-packing | Allowed | 3x Faster LLM Training with Unsloth Kernels + Packing | Unsloth Documentation | ⚡ 3x Faster LLM Training with Unsloth Kernels + Packing | Learn how Unsloth increases training throughput and eliminates padding waste for fine-tuning.
/docs/blog/500k-context-length-fine-tuning | Allowed | 500K Context Length Fine-tuning | Unsloth Documentation | ruler-combined 500K Context Length Fine-tuning | Learn how to enable >500K token context window fine-tuning with Unsloth.
/docs/blog/comfyui | Allowed | How to Run Diffusion Image GGUFs in ComfyUI | Unsloth Documentation | arrow-pointer How to Run Diffusion Image GGUFs in ComfyUI | Guide for running Unsloth Diffusion GGUF models in ComfyUI.
/docs/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth | Allowed | Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth | Unsloth Documentation | microchip Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth | Learn how to fine-tune LLMs on NVIDIA's Blackwell RTX 50 series and B200 GPUs with our step-by-step guide.
/docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth | Allowed | Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth | Unsloth Documentation | sparkle Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth | Tutorial on how to fine-tune and do reinforcement learning (RL) with OpenAI gpt-oss on NVIDIA DGX Spark.
/docs/blog/how-to-fine-tune-llms-with-unsloth-and-docker | Allowed | How to Fine-tune LLMs with Unsloth & Docker | Unsloth Documentation | docker How to Fine-tune LLMs with Unsloth & Docker | Learn how to fine-tune LLMs or do Reinforcement Learning (RL) with Unsloth's Docker image.
/docs/blog/quantization-aware-training-qat | Allowed | Quantization-Aware Training (QAT) | Unsloth Documentation | down-left-and-up-right-to-center Quantization-Aware Training (QAT) | Quantize models to 4-bit with Unsloth and PyTorch to recover accuracy.
/docs/de | Allowed | Unsloth-Dokumentation | Unsloth Documentation | 🦥 Unsloth-Dokumentation | Unsloth ist ein Open-Source-Framework zum Ausführen und Trainieren von Modellen.
/docs/de/blog/3x-faster-training-packing | Allowed | 3x schnelleres LLM-Training mit Unsloth-Kernels + Packing | Unsloth Documentation | ⚡ 3x schnelleres LLM-Training mit Unsloth-Kernels + Packing | Erfahre, wie Unsloth den Trainingsdurchsatz erhöht und Padding-Verschwendung beim Fine-Tuning eliminiert.
/docs/de/blog/500k-context-length-fine-tuning | Allowed | Fine-Tuning mit 500K Kontextlänge | Unsloth Documentation | ruler-combined Fine-Tuning mit 500K Kontextlänge | Lerne, wie du das Fine-Tuning mit einem Kontextfenster von über 500K Tokens mit Unsloth aktivierst.
/docs/de/blog/comfyui | Allowed | Wie man Diffusion-Image-GGUFs in ComfyUI ausführt | Unsloth Documentation | arrow-pointer Wie man Diffusion-Image-GGUFs in ComfyUI ausführt | Anleitung zum Ausführen von Unsloth-Diffusion-GGUF-Modellen in ComfyUI.
/docs/de/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth | Allowed | Fine-Tuning von LLMs mit Blackwell, RTX 50 Serie & Unsloth | Unsloth Documentation | microchip Fine-Tuning von LLMs mit Blackwell, RTX 50 Serie & Unsloth | Lerne mit unserer Schritt-für-Schritt-Anleitung, wie man LLMs auf NVIDIA Blackwell, der RTX-50-Serie und B200-GPUs feinabstimmt.
/docs/de/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth | Allowed | Fine-Tuning von LLMs mit NVIDIA DGX Spark und Unsloth | Unsloth Documentation | sparkle Fine-Tuning von LLMs mit NVIDIA DGX Spark und Unsloth | Tutorial zum Feinabstimmen und für Reinforcement Learning (RL) mit OpenAI gpt-oss auf NVIDIA DGX Spark.
/docs/de/blog/how-to-fine-tune-llms-with-unsloth-and-docker | Allowed | Wie man LLMs mit Unsloth & Docker feinabstimmt | Unsloth Documentation | docker Wie man LLMs mit Unsloth & Docker feinabstimmt | Lerne, wie man LLMs feinabstimmt oder Reinforcement Learning (RL) mit dem Docker-Image von Unsloth durchführt.
/docs/de/blog/quantization-aware-training-qat | Allowed | Quantization-Aware Training (QAT) | Unsloth Documentation | down-left-and-up-right-to-center Quantization-Aware Training (QAT) | Quantisiere Modelle mit Unsloth und PyTorch auf 4-Bit, um die Genauigkeit wiederherzustellen.
/docs/de/grundlagen/chat-templates | Allowed | Chat-Vorlagen | Unsloth Documentation | 💬 Chat-Vorlagen | Lerne die Grundlagen und Anpassungsoptionen von Chat-Vorlagen kennen, einschließlich Conversational-, ChatML-, ShareGPT-, Alpaca-Formaten und mehr!
/docs/de/grundlagen/claude-code | Allowed | Wie man lokale LLMs mit Claude Code ausführt | Unsloth Documentation | claude Wie man lokale LLMs mit Claude Code ausführt | Anleitung zur Verwendung offener Modelle mit Claude Code auf deinem lokalen Gerät.
/docs/de/grundlagen/codex | Allowed | Wie man lokale LLMs mit OpenAI Codex ausführt | Unsloth Documentation | openai Wie man lokale LLMs mit OpenAI Codex ausführt | Verwende offene Modelle lokal auf deinem Gerät mit OpenAI Codex.
/docs/de/grundlagen/continued-pretraining | Allowed | Fortgesetztes Vortraining | Unsloth Documentation | ♻️ Fortgesetztes Vortraining | Auch bekannt als fortgesetztes Fine-Tuning. Unsloth ermöglicht kontinuierliches Vortraining, damit ein Modell eine neue Sprache lernen kann.
/docs/de/grundlagen/dgx-station | Allowed | Fine-Tuning von LLMs auf NVIDIA DGX Station mit Unsloth | Unsloth Documentation | microchip-ai Fine-Tuning von LLMs auf NVIDIA DGX Station mit Unsloth | Tutorial zur NVIDIA DGX Station zum Feinabstimmen mit Notebooks von Unsloth.
/docs/de/grundlagen/finetuning-from-last-checkpoint | Allowed | Fine-Tuning ab dem letzten Checkpoint | Unsloth Documentation | 🏁 Fine-Tuning ab dem letzten Checkpoint | Checkpointing ermöglicht es dir, deinen Fine-Tuning-Fortschritt zu speichern, damit du pausieren und später fortsetzen kannst.
/docs/de/grundlagen/inference-and-deployment | Allowed | Inferenz & Bereitstellung | Unsloth Documentation | 🖥️ Inferenz & Bereitstellung | Lerne, wie du dein feinabgestimmtes Modell speicherst, damit du es in deiner bevorzugten Inferenz-Engine ausführen kannst.
/docs/de/grundlagen/inference-and-deployment/deploy-llms-phone | Allowed | Wie man LLMs auf deinem iOS- oder Android-Smartphone ausführt und bereitstellt | Unsloth Documentation | 📱 Wie man LLMs auf deinem iOS- oder Android-Smartphone ausführt und bereitstellt | Tutorial zum Feinabstimmen deines eigenen LLM und zur Bereitstellung auf deinem Android- oder iPhone mit ExecuTorch.
/docs/de/grundlagen/inference-and-deployment/llama-server-and-openai-endpoint | Allowed | Leitfaden zur Bereitstellung von llama-server & OpenAI-Endpunkt | Unsloth Documentation | Leitfaden zur Bereitstellung von llama-server & OpenAI-Endpunkt | Bereitstellung über llama-server mit einem OpenAI-kompatiblen Endpunkt
/docs/de/grundlagen/inference-and-deployment/lm-studio | Allowed | Modelle in LM Studio bereitstellen | Unsloth Documentation | Modelle in LM Studio bereitstellen | Speichern von Modellen als GGUF, damit du sie in LM Studio ausführen und bereitstellen kannst
/docs/de/grundlagen/inference-and-deployment/lm-studio/how-to-insta…-lm-studio-cli-in-linux-terminal | Allowed | So installierst du die LM Studio CLI im Linux-Terminal | Unsloth Documentation | 👾 So installierst du die LM Studio CLI im Linux-Terminal | Installationsanleitung für LM Studio CLI ohne UI in einer Terminal-Instanz.
/docs/de/grundlagen/inference-and-deployment/saving-to-gguf | Allowed | Speichern als GGUF | Unsloth Documentation | Speichern als GGUF | Speichern von Modellen in 16-Bit für GGUF, damit du sie für Ollama, Jan AI, Open WebUI und mehr verwenden kannst!
/docs/de/grundlagen/inference-and-deployment/saving-to-gguf/speculative-decoding | Allowed | Spekulatives Decoding | Unsloth Documentation | Spekulatives Decoding | Spekulatives Decoding mit llama-server, llama.cpp, vLLM und mehr für 2x schnellere Inferenz
/docs/de/grundlagen/inference-and-deployment/saving-to-ollama | Allowed | Speichern für Ollama | Unsloth Documentation | Speichern für Ollama |
/docs/de/grundlagen/inference-and-deployment/sglang-guide | Allowed | Leitfaden für SGLang-Bereitstellung & Inferenz | Unsloth Documentation | Leitfaden für SGLang-Bereitstellung & Inferenz | Anleitung zum Speichern und Bereitstellen von LLMs für SGLang zum Einsatz von LLMs in der Produktion
/docs/de/grundlagen/inference-and-deployment/troubleshooting-inference | Allowed | Fehlerbehebung bei der Inferenz | Unsloth Documentation | Fehlerbehebung bei der Inferenz | Wenn du Probleme beim Ausführen oder Speichern deines Modells hast.
/docs/de/grundlagen/inference-and-deployment/unsloth-inference | Allowed | Unsloth-Inferenz | Unsloth Documentation | Unsloth-Inferenz | Lerne, wie du dein feinabgestimmtes Modell mit der schnelleren Inferenz von Unsloth ausführst.
/docs/de/grundlagen/inference-and-deployment/vllm-guide | Allowed | Leitfaden für vLLM-Bereitstellung & Inferenz | Unsloth Documentation | Leitfaden für vLLM-Bereitstellung & Inferenz | Anleitung zum Speichern und Bereitstellen von LLMs für vLLM zum Einsatz von LLMs in der Produktion
/docs/de/grundlagen/inference-and-deployment/vllm-guide/lora-hot-swapping-guide | Allowed | Leitfaden zum Hot-Swapping von LoRA | Unsloth Documentation | Leitfaden zum Hot-Swapping von LoRA |
/docs/de/grundlagen/inference-and-deployment/vllm-guide/vllm-engine-arguments | Allowed | vLLM-Engine-Argumente | Unsloth Documentation | vLLM-Engine-Argumente |
/docs/de/grundlagen/multi-gpu-training-with-unsloth | Allowed | Multi-GPU-Fine-Tuning mit Unsloth | Unsloth Documentation | rectangle-history Multi-GPU-Fine-Tuning mit Unsloth | Lerne, wie man LLMs auf mehreren GPUs und mit Parallelisierung mit Unsloth feinabstimmt.
/docs/de/grundlagen/multi-gpu-training-with-unsloth/ddp | Allowed | Multi-GPU-Fine-Tuning mit Distributed Data Parallel (DDP) | Unsloth Documentation | Multi-GPU-Fine-Tuning mit Distributed Data Parallel (DDP) | Lerne, wie du die Unsloth-CLI nutzt, um mit Distributed Data Parallel (DDP) auf mehreren GPUs zu trainieren!
/docs/de/grundlagen/text-to-speech-tts-fine-tuning | Allowed | Leitfaden zum Fine-Tuning von Text-to-Speech (TTS) | Unsloth Documentation | 🔊 Leitfaden zum Fine-Tuning von Text-to-Speech (TTS) | Lerne, wie du TTS- und STT-Sprachmodelle mit Unsloth feinabstimmst.
/docs/de/grundlagen/tool-calling-guide-for-local-llms | Allowed | Leitfaden zum Tool Calling für lokale LLMs | Unsloth Documentation | screwdriver-wrench Leitfaden zum Tool Calling für lokale LLMs |
/docs/de/grundlagen/troubleshooting-and-faqs | Allowed | Fehlerbehebung & FAQs | Unsloth Documentation | ⚠️ Fehlerbehebung & FAQs | Tipps zur Lösung von Problemen und häufig gestellte Fragen.
/docs/de/grundlagen/troubleshooting-and-faqs/hugging-face-hub-xet-debugging | Allowed | Hugging Face Hub, XET-Debugging | Unsloth Documentation | Hugging Face Hub, XET-Debugging | Debugging und Fehlerbehebung bei hängenden, feststeckenden und langsamen Downloads
/docs/de/grundlagen/unsloth-benchmarks | Allowed | Unsloth-Benchmarks | Unsloth Documentation | 📊 Unsloth-Benchmarks | Von Unsloth aufgezeichnete Benchmarks auf NVIDIA-GPUs.
/docs/de/grundlagen/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot | Allowed | Unsloth Dynamic GGUFs auf Aider Polyglot | Unsloth Documentation | 🦥 Unsloth Dynamic GGUFs auf Aider Polyglot | Leistung der Unsloth Dynamic GGUFs in Aider-Polyglot-Benchmarks
/docs/de/grundlagen/unsloth-environment-flags | Allowed | Unsloth-Umgebungsflags | Unsloth Documentation | 🛠️ Unsloth-Umgebungsflags | Erweiterte Flags, die nützlich sein könnten, wenn Fine-Tunes fehlschlagen oder du etwas deaktivieren möchtest.
/docs/de/grundlagen/vision-fine-tuning | Allowed | Vision-Fine-Tuning | Unsloth Documentation | 👁️ Vision-Fine-Tuning | Lerne, wie du Vision-/Multimodal-LLMs mit Unsloth feinabstimmst
/docs/de/los-gehts/fine-tuning-for-beginners | Allowed | Fine-Tuning für Anfänger | Unsloth Documentation | ⭐ Fine-Tuning für Anfänger |
/docs/de/los-gehts/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me | Allowed | FAQ + Ist Fine-Tuning das Richtige für mich? | Unsloth Documentation | 🤔 FAQ + Ist Fine-Tuning das Richtige für mich? | Wenn du unsicher bist, ob Fine-Tuning das Richtige für dich ist, schau hier nach! Erfahre mehr über Missverständnisse beim Fine-Tuning, wie es sich im Vergleich zu RAG verhält und mehr:
/docs/de/los-gehts/fine-tuning-for-beginners/unsloth-requirements | Allowed | Unsloth-Anforderungen | Unsloth Documentation | 🛠️ Unsloth-Anforderungen | Hier sind die Anforderungen von Unsloth, einschließlich System- und GPU-VRAM-Anforderungen.
/docs/de/los-gehts/fine-tuning-llms-guide | Allowed | Leitfaden zum Fine-Tuning von LLMs | Unsloth Documentation | 🧬 Leitfaden zum Fine-Tuning von LLMs | Lerne alle Grundlagen und Best Practices des Fine-Tunings. Für Anfänger geeignet.
/docs/de/los-gehts/fine-tuning-llms-guide/datasets-guide | Allowed | Datensatz-Leitfaden | Unsloth Documentation | 📈 Datensatz-Leitfaden | Lerne, wie man einen Datensatz für das Fine-Tuning erstellt und vorbereitet.
/docs/de/los-gehts/fine-tuning-llms-guide/lora-hyperparameters-guide | Allowed | Leitfaden zu Hyperparametern für LoRA-Fine-Tuning | Unsloth Documentation | 🧠 Leitfaden zu Hyperparametern für LoRA-Fine-Tuning | Lerne Schritt für Schritt die besten Einstellungen für das Fine-Tuning von LLMs - LoRA-Rang & Alpha, Epochen, Batchgröße + Gradient Accumulation, QLoRA vs. LoRA, Zielmodule und mehr.
/docs/de/los-gehts/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama | Allowed | Tutorial: Wie man Llama-3 feinabstimmt und in Ollama verwendet | Unsloth Documentation | 🦙 Tutorial: Wie man Llama-3 feinabstimmt und in Ollama verwendet | Einsteigerleitfaden zur Erstellung eines personalisierten Assistenten (wie ChatGPT), der lokal auf Ollama läuft
/docs/de/los-gehts/fine-tuning-llms-guide/what-model-should-i-use | Allowed | Welches Modell sollte ich für Fine-Tuning verwenden? | Unsloth Documentation | ❓ Welches Modell sollte ich für Fine-Tuning verwenden? |
/docs/de/los-gehts/install | Allowed | Unsloth-Installation | Unsloth Documentation | 📥 Unsloth-Installation | Lerne, Unsloth lokal oder online zu installieren.
/docs/de/los-gehts/install/amd | Allowed | Anleitung zum Fine-Tuning von LLMs auf AMD-GPUs mit Unsloth | Unsloth Documentation | square-up-right Anleitung zum Fine-Tuning von LLMs auf AMD-GPUs mit Unsloth | Erfahre, wie du große Sprachmodelle (LLMs) auf AMD-GPUs mit Unsloth feinabstimmst.
/docs/de/los-gehts/install/conda-install | Allowed | Conda-Installation | Unsloth Documentation | snake Conda-Installation | Um Unsloth lokal mit Conda zu installieren, befolge die folgenden Schritte:
/docs/de/los-gehts/install/docker | Allowed | Unsloth via Docker installieren | Unsloth Documentation | docker Unsloth via Docker installieren | Unsloth mit unserem offiziellen Docker-Container installieren
/docs/de/los-gehts/install/google-colab | Allowed | Google Colab | Unsloth Documentation | google Google Colab | Um Unsloth auf Google Colab zu installieren und auszuführen, befolge die folgenden Schritte:
/docs/de/los-gehts/install/intel | Allowed | Fine-Tuning von LLMs auf Intel-GPUs mit Unsloth | Unsloth Documentation | info Fine-Tuning von LLMs auf Intel-GPUs mit Unsloth | Erfahre, wie man große Sprachmodelle auf Intel-GPUs trainiert und feinabstimmt.
/docs/de/los-gehts/install/mac | Allowed | Unsloth auf MacOS installieren | Unsloth Documentation | apple Unsloth auf MacOS installieren |
/docs/de/los-gehts/install/pip-install | Allowed | Unsloth via pip und uv installieren | Unsloth Documentation | desktop-arrow-down Unsloth via pip und uv installieren | Um Unsloth lokal über Pip zu installieren, befolge die folgenden Schritte:
/docs/de/los-gehts/install/updating | Allowed | Unsloth aktualisieren | Unsloth Documentation | arrow-rotate-right Unsloth aktualisieren | Um eine alte Version von Unsloth zu aktualisieren oder zu verwenden, befolge die folgenden Schritte:
/docs/de/los-gehts/install/vs-code | Allowed | Wie man LLMs in VS Code mit Unsloth & Colab-GPUs feinabstimmt | Unsloth Documentation | vscode Wie man LLMs in VS Code mit Unsloth & Colab-GPUs feinabstimmt | Anleitung zum direkten Fine-Tuning von Modellen in Visual Studio Code über Unsloth und Google Colab.
/docs/de/los-gehts/install/windows-installation | Allowed | Wie man LLMs unter Windows mit Unsloth feinabstimmt (Schritt-für-Schritt-Anleitung) | Unsloth Documentation | windows Wie man LLMs unter Windows mit Unsloth feinabstimmt (Schritt-für-Schritt-Anleitung) | Sieh dir an, wie man Unsloth unter Windows installiert, um mit dem lokalen Fine-Tuning von LLMs zu beginnen.
/docs/de/los-gehts/reinforcement-learning-rl-guide | Allowed | Leitfaden zu Reinforcement Learning (RL) | Unsloth Documentation | 💡 Leitfaden zu Reinforcement Learning (RL) | Erfahre alles über Reinforcement Learning (RL) und wie du mit Unsloth mithilfe von GRPO dein eigenes DeepSeek-R1-Reasoning-Modell trainierst. Ein vollständiger Leitfaden von Anfänger bis Fortgeschrittene.
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-documentation | Allowed | Erweiterte Dokumentation zu Reinforcement Learning | Unsloth Documentation | 🧩 Erweiterte Dokumentation zu Reinforcement Learning | Erweiterte Dokumentationseinstellungen bei der Verwendung von Unsloth mit GRPO.
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-documentation/fp16-vs-bf16-for-rl | Allowed | FP16 vs. BF16 für RL | Unsloth Documentation | ⁉️ FP16 vs. BF16 für RL | Defeating the Training-Inference Mismatch via FP16 https://arxiv.org/pdf/2510.26788 zeigt, dass die Verwendung von float16 besser ist als bfloat16
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-docu…tion/gspo-reinforcement-learning | Allowed | GSPO Reinforcement Learning | Unsloth Documentation | lightbulb-on GSPO Reinforcement Learning | Trainiere mit GSPO (Group Sequence Policy Optimization) RL in Unsloth.
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-documentation/rl-reward-hacking | Allowed | RL Reward Hacking | Unsloth Documentation | treasure-chest RL Reward Hacking | Erfahre, was Reward Hacking im Reinforcement Learning ist und wie man ihm entgegenwirkt.
/docs/de/los-gehts/reinforcement-learning-rl-guide/fp8-reinforcement-learning | Allowed | FP8 Reinforcement Learning | Unsloth Documentation | 🎱 FP8 Reinforcement Learning | Trainiere Reinforcement Learning (RL) und GRPO mit FP8-Präzision mit Unsloth.
/docs/de/los-gehts/reinforcement-learning-rl-guide/grpo-long-context | Allowed | Reinforcement Learning GRPO mit 7x längerem Kontext | Unsloth Documentation | 🌀 Reinforcement Learning GRPO mit 7x längerem Kontext | Erfahre, wie Unsloth ultra-langes Kontext-Fine-Tuning für RL ermöglicht.
/docs/de/los-gehts/reinforcement-learning-rl-guide/memory-efficient-rl | Allowed | Speichereffizientes RL | Unsloth Documentation | memory Speichereffizientes RL |
/docs/de/los-gehts/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto | Allowed | Training zur Präferenzoptimierung - DPO, ORPO & KTO | Unsloth Documentation | 🏆 Training zur Präferenzoptimierung - DPO, ORPO & KTO | Erfahre mehr über Fine-Tuning zur Präferenzanpassung mit DPO, GRPO, ORPO oder KTO über Unsloth, befolge die folgenden Schritte:
/docs/de/los-gehts/reinforcement-learning-rl-guide/tutorial-train-your-own-reasoning-model-with-grpoAllowedTutorial: Trainiere dein eigenes Reasoning-Modell mit GRPO | Unsloth Documentation⚡Tutorial: Trainiere dein eigenes Reasoning-Modell mit GRPOEinsteigerleitfaden zur Umwandlung eines Modells wie Llama 3.1 (8B) in ein Reasoning-Modell mithilfe von Unsloth und GRPO.
/docs/de/los-gehts/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rlAllowedVision-Reinforcement Learning (VLM RL) | Unsloth Documentation👁️‍🗨️Vision-Reinforcement Learning (VLM RL)Trainiere Vision-/Multimodalmodelle mit GRPO und RL mit Unsloth!
/docs/de/los-gehts/unsloth-model-catalogAllowedUnsloth-Modellkatalog | Unsloth Documentation🔮Unsloth-Modellkatalog
/docs/de/los-gehts/unsloth-notebooksAllowedUnsloth-Notebooks | Unsloth Documentation📒Unsloth-NotebooksFine-Tuning-Notebooks: Entdecke den Unsloth-Katalog.
/docs/de/modelle/glm-5AllowedGLM-5: Anleitung zum lokalen Ausführen | Unsloth DocumentationzGLM-5: Anleitung zum lokalen AusführenFühre das neue GLM-5-Modell von Z.ai auf deinem eigenen lokalen Gerät aus!
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tuneAllowedgpt-oss: Ausführungsanleitung | Unsloth Documentationopenaigpt-oss: AusführungsanleitungFühre die neuen Open-Source-Modelle von OpenAI aus und feinabstimme sie!
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learningAllowedgpt-oss Reinforcement Learning | Unsloth Documentationopenaigpt-oss Reinforcement Learning
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforce…ial-how-to-train-gpt-oss-with-rlAllowedTutorial: Wie man gpt-oss mit RL trainiert | Unsloth Documentationbook-open-readerTutorial: Wie man gpt-oss mit RL trainiertLerne, OpenAI gpt-oss mit GRPO zu trainieren, um 2048 autonom lokal oder auf Colab zu schlagen.
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-trainingAllowedLanges Kontext-Training für gpt-oss | Unsloth DocumentationopenaiLanges Kontext-Training für gpt-oss
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-ossAllowedTutorial: Wie man gpt-oss feinabstimmt | Unsloth DocumentationopenaiTutorial: Wie man gpt-oss feinabstimmtLerne Schritt für Schritt, wie man OpenAI gpt-oss lokal mit Unsloth trainiert.
/docs/de/modelle/minimax-m25AllowedMiniMax-M2.5: Anleitung zum Ausführen | Unsloth DocumentationwaveformMiniMax-M2.5: Anleitung zum AusführenFühre MiniMax-M2.5 lokal auf deinem eigenen Gerät aus!
/docs/de/modelle/nemotron-3AllowedNVIDIA Nemotron 3 Nano - Anleitung zum Ausführen | Unsloth Documentation🧩NVIDIA Nemotron 3 Nano - Anleitung zum AusführenFühre NVIDIA Nemotron 3 Nano lokal auf deinem Gerät aus und feinabstimme es!
/docs/de/modelle/nemotron-3/nemotron-3-superAllowedNVIDIA Nemotron-3-Super: Anleitung zum Ausführen | Unsloth Documentation🧩NVIDIA Nemotron-3-Super: Anleitung zum AusführenFühre NVIDIA Nemotron-3-Super-120B-A12B lokal auf deinem Gerät aus und feinabstimme es!
/docs/de/modelle/qwen3-coder-nextAllowedQwen3-Coder-Next: So führst du es lokal aus | Unsloth Documentation🌠Qwen3-Coder-Next: So führst du es lokal ausAnleitung zum lokalen Ausführen von Qwen3-Coder-Next auf deinem Gerät!
/docs/de/modelle/qwen3-how-to-run-and-fine-tuneAllowedQwen3 - Wie man es ausführt & feinabstimmt | Unsloth Documentation🌠Qwen3 - Wie man es ausführt & feinabstimmtLerne, Qwen3 lokal mit Unsloth + unseren Dynamic 2.0 Quants auszuführen und feinabzustimmen
/docs/de/modelle/qwen3-how-to-run-and-fine-tune/qwen3-2507AllowedQwen3-2507: Anleitung zum lokalen Ausführen | Unsloth Documentation🌠Qwen3-2507: Anleitung zum lokalen AusführenFühre die Thinking- und Instruct-Versionen von Qwen3-30B-A3B-2507 und 235B-A22B lokal auf deinem Gerät aus!
/docs/de/modelle/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tuneAllowedQwen3-VL: Anleitung zum Ausführen | Unsloth Documentation🌠Qwen3-VL: Anleitung zum AusführenLerne, Qwen3-VL lokal mit Unsloth feinabzustimmen und auszuführen.
/docs/de/modelle/qwen3.5/fine-tuneAllowedQwen3.5 Fine-Tuning-Leitfaden | Unsloth Documentationflask-gearQwen3.5 Fine-Tuning-LeitfadenErfahre, wie du Qwen3.5-LLMs mit Unsloth feinabstimmst.
/docs/de/modelle/qwen3.5/gguf-benchmarksAllowedQwen3.5 GGUF-Benchmarks | Unsloth Documentationchart-fftQwen3.5 GGUF-BenchmarksSieh dir an, wie Unsloth Dynamic GGUFs abschneiden + Analyse von Perplexity, KL-Divergenz und MXFP4.
/docs/de/modelle/tutorialsAllowedTutorials zu Large Language Models (LLMs) | Unsloth Documentation🚀Tutorials zu Large Language Models (LLMs)Entdecke die neuesten LLMs und erfahre, wie du Modelle lokal für optimale Leistung mit Unsloth ausführen und feinabstimmen kannst.
/docs/de/modelle/tutorials/deepseek-ocr-2AllowedDeepSeek-OCR 2: Anleitung zum Ausführen & Feinabstimmen | Unsloth Documentation🐳DeepSeek-OCR 2: Anleitung zum Ausführen & FeinabstimmenAnleitung zum lokalen Ausführen und Feinabstimmen von DeepSeek-OCR-2.
/docs/de/modelle/tutorials/deepseek-ocr-how-to-run-and-fine-tuneAllowedDeepSeek-OCR: So führst du es aus & feinabstimmst | Unsloth Documentation🐳DeepSeek-OCR: So führst du es aus & feinabstimmstAnleitung zum lokalen Ausführen und Feinabstimmen von DeepSeek-OCR.
/docs/de/modelle/tutorials/deepseek-r1-0528-how-to-run-locallyAllowedDeepSeek-R1-0528: So führst du es lokal aus | Unsloth Documentation🐋DeepSeek-R1-0528: So führst du es lokal ausEine Anleitung, wie du DeepSeek-R1-0528 einschließlich Qwen3 auf deinem eigenen lokalen Gerät ausführst!
/docs/de/modelle/tutorials/deepseek-r1-how-to-run-locallyAllowedDeepSeek-R1: So führst du es lokal aus | Unsloth Documentation🐋DeepSeek-R1: So führst du es lokal ausEine Anleitung, wie du unsere 1,58-Bit Dynamic Quants für DeepSeek-R1 mit llama.cpp ausführen kannst.
/docs/de/modelle/tutorials/deepseek-v3-0324-how-to-run-locallyAllowedDeepSeek-V3-0324: So führst du es lokal aus | Unsloth Documentation🐳DeepSeek-V3-0324: So führst du es lokal ausWie man DeepSeek-V3-0324 lokal mit unseren Dynamic Quants ausführt, die die Genauigkeit wiederherstellen
/docs/de/modelle/tutorials/devstral-2AllowedDevstral 2 - Anleitung zum Ausführen | Unsloth Documentation📙Devstral 2 - Anleitung zum AusführenAnleitung zum lokalen Ausführen von Mistral-Devstral-2-Modellen: 123B-Instruct-2512 und Small-2-24B-Instruct-2512.
/docs/de/modelle/tutorials/devstral-how-to-run-and-fine-tuneAllowedDevstral: So führst du es aus & feinabstimmst | Unsloth Documentation📙Devstral: So führst du es aus & feinabstimmstFühre Mistral Devstral 1.1 aus und feinabstimme es, einschließlich Small-2507 und 2505.
/docs/de/modelle/tutorials/functiongemmaAllowedFunctionGemma: So führst du es aus & feinabstimmst | Unsloth DocumentationgoogleFunctionGemma: So führst du es aus & feinabstimmstLerne, FunctionGemma lokal auf deinem Gerät und Smartphone auszuführen und feinabzustimmen.
/docs/de/modelle/tutorials/gemma-3-how-to-run-and-fine-tuneAllowedGemma 3 - Anleitung zum Ausführen | Unsloth DocumentationgoogleGemma 3 - Anleitung zum AusführenWie man Gemma 3 effektiv mit unseren GGUFs auf llama.cpp, Ollama, Open WebUI ausführt und mit Unsloth feinabstimmt!
/docs/de/modelle/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tuneAllowedGemma 3n: So führst du es aus & feinabstimmst | Unsloth DocumentationgoogleGemma 3n: So führst du es aus & feinabstimmstFühre Googles neues Gemma 3n lokal mit Dynamic GGUFs auf llama.cpp, Ollama, Open WebUI aus und feinabstimme es mit Unsloth!
/docs/de/modelle/tutorials/grok-2AllowedGrok 2 | Unsloth Documentationsquare-x-twitterGrok 2Führe xAIs Grok-2-Modell lokal aus!
/docs/de/modelle/tutorials/how-to-run-llms-with-dockerAllowedWie man lokale LLMs mit Docker ausführt: Schritt-für-Schritt-Anleitung | Unsloth DocumentationdockerWie man lokale LLMs mit Docker ausführt: Schritt-für-Schritt-AnleitungLerne, wie man Large Language Models (LLMs) mit Docker & Unsloth auf deinem lokalen Gerät ausführt.
/docs/de/modelle/tutorials/kimi-k2-thinking-how-to-run-locallyAllowedKimi K2 Thinking: Anleitung zum lokalen Ausführen | Unsloth Documentation🌙Kimi K2 Thinking: Anleitung zum lokalen AusführenAnleitung zum Ausführen von Kimi-K2-Thinking und Kimi-K2 auf deinem eigenen lokalen Gerät!
/docs/de/modelle/tutorials/llama-4-how-to-run-and-fine-tuneAllowedLlama 4: So führst du es aus & feinabstimmst | Unsloth Documentation🦙Llama 4: So führst du es aus & feinabstimmstWie man Llama 4 lokal mit unseren Dynamic GGUFs ausführt, die im Vergleich zur Standardquantisierung die Genauigkeit wiederherstellen.
/docs/de/modelle/tutorials/magistral-how-to-run-and-fine-tuneAllowedMagistral: So führst du es aus & feinabstimmst | Unsloth Documentation💥Magistral: So führst du es aus & feinabstimmstLerne Magistral kennen - die neuen Reasoning-Modelle von Mistral.
/docs/de/modelle/tutorials/ministral-3AllowedMinistral 3 - Anleitung zum Ausführen | Unsloth Documentation🐱Ministral 3 - Anleitung zum AusführenAnleitung für Mistral-Modelle der Reihe Ministral 3, zum lokalen Ausführen oder Feinabstimmen auf deinem Gerät
/docs/de/modelle/tutorials/phi-4-reasoning-how-to-run-and-fine-tuneAllowedPhi-4 Reasoning: So führst du es aus & feinabstimmst | Unsloth DocumentationwindowsPhi-4 Reasoning: So führst du es aus & feinabstimmstLerne, Phi-4-Reasoning-Modelle lokal mit Unsloth + unseren Dynamic 2.0 Quants auszuführen und feinabzustimmen
/docs/de/modelle/tutorials/qwen-image-2512AllowedWie man Qwen-Image-2512 lokal in ComfyUI ausführt | Unsloth Documentation💟Wie man Qwen-Image-2512 lokal in ComfyUI ausführtSchritt-für-Schritt-Tutorial zum Ausführen von Qwen-Image-2512 auf deinem lokalen Gerät mit ComfyUI.
/docs/de/modelle/tutorials/qwen3-coder-how-to-run-locallyAllowedQwen3-Coder: So führst du es lokal aus | Unsloth Documentation🌠Qwen3-Coder: So führst du es lokal ausFühre Qwen3-Coder-30B-A3B-Instruct und 480B-A35B lokal mit den Dynamic Quants von Unsloth aus.
/docs/de/modelle/tutorials/qwen3-nextAllowedQwen3-Next: Anleitung zum lokalen Ausführen | Unsloth Documentation🌠Qwen3-Next: Anleitung zum lokalen AusführenFühre die Versionen Qwen3-Next-80B-A3B-Instruct und Thinking lokal auf deinem Gerät aus!
/docs/de/modelle/tutorials/qwq-32b-how-to-run-effectivelyAllowedQwQ-32B: Wie man es effektiv ausführt | Unsloth Documentation🌠QwQ-32B: Wie man es effektiv ausführtWie man QwQ-32B mit unseren Fehlerbehebungen und ohne endlose Generierungen + GGUFs effektiv ausführt.
/docs/de/neu/embedding-finetuningAllowedLeitfaden zum Fine-Tuning von Embedding-Modellen mit Unsloth | Unsloth Documentation🔎Leitfaden zum Fine-Tuning von Embedding-Modellen mit UnslothErfahre, wie du Embedding-Modelle ganz einfach mit Unsloth feinabstimmen kannst.
/docs/de/neu/faster-moeAllowedMoE-Modelle mit Unsloth 12x schneller feinabstimmen | Unsloth Documentation💎MoE-Modelle mit Unsloth 12x schneller feinabstimmenTrainiere MoE-LLMs lokal mit dem Unsloth-Leitfaden.
/docs/de/neu/studioAllowedVorstellung von Unsloth Studio | Unsloth Documentation🦥Vorstellung von Unsloth StudioFühre und trainiere KI-Modelle lokal mit Unsloth Studio.
/docs/de/neu/studio/chatAllowedWie man Modelle mit Unsloth Studio ausführt | Unsloth Documentationcomment-dotsWie man Modelle mit Unsloth Studio ausführtFühre KI-Modelle, LLMs und GGUFs lokal mit Unsloth Studio aus.
/docs/de/neu/studio/data-recipeAllowedUnsloth Data Recipes | Unsloth Documentationhat-chefUnsloth Data RecipesLerne, wie man Datensätze mit den Data Recipes von Unsloth Studio erstellt, aufbaut und bearbeitet.
/docs/de/neu/studio/exportAllowedModelle mit Unsloth Studio exportieren | Unsloth Documentationbox-isometricModelle mit Unsloth Studio exportierenErfahre, wie du deine Safetensor- oder LoRA-Modelldateien in GGUF oder andere Formate exportierst.
/docs/de/neu/studio/installAllowedInstallation von Unsloth Studio | Unsloth Documentationarrow-down-to-squareInstallation von Unsloth StudioErfahre, wie du Unsloth Studio auf deinem lokalen Gerät installierst.
/docs/de/neu/studio/startAllowedErste Schritte mit Unsloth Studio | Unsloth DocumentationboltErste Schritte mit Unsloth StudioEin Leitfaden für den Einstieg in das Fine-Tuning-Studio, Datenrezepte, den Modellexport und den Chat.
/docs/frAllowedUnsloth Documentation | Unsloth Documentation🦥Unsloth DocumentationUnsloth is an open-source framework for running and training models.
/docs/fr/blog/3x-faster-training-packingAllowed3x Faster LLM Training with Unsloth Kernels + Packing | Unsloth Documentation⚡3x Faster LLM Training with Unsloth Kernels + PackingLearn how Unsloth increases training throughput and eliminates padding waste for fine-tuning.
/docs/fr/blog/500k-context-length-fine-tuningAllowed500K Context Length Fine-tuning | Unsloth Documentationruler-combined500K Context Length Fine-tuningLearn how to enable fine-tuning with a context window of over 500K tokens with Unsloth.
/docs/fr/blog/comfyuiAllowedHow to Run Diffusion Image GGUFs in ComfyUI | Unsloth Documentationarrow-pointerHow to Run Diffusion Image GGUFs in ComfyUIGuide to running Unsloth Diffusion GGUF models in ComfyUI.
/docs/fr/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unslothAllowedFine-tuning LLMs with Blackwell, RTX 50 Series and Unsloth | Unsloth DocumentationmicrochipFine-tuning LLMs with Blackwell, RTX 50 Series and UnslothLearn how to fine-tune LLMs on NVIDIA's Blackwell RTX 50 series and B200 GPUs with our step-by-step guide.
/docs/fr/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unslothAllowedFine-tuning LLMs with NVIDIA DGX Spark and Unsloth | Unsloth DocumentationsparkleFine-tuning LLMs with NVIDIA DGX Spark and UnslothTutorial on how to fine-tune and do reinforcement learning (RL) with OpenAI gpt-oss on NVIDIA DGX Spark.
/docs/fr/blog/how-to-fine-tune-llms-with-unsloth-and-dockerAllowedHow to Fine-tune LLMs with Unsloth and Docker | Unsloth DocumentationdockerHow to Fine-tune LLMs with Unsloth and DockerLearn how to fine-tune LLMs or do reinforcement learning (RL) with Unsloth's Docker image.
/docs/fr/blog/quantization-aware-training-qatAllowedQuantization-Aware Training (QAT) | Unsloth Documentationdown-left-and-up-right-to-centerQuantization-Aware Training (QAT)Quantize models to 4-bit with Unsloth and PyTorch to recover accuracy.
/docs/fr/commencer/fine-tuning-for-beginnersAllowedFine-tuning for Beginners | Unsloth Documentation⭐Fine-tuning for Beginners
/docs/fr/commencer/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-meAllowedFAQ + Is Fine-tuning Right for Me? | Unsloth Documentation🤔FAQ + Is Fine-tuning Right for Me?If you're unsure whether fine-tuning is right for you, look here! Learn about fine-tuning misconceptions, how it compares with RAG, and more:
/docs/fr/commencer/fine-tuning-for-beginners/unsloth-requirementsAllowedUnsloth Requirements | Unsloth Documentation🛠️Unsloth RequirementsHere are Unsloth's requirements, including system requirements and GPU VRAM.
/docs/fr/commencer/fine-tuning-llms-guideAllowedLLM Fine-tuning Guide | Unsloth Documentation🧬LLM Fine-tuning GuideLearn all the fundamentals and best practices of fine-tuning. Beginner-friendly.
/docs/fr/commencer/fine-tuning-llms-guide/datasets-guideAllowedDatasets Guide | Unsloth Documentation📈Datasets GuideLearn how to create and prepare a dataset for fine-tuning.
/docs/fr/commencer/fine-tuning-llms-guide/lora-hyperparameters-guideAllowedLoRA Fine-tuning Hyperparameters Guide | Unsloth Documentation🧠LoRA Fine-tuning Hyperparameters GuideLearn step by step the best LLM fine-tuning settings: LoRA rank and alpha, epochs, batch size + gradient accumulation, QLoRA vs LoRA, target modules, and more.
/docs/fr/commencer/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollamaAllowedTutorial: How to Fine-tune Llama-3 and Use It in Ollama | Unsloth Documentation🦙Tutorial: How to Fine-tune Llama-3 and Use It in OllamaBeginner's guide to creating a customized personal assistant (like ChatGPT) to run locally on Ollama
/docs/fr/commencer/fine-tuning-llms-guide/what-model-should-i-useAllowedWhat Model Should I Use for Fine-tuning? | Unsloth Documentation❓What Model Should I Use for Fine-tuning?
/docs/fr/commencer/installAllowedInstalling Unsloth | Unsloth Documentation📥Installing UnslothLearn how to install Unsloth locally or online.
/docs/fr/commencer/install/amdAllowedGuide to Fine-tuning LLMs on AMD GPUs with Unsloth | Unsloth Documentationsquare-up-rightGuide to Fine-tuning LLMs on AMD GPUs with UnslothLearn how to fine-tune large language models (LLMs) on AMD GPUs with Unsloth.
/docs/fr/commencer/install/conda-installAllowedInstallation via Conda | Unsloth DocumentationsnakeInstallation via CondaTo install Unsloth locally on Conda, follow the steps below:
/docs/fr/commencer/install/dockerAllowedInstalling Unsloth via Docker | Unsloth DocumentationdockerInstalling Unsloth via DockerInstall Unsloth using our official Docker container
/docs/fr/commencer/install/google-colabAllowedGoogle Colab | Unsloth DocumentationgoogleGoogle ColabTo install and run Unsloth on Google Colab, follow the steps below:
/docs/fr/commencer/install/intelAllowedFine-tuning LLMs on Intel GPUs with Unsloth | Unsloth DocumentationinfoFine-tuning LLMs on Intel GPUs with UnslothLearn how to train and fine-tune large language models on Intel GPUs.
/docs/fr/commencer/install/macAllowedInstalling Unsloth on macOS | Unsloth DocumentationappleInstalling Unsloth on macOS
/docs/fr/commencer/install/pip-installAllowedInstalling Unsloth via pip and uv | Unsloth Documentationdesktop-arrow-downInstalling Unsloth via pip and uvTo install Unsloth locally via pip, follow the steps below:
/docs/fr/commencer/install/updatingAllowedUpdating Unsloth | Unsloth Documentationarrow-rotate-rightUpdating UnslothTo update or use an older version of Unsloth, follow the steps below:
/docs/fr/commencer/install/vs-codeAllowedHow to Fine-tune LLMs in VS Code with Unsloth and Colab GPUs | Unsloth DocumentationvscodeHow to Fine-tune LLMs in VS Code with Unsloth and Colab GPUsGuide to fine-tuning models directly in Visual Studio Code via Unsloth and Google Colab.
/docs/fr/commencer/install/windows-installationAllowedHow to Fine-tune LLMs on Windows with Unsloth (Step-by-Step Guide) | Unsloth DocumentationwindowsHow to Fine-tune LLMs on Windows with Unsloth (Step-by-Step Guide)See how to install Unsloth on Windows to start fine-tuning LLMs locally.
/docs/fr/commencer/reinforcement-learning-rl-guideAllowedReinforcement Learning (RL) Guide | Unsloth Documentation💡Reinforcement Learning (RL) GuideLearn all about reinforcement learning (RL) and how to train your own DeepSeek-R1 reasoning model with Unsloth using GRPO. A complete guide from beginner to advanced.
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-documentationAllowedAdvanced Reinforcement Learning Documentation | Unsloth Documentation🧩Advanced Reinforcement Learning DocumentationAdvanced documentation settings when using Unsloth with GRPO.
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-documentation/fp16-vs-bf16-for-rlAllowedFP16 vs BF16 for RL | Unsloth Documentation⁉️FP16 vs BF16 for RLThe paper "Defeating the Training-Inference Mismatch via FP16" https://arxiv.org/pdf/2510.26788 shows why using float16 is better than bfloat16
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-docu…tion/gspo-reinforcement-learningAllowedGSPO Reinforcement Learning | Unsloth Documentationlightbulb-onGSPO Reinforcement LearningTrain with GSPO (Group Sequence Policy Optimization) RL in Unsloth.
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-documentation/rl-reward-hackingAllowedRL Reward Hacking | Unsloth Documentationtreasure-chestRL Reward HackingLearn what reward hacking in reinforcement learning is and how to counter it.
/docs/fr/commencer/reinforcement-learning-rl-guide/fp8-reinforcement-learningAllowedFP8 Reinforcement Learning | Unsloth Documentation🎱FP8 Reinforcement LearningTrain reinforcement learning (RL) and GRPO in FP8 precision with Unsloth.
/docs/fr/commencer/reinforcement-learning-rl-guide/grpo-long-contextAllowedGRPO Reinforcement Learning with 7x Longer Context | Unsloth Documentation🌀GRPO Reinforcement Learning with 7x Longer ContextLearn how Unsloth enables ultra-long-context RL fine-tuning.
/docs/fr/commencer/reinforcement-learning-rl-guide/memory-efficient-rlAllowedMemory-Efficient RL | Unsloth DocumentationmemoryMemory-Efficient RL
/docs/fr/commencer/reinforcement-learning-rl-guide/preference-dpo-orpo-and-ktoAllowedPreference Optimization Training - DPO, ORPO and KTO | Unsloth Documentation🏆Preference Optimization Training - DPO, ORPO and KTOLearn about preference-alignment fine-tuning with DPO, GRPO, ORPO or KTO via Unsloth, follow the steps below:
/docs/fr/commencer/reinforcement-learning-rl-guide/tutorial-train-your-own-reasoning-model-with-grpoAllowedTutorial: Train Your Own Reasoning Model with GRPO | Unsloth Documentation⚡Tutorial: Train Your Own Reasoning Model with GRPOBeginner's guide to turning a model like Llama 3.1 (8B) into a reasoning model using Unsloth and GRPO.
/docs/fr/commencer/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rlAllowedVision Reinforcement Learning (VLM RL) | Unsloth Documentation👁️‍🗨️Vision Reinforcement Learning (VLM RL)Train vision/multimodal models via GRPO and RL with Unsloth!
/docs/fr/commencer/unsloth-model-catalogAllowedUnsloth Model Catalog | Unsloth Documentation🔮Unsloth Model Catalog
/docs/fr/commencer/unsloth-notebooksAllowedUnsloth Notebooks | Unsloth Documentation📒Unsloth NotebooksFine-tuning notebooks: explore the Unsloth catalog.
/docs/fr/modeles/glm-5AllowedGLM-5: How to Run Locally | Unsloth DocumentationzGLM-5: How to Run LocallyRun Z.ai's new GLM-5 model on your own local device!
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tuneAllowedgpt-oss: How to Run Guide | Unsloth Documentationopenaigpt-oss: How to Run GuideRun and fine-tune OpenAI's new open-source models!
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learningAllowedgpt-oss Reinforcement Learning | Unsloth Documentationopenaigpt-oss Reinforcement Learning
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforce…ial-how-to-train-gpt-oss-with-rlAllowedTutorial: How to Train gpt-oss with RL | Unsloth Documentationbook-open-readerTutorial: How to Train gpt-oss with RLLearn to train OpenAI gpt-oss with GRPO to beat 2048 autonomously, locally or on Colab.
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-trainingAllowedLong-Context gpt-oss Training | Unsloth DocumentationopenaiLong-Context gpt-oss Training
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-ossAllowedTutorial: How to Fine-tune gpt-oss | Unsloth DocumentationopenaiTutorial: How to Fine-tune gpt-ossLearn step by step how to train OpenAI gpt-oss locally with Unsloth.
/docs/fr/modeles/minimax-m25AllowedMiniMax-M2.5: How to Run Guide | Unsloth DocumentationwaveformMiniMax-M2.5: How to Run GuideRun MiniMax-M2.5 locally on your own device!
You have reached the hard limit of 200 rows, a protection against very large output or memory exhaustion. You can change this with --rows-limit.
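The notice above refers to the crawler's own `--rows-limit` option. As a minimal sketch of how a re-crawl with a higher limit might look (the `./crawler` binary name and `--url` option follow the SiteOne-style CLI this report resembles; treat them as assumptions and check your crawler's `--help`):

```shell
# Hypothetical re-run: raise the per-table row cap so report tables
# are not truncated at the default hard limit of 200 rows.
./crawler --url=https://unsloth.ai/ --rows-limit=1000
```

The limit only affects how many rows each report table prints, not how many URLs are crawled.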

OpenGraph metadata

Found 200 row(s).
URL | OG Title | OG Description | OG Image | Twitter Title | Twitter Description | Twitter Image
/docsUnsloth Docs | Unsloth DocumentationUnsloth is an open-source framework for running and training models./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Unsloth Docs | Unsloth DocumentationUnsloth is an open-source framework for running and training models./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/basics/chat-templatesChat Templates | Unsloth DocumentationLearn the fundamentals and customization options of chat templates, including Conversational, ChatML, ShareGPT, Alpaca formats, and more!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Chat Templates | Unsloth DocumentationLearn the fundamentals and customization options of chat templates, including Conversational, ChatML, ShareGPT, Alpaca formats, and more!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/basics/claude-codeHow to Run Local LLMs with Claude Code | Unsloth DocumentationGuide to use open models with Claude Code on your local device./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2How to Run Local LLMs with Claude Code | Unsloth DocumentationGuide to use open models with Claude Code on your local device./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/basics/codexHow to Run Local LLMs with OpenAI Codex | Unsloth DocumentationUse open models with OpenAI Codex on your device locally./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2How to Run Local LLMs with OpenAI Codex | Unsloth DocumentationUse open models with OpenAI Codex on your device locally./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/basics/continued-pretrainingContinued Pretraining | Unsloth DocumentationAKA as Continued Finetuning. Unsloth allows you to continually pretrain so a model can learn a new language./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Continued Pretraining | Unsloth DocumentationAKA as Continued Finetuning. Unsloth allows you to continually pretrain so a model can learn a new language./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/basics/dgx-stationFine-Tuning LLMs on NVIDIA DGX Station with Unsloth | Unsloth DocumentationNVIDIA DGX Station tutorial on how to fine-tune with notebooks from Unsloth./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Fine-Tuning LLMs on NVIDIA DGX Station with Unsloth | Unsloth DocumentationNVIDIA DGX Station tutorial on how to fine-tune with notebooks from Unsloth./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/basics/finetuning-from-last-checkpointFinetuning from Last Checkpoint | Unsloth DocumentationCheckpointing allows you to save your finetuning progress so you can pause it and then continue./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Finetuning from Last Checkpoint | Unsloth DocumentationCheckpointing allows you to save your finetuning progress so you can pause it and then continue./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/basics/inference-and-deployment
    Title: Inference & Deployment | Unsloth Documentation
    Description: Learn how to save your finetuned model so you can run it in your favorite inference engine.
/docs/basics/inference-and-deployment/deploy-llms-phone
    Title: How to Run and Deploy LLMs on your iOS or Android Phone | Unsloth Documentation
    Description: Tutorial for fine-tuning your own LLM and deploying it on your Android or iPhone with ExecuTorch.
/docs/basics/inference-and-deployment/llama-server-and-openai-endpoint
    Title: llama-server & OpenAI endpoint Deployment Guide | Unsloth Documentation
    Description: Deploying via llama-server with an OpenAI-compatible endpoint.
/docs/basics/inference-and-deployment/lm-studio
    Title: Deploying models to LM Studio | Unsloth Documentation
    Description: Saving models to GGUF so you can run and deploy them to LM Studio.
/docs/basics/inference-and-deployment/lm-studio/how-to-install-lm-studio-cli-in-linux-terminal
    Title: How to install LM Studio CLI in Linux Terminal | Unsloth Documentation
    Description: LM Studio CLI installation guide without a UI in a terminal instance.
/docs/basics/inference-and-deployment/saving-to-gguf
    Title: Saving to GGUF | Unsloth Documentation
    Description: Saving models to 16bit for GGUF so you can use them with Ollama, Jan AI, Open WebUI and more!
/docs/basics/inference-and-deployment/saving-to-gguf/speculative-decoding
    Title: Speculative Decoding | Unsloth Documentation
    Description: Speculative decoding with llama-server, llama.cpp, vLLM and more for 2x faster inference.
/docs/basics/inference-and-deployment/saving-to-ollama
    Title: Saving to Ollama | Unsloth Documentation
    Description: (none)
/docs/basics/inference-and-deployment/sglang-guide
    Title: SGLang Deployment & Inference Guide | Unsloth Documentation
    Description: Guide on saving and deploying LLMs to SGLang for serving LLMs in production.
/docs/basics/inference-and-deployment/troubleshooting-inference
    Title: Troubleshooting Inference | Unsloth Documentation
    Description: If you're experiencing issues when running or saving your model.
/docs/basics/inference-and-deployment/unsloth-inference
    Title: Unsloth Inference | Unsloth Documentation
    Description: Learn how to run your finetuned model with Unsloth's faster inference.
/docs/basics/inference-and-deployment/vllm-guide
    Title: vLLM Deployment & Inference Guide | Unsloth Documentation
    Description: Guide on saving and deploying LLMs to vLLM for serving LLMs in production.
/docs/basics/inference-and-deployment/vllm-guide/lora-hot-swapping-guide
    Title: LoRA Hot Swapping Guide | Unsloth Documentation
    Description: (none)
/docs/basics/inference-and-deployment/vllm-guide/vllm-engine-arguments
    Title: vLLM Engine Arguments | Unsloth Documentation
    Description: (none)
/docs/basics/multi-gpu-training-with-unsloth
    Title: Multi-GPU Fine-tuning with Unsloth | Unsloth Documentation
    Description: Learn how to fine-tune LLMs on multiple GPUs and with parallelism using Unsloth.
/docs/basics/multi-gpu-training-with-unsloth/ddp
    Title: Multi-GPU Fine-tuning with Distributed Data Parallel (DDP) | Unsloth Documentation
    Description: Learn how to use the Unsloth CLI to train on multiple GPUs with Distributed Data Parallel (DDP)!
/docs/basics/text-to-speech-tts-fine-tuning
    Title: Text-to-Speech (TTS) Fine-tuning Guide | Unsloth Documentation
    Description: Learn how to fine-tune TTS & STT voice models with Unsloth.
/docs/basics/tool-calling-guide-for-local-llms
    Title: Tool Calling Guide for Local LLMs | Unsloth Documentation
    Description: (none)
/docs/basics/troubleshooting-and-faqs
    Title: Troubleshooting & FAQs | Unsloth Documentation
    Description: Tips to solve issues, and frequently asked questions.
/docs/basics/troubleshooting-and-faqs/hugging-face-hub-xet-debugging
    Title: Hugging Face Hub, XET debugging | Unsloth Documentation
    Description: Debugging and troubleshooting stalled, stuck, and slow downloads.
/docs/basics/unsloth-benchmarks
    Title: Unsloth Benchmarks | Unsloth Documentation
    Description: Unsloth recorded benchmarks on NVIDIA GPUs.
/docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot
    Title: Unsloth Dynamic GGUFs on Aider Polyglot | Unsloth Documentation
    Description: Performance of Unsloth Dynamic GGUFs on Aider Polyglot benchmarks.
/docs/basics/unsloth-environment-flags
    Title: Unsloth Environment Flags | Unsloth Documentation
    Description: Advanced flags which might be useful if you see breaking finetunes, or you want to turn features off.
/docs/basics/vision-fine-tuning
    Title: Vision Fine-tuning | Unsloth Documentation
    Description: Learn how to fine-tune vision/multimodal LLMs with Unsloth.
/docs/blog/3x-faster-training-packing
    Title: 3x Faster LLM Training with Unsloth Kernels + Packing | Unsloth Documentation
    Description: Learn how Unsloth increases training throughput and eliminates padding waste for fine-tuning.
/docs/blog/500k-context-length-fine-tuning
    Title: 500K Context Length Fine-tuning | Unsloth Documentation
    Description: Learn how to enable >500K token context window fine-tuning with Unsloth.
/docs/blog/comfyui
    Title: How to Run Diffusion Image GGUFs in ComfyUI | Unsloth Documentation
    Description: Guide for running Unsloth Diffusion GGUF models in ComfyUI.
/docs/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
    Title: Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth | Unsloth Documentation
    Description: Learn how to fine-tune LLMs on NVIDIA's Blackwell RTX 50 series and B200 GPUs with our step-by-step guide.
/docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
    Title: Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth | Unsloth Documentation
    Description: Tutorial on how to fine-tune and do reinforcement learning (RL) with OpenAI gpt-oss on NVIDIA DGX Spark.
/docs/blog/how-to-fine-tune-llms-with-unsloth-and-docker
    Title: How to Fine-tune LLMs with Unsloth & Docker | Unsloth Documentation
    Description: Learn how to fine-tune LLMs or do Reinforcement Learning (RL) with Unsloth's Docker image.
/docs/blog/quantization-aware-training-qat
    Title: Quantization-Aware Training (QAT) | Unsloth Documentation
    Description: Quantize models to 4-bit with Unsloth and PyTorch to recover accuracy.
/docs/deUnsloth-Dokumentation | Unsloth DocumentationUnsloth ist ein Open-Source-Framework zum Ausführen und Trainieren von Modellen./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Unsloth-Dokumentation | Unsloth DocumentationUnsloth ist ein Open-Source-Framework zum Ausführen und Trainieren von Modellen./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/blog/3x-faster-training-packing3x schnelleres LLM-Training mit Unsloth-Kernels + Packing | Unsloth DocumentationErfahre, wie Unsloth den Trainingsdurchsatz erhöht und Padding-Verschwendung beim Fine-Tuning eliminiert./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=23x schnelleres LLM-Training mit Unsloth-Kernels + Packing | Unsloth DocumentationErfahre, wie Unsloth den Trainingsdurchsatz erhöht und Padding-Verschwendung beim Fine-Tuning eliminiert./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/blog/500k-context-length-fine-tuningFine-Tuning mit 500K Kontextlänge | Unsloth DocumentationLerne, wie du das Fine-Tuning mit einem Kontextfenster von über 500K Tokens mit Unsloth aktivierst./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Fine-Tuning mit 500K Kontextlänge | Unsloth DocumentationLerne, wie du das Fine-Tuning mit einem Kontextfenster von über 500K Tokens mit Unsloth aktivierst./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/blog/comfyuiWie man Diffusion-Image-GGUFs in ComfyUI ausführt | Unsloth DocumentationAnleitung zum Ausführen von Unsloth-Diffusion-GGUF-Modellen in ComfyUI./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Wie man Diffusion-Image-GGUFs in ComfyUI ausführt | Unsloth DocumentationAnleitung zum Ausführen von Unsloth-Diffusion-GGUF-Modellen in ComfyUI./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unslothFine-Tuning von LLMs mit Blackwell, RTX 50 Serie & Unsloth | Unsloth DocumentationLerne mit unserer Schritt-für-Schritt-Anleitung, wie man LLMs auf NVIDIA Blackwell, der RTX-50-Serie und B200-GPUs feinabstimmt./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Fine-Tuning von LLMs mit Blackwell, RTX 50 Serie & Unsloth | Unsloth DocumentationLerne mit unserer Schritt-für-Schritt-Anleitung, wie man LLMs auf NVIDIA Blackwell, der RTX-50-Serie und B200-GPUs feinabstimmt./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unslothFine-Tuning von LLMs mit NVIDIA DGX Spark und Unsloth | Unsloth DocumentationTutorial zum Feinabstimmen und für Reinforcement Learning (RL) mit OpenAI gpt-oss auf NVIDIA DGX Spark./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Fine-Tuning von LLMs mit NVIDIA DGX Spark und Unsloth | Unsloth DocumentationTutorial zum Feinabstimmen und für Reinforcement Learning (RL) mit OpenAI gpt-oss auf NVIDIA DGX Spark./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/blog/how-to-fine-tune-llms-with-unsloth-and-dockerWie man LLMs mit Unsloth & Docker feinabstimmt | Unsloth DocumentationLerne, wie man LLMs feinabstimmt oder Reinforcement Learning (RL) mit dem Docker-Image von Unsloth durchführt./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Wie man LLMs mit Unsloth & Docker feinabstimmt | Unsloth DocumentationLerne, wie man LLMs feinabstimmt oder Reinforcement Learning (RL) mit dem Docker-Image von Unsloth durchführt./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/blog/quantization-aware-training-qatQuantization-Aware Training (QAT) | Unsloth DocumentationQuantisiere Modelle mit Unsloth und PyTorch auf 4-Bit, um die Genauigkeit wiederherzustellen./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Quantization-Aware Training (QAT) | Unsloth DocumentationQuantisiere Modelle mit Unsloth und PyTorch auf 4-Bit, um die Genauigkeit wiederherzustellen./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/chat-templatesChat-Vorlagen | Unsloth DocumentationLerne die Grundlagen und Anpassungsoptionen von Chat-Vorlagen kennen, einschließlich Conversational-, ChatML-, ShareGPT-, Alpaca-Formaten und mehr!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Chat-Vorlagen | Unsloth DocumentationLerne die Grundlagen und Anpassungsoptionen von Chat-Vorlagen kennen, einschließlich Conversational-, ChatML-, ShareGPT-, Alpaca-Formaten und mehr!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/claude-codeWie man lokale LLMs mit Claude Code ausführt | Unsloth DocumentationAnleitung zur Verwendung offener Modelle mit Claude Code auf deinem lokalen Gerät./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Wie man lokale LLMs mit Claude Code ausführt | Unsloth DocumentationAnleitung zur Verwendung offener Modelle mit Claude Code auf deinem lokalen Gerät./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/codexWie man lokale LLMs mit OpenAI Codex ausführt | Unsloth DocumentationVerwende offene Modelle lokal auf deinem Gerät mit OpenAI Codex./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Wie man lokale LLMs mit OpenAI Codex ausführt | Unsloth DocumentationVerwende offene Modelle lokal auf deinem Gerät mit OpenAI Codex./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/continued-pretrainingFortgesetztes Vortraining | Unsloth DocumentationAuch bekannt als fortgesetztes Fine-Tuning. Unsloth ermöglicht kontinuierliches Vortraining, damit ein Modell eine neue Sprache lernen kann./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Fortgesetztes Vortraining | Unsloth DocumentationAuch bekannt als fortgesetztes Fine-Tuning. Unsloth ermöglicht kontinuierliches Vortraining, damit ein Modell eine neue Sprache lernen kann./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/dgx-stationFine-Tuning von LLMs auf NVIDIA DGX Station mit Unsloth | Unsloth DocumentationTutorial zur NVIDIA DGX Station zum Feinabstimmen mit Notebooks von Unsloth./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Fine-Tuning von LLMs auf NVIDIA DGX Station mit Unsloth | Unsloth DocumentationTutorial zur NVIDIA DGX Station zum Feinabstimmen mit Notebooks von Unsloth./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/finetuning-from-last-checkpointFine-Tuning ab dem letzten Checkpoint | Unsloth DocumentationCheckpointing ermöglicht es dir, deinen Fine-Tuning-Fortschritt zu speichern, damit du pausieren und später fortsetzen kannst./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Fine-Tuning ab dem letzten Checkpoint | Unsloth DocumentationCheckpointing ermöglicht es dir, deinen Fine-Tuning-Fortschritt zu speichern, damit du pausieren und später fortsetzen kannst./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deploymentInferenz & Bereitstellung | Unsloth DocumentationLerne, wie du dein feinabgestimmtes Modell speicherst, damit du es in deiner bevorzugten Inferenz-Engine ausführen kannst./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Inferenz & Bereitstellung | Unsloth DocumentationLerne, wie du dein feinabgestimmtes Modell speicherst, damit du es in deiner bevorzugten Inferenz-Engine ausführen kannst./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/deploy-llms-phoneWie man LLMs auf deinem iOS- oder Android-Smartphone ausführt und bereitstellt | Unsloth DocumentationTutorial zum Feinabstimmen deines eigenen LLM und zur Bereitstellung auf deinem Android- oder iPhone mit ExecuTorch./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Wie man LLMs auf deinem iOS- oder Android-Smartphone ausführt und bereitstellt | Unsloth DocumentationTutorial zum Feinabstimmen deines eigenen LLM und zur Bereitstellung auf deinem Android- oder iPhone mit ExecuTorch./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/llama-server-and-openai-endpointLeitfaden zur Bereitstellung von llama-server & OpenAI-Endpunkt | Unsloth DocumentationBereitstellung über llama-server mit einem OpenAI-kompatiblen Endpunkt/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Leitfaden zur Bereitstellung von llama-server & OpenAI-Endpunkt | Unsloth DocumentationBereitstellung über llama-server mit einem OpenAI-kompatiblen Endpunkt/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/lm-studioModelle in LM Studio bereitstellen | Unsloth DocumentationSpeichern von Modellen als GGUF, damit du sie in LM Studio ausführen und bereitstellen kannst/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Modelle in LM Studio bereitstellen | Unsloth DocumentationSpeichern von Modellen als GGUF, damit du sie in LM Studio ausführen und bereitstellen kannst/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/lm-studio/how-to-insta…-lm-studio-cli-in-linux-terminalSo installierst du die LM Studio CLI im Linux-Terminal | Unsloth DocumentationInstallationsanleitung für LM Studio CLI ohne UI in einer Terminal-Instanz./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2So installierst du die LM Studio CLI im Linux-Terminal | Unsloth DocumentationInstallationsanleitung für LM Studio CLI ohne UI in einer Terminal-Instanz./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/saving-to-ggufSpeichern als GGUF | Unsloth DocumentationSpeichern von Modellen in 16-Bit für GGUF, damit du sie für Ollama, Jan AI, Open WebUI und mehr verwenden kannst!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Speichern als GGUF | Unsloth DocumentationSpeichern von Modellen in 16-Bit für GGUF, damit du sie für Ollama, Jan AI, Open WebUI und mehr verwenden kannst!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/saving-to-gguf/speculative-decodingSpekulatives Decoding | Unsloth DocumentationSpekulatives Decoding mit llama-server, llama.cpp, vLLM und mehr für 2x schnellere Inferenz/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Spekulatives Decoding | Unsloth DocumentationSpekulatives Decoding mit llama-server, llama.cpp, vLLM und mehr für 2x schnellere Inferenz/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/saving-to-ollamaSpeichern für Ollama | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Speichern für Ollama | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/sglang-guideLeitfaden für SGLang-Bereitstellung & Inferenz | Unsloth DocumentationAnleitung zum Speichern und Bereitstellen von LLMs für SGLang zum Einsatz von LLMs in der Produktion/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Leitfaden für SGLang-Bereitstellung & Inferenz | Unsloth DocumentationAnleitung zum Speichern und Bereitstellen von LLMs für SGLang zum Einsatz von LLMs in der Produktion/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/troubleshooting-inferenceFehlerbehebung bei der Inferenz | Unsloth DocumentationWenn du Probleme beim Ausführen oder Speichern deines Modells hast./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Fehlerbehebung bei der Inferenz | Unsloth DocumentationWenn du Probleme beim Ausführen oder Speichern deines Modells hast./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/unsloth-inferenceUnsloth-Inferenz | Unsloth DocumentationLerne, wie du dein feinabgestimmtes Modell mit der schnelleren Inferenz von Unsloth ausführst./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Unsloth-Inferenz | Unsloth DocumentationLerne, wie du dein feinabgestimmtes Modell mit der schnelleren Inferenz von Unsloth ausführst./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/vllm-guideLeitfaden für vLLM-Bereitstellung & Inferenz | Unsloth DocumentationAnleitung zum Speichern und Bereitstellen von LLMs für vLLM zum Einsatz von LLMs in der Produktion/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Leitfaden für vLLM-Bereitstellung & Inferenz | Unsloth DocumentationAnleitung zum Speichern und Bereitstellen von LLMs für vLLM zum Einsatz von LLMs in der Produktion/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/vllm-guide/lora-hot-swapping-guideLeitfaden zum Hot-Swapping von LoRA | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Leitfaden zum Hot-Swapping von LoRA | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/inference-and-deployment/vllm-guide/vllm-engine-argumentsvLLM-Engine-Argumente | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2vLLM-Engine-Argumente | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/multi-gpu-training-with-unslothMulti-GPU-Fine-Tuning mit Unsloth | Unsloth DocumentationLerne, wie man LLMs auf mehreren GPUs und mit Parallelisierung mit Unsloth feinabstimmt./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Multi-GPU-Fine-Tuning mit Unsloth | Unsloth DocumentationLerne, wie man LLMs auf mehreren GPUs und mit Parallelisierung mit Unsloth feinabstimmt./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/grundlagen/multi-gpu-training-with-unsloth/ddp - "Multi-GPU Fine-Tuning with Distributed Data Parallel (DDP) | Unsloth Documentation" - Learn how to use the Unsloth CLI to train on multiple GPUs with Distributed Data Parallel (DDP)!
/docs/de/grundlagen/text-to-speech-tts-fine-tuning - "Text-to-Speech (TTS) Fine-Tuning Guide | Unsloth Documentation" - Learn how to fine-tune TTS and STT voice models with Unsloth.
/docs/de/grundlagen/tool-calling-guide-for-local-llms - "Tool Calling Guide for Local LLMs | Unsloth Documentation"
/docs/de/grundlagen/troubleshooting-and-faqs - "Troubleshooting & FAQs | Unsloth Documentation" - Tips for solving problems, plus frequently asked questions.
/docs/de/grundlagen/troubleshooting-and-faqs/hugging-face-hub-xet-debugging - "Hugging Face Hub, XET Debugging | Unsloth Documentation" - Debugging and troubleshooting hanging, stuck, and slow downloads
/docs/de/grundlagen/unsloth-benchmarks - "Unsloth Benchmarks | Unsloth Documentation" - Benchmarks recorded by Unsloth on NVIDIA GPUs.
/docs/de/grundlagen/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot - "Unsloth Dynamic GGUFs on Aider Polyglot | Unsloth Documentation" - Performance of Unsloth Dynamic GGUFs on Aider Polyglot benchmarks
/docs/de/grundlagen/unsloth-environment-flags - "Unsloth Environment Flags | Unsloth Documentation" - Advanced flags that may be useful when fine-tunes fail or you want to disable something.
/docs/de/grundlagen/vision-fine-tuning - "Vision Fine-Tuning | Unsloth Documentation" - Learn how to fine-tune vision/multimodal LLMs with Unsloth
/docs/de/los-gehts/fine-tuning-for-beginners - "Fine-Tuning for Beginners | Unsloth Documentation"
/docs/de/los-gehts/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me - "FAQ + Is Fine-Tuning Right for Me? | Unsloth Documentation" - If you're unsure whether fine-tuning is right for you, check here! Learn about fine-tuning misconceptions, how it compares to RAG, and more:
/docs/de/los-gehts/fine-tuning-for-beginners/unsloth-requirements - "Unsloth Requirements | Unsloth Documentation" - Here are Unsloth's requirements, including system and GPU VRAM requirements.
/docs/de/los-gehts/fine-tuning-llms-guide - "LLM Fine-Tuning Guide | Unsloth Documentation" - Learn all the fundamentals and best practices of fine-tuning. Suitable for beginners.
/docs/de/los-gehts/fine-tuning-llms-guide/datasets-guide - "Datasets Guide | Unsloth Documentation" - Learn how to create and prepare a dataset for fine-tuning.
/docs/de/los-gehts/fine-tuning-llms-guide/lora-hyperparameters-guide - "LoRA Fine-Tuning Hyperparameters Guide | Unsloth Documentation" - Learn step by step the best settings for fine-tuning LLMs: LoRA rank & alpha, epochs, batch size + gradient accumulation, QLoRA vs. LoRA, target modules, and more.
/docs/de/los-gehts/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama - "Tutorial: How to Fine-Tune Llama-3 and Use It in Ollama | Unsloth Documentation" - Beginner's guide to creating a personalized assistant (like ChatGPT) that runs locally on Ollama
/docs/de/los-gehts/fine-tuning-llms-guide/what-model-should-i-use - "Which Model Should I Use for Fine-Tuning? | Unsloth Documentation"
/docs/de/los-gehts/install - "Unsloth Installation | Unsloth Documentation" - Learn how to install Unsloth locally or online.
/docs/de/los-gehts/install/amd - "Guide to Fine-Tuning LLMs on AMD GPUs with Unsloth | Unsloth Documentation" - Learn how to fine-tune large language models (LLMs) on AMD GPUs with Unsloth.
/docs/de/los-gehts/install/conda-install - "Conda Installation | Unsloth Documentation" - To install Unsloth locally with Conda, follow the steps below:
/docs/de/los-gehts/install/docker - "Installing Unsloth via Docker | Unsloth Documentation" - Install Unsloth with our official Docker container
/docs/de/los-gehts/install/google-colab - "Google Colab | Unsloth Documentation" - To install and run Unsloth on Google Colab, follow the steps below:
/docs/de/los-gehts/install/intel - "Fine-Tuning LLMs on Intel GPUs with Unsloth | Unsloth Documentation" - Learn how to train and fine-tune large language models on Intel GPUs.
/docs/de/los-gehts/install/mac - "Installing Unsloth on macOS | Unsloth Documentation"
/docs/de/los-gehts/install/pip-install - "Installing Unsloth via pip and uv | Unsloth Documentation" - To install Unsloth locally via pip, follow the steps below:
/docs/de/los-gehts/install/updating - "Updating Unsloth | Unsloth Documentation" - To update Unsloth, or to use an old version, follow the steps below:
/docs/de/los-gehts/install/vs-code - "How to Fine-Tune LLMs in VS Code with Unsloth & Colab GPUs | Unsloth Documentation" - Guide to fine-tuning models directly in Visual Studio Code via Unsloth and Google Colab.
/docs/de/los-gehts/install/windows-installation - "How to Fine-Tune LLMs on Windows with Unsloth (Step-by-Step Guide) | Unsloth Documentation" - See how to install Unsloth on Windows to start fine-tuning LLMs locally.
/docs/de/los-gehts/reinforcement-learning-rl-guide - "Reinforcement Learning (RL) Guide | Unsloth Documentation" - Learn all about Reinforcement Learning (RL) and how to train your own DeepSeek-R1 reasoning model with Unsloth using GRPO. A complete guide from beginner to advanced.
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-documentation - "Advanced Reinforcement Learning Documentation | Unsloth Documentation" - Advanced documentation settings when using Unsloth with GRPO.
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-documentation/fp16-vs-bf16-for-rl - "FP16 vs. BF16 for RL | Unsloth Documentation" - Defeating the Training-Inference Mismatch via FP16 https://arxiv.org/pdf/2510.26788 shows that using float16 is better than bfloat16
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-docu…tion/gspo-reinforcement-learning - "GSPO Reinforcement Learning | Unsloth Documentation" - Train with GSPO (Group Sequence Policy Optimization) RL in Unsloth.
/docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-documentation/rl-reward-hacking - "RL Reward Hacking | Unsloth Documentation" - Learn what reward hacking is in reinforcement learning and how to counteract it.
/docs/de/los-gehts/reinforcement-learning-rl-guide/fp8-reinforcement-learning - "FP8 Reinforcement Learning | Unsloth Documentation" - Train Reinforcement Learning (RL) and GRPO with FP8 precision using Unsloth.
/docs/de/los-gehts/reinforcement-learning-rl-guide/grpo-long-context - "Reinforcement Learning GRPO with 7x Longer Context | Unsloth Documentation" - Learn how Unsloth enables ultra-long-context fine-tuning for RL.
/docs/de/los-gehts/reinforcement-learning-rl-guide/memory-efficient-rl - "Memory-Efficient RL | Unsloth Documentation"
/docs/de/los-gehts/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto - "Preference Optimization Training - DPO, ORPO & KTO | Unsloth Documentation" - To learn about preference-alignment fine-tuning with DPO, GRPO, ORPO, or KTO via Unsloth, follow the steps below:
/docs/de/los-gehts/reinforcement-learning-rl-guide/tutorial-train-your-own-reasoning-model-with-grpo - "Tutorial: Train Your Own Reasoning Model with GRPO | Unsloth Documentation" - Beginner's guide to turning a model like Llama 3.1 (8B) into a reasoning model using Unsloth and GRPO.
/docs/de/los-gehts/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl - "Vision Reinforcement Learning (VLM RL) | Unsloth Documentation" - Train vision/multimodal models with GRPO and RL using Unsloth!
/docs/de/los-gehts/unsloth-model-catalog - "Unsloth Model Catalog | Unsloth Documentation"
/docs/de/los-gehts/unsloth-notebooks - "Unsloth Notebooks | Unsloth Documentation" - Fine-tuning notebooks: explore the Unsloth catalog.
/docs/de/modelle/glm-5 - "GLM-5: How to Run Locally | Unsloth Documentation" - Run Z.ai's new GLM-5 model on your own local device!
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune - "gpt-oss: How to Run Guide | Unsloth Documentation" - Run and fine-tune OpenAI's new open-source models!
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning - "gpt-oss Reinforcement Learning | Unsloth Documentation"
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforce…ial-how-to-train-gpt-oss-with-rl - "Tutorial: How to Train gpt-oss with RL | Unsloth Documentation" - Learn to train OpenAI gpt-oss with GRPO to autonomously beat 2048, locally or on Colab.
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training - "Long-Context gpt-oss Training | Unsloth Documentation"
/docs/de/modelle/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-oss - "Tutorial: How to Fine-Tune gpt-oss | Unsloth Documentation" - Learn step by step how to train OpenAI gpt-oss locally with Unsloth.
/docs/de/modelle/minimax-m25 - "MiniMax-M2.5: How to Run Guide | Unsloth Documentation" - Run MiniMax-M2.5 locally on your own device!
/docs/de/modelle/nemotron-3 - "NVIDIA Nemotron 3 Nano - How to Run Guide | Unsloth Documentation" - Run and fine-tune NVIDIA Nemotron 3 Nano locally on your device!
/docs/de/modelle/nemotron-3/nemotron-3-super - "NVIDIA Nemotron-3-Super: How to Run Guide | Unsloth Documentation" - Run and fine-tune NVIDIA Nemotron-3-Super-120B-A12B locally on your device!
/docs/de/modelle/qwen3-coder-next - "Qwen3-Coder-Next: How to Run Locally | Unsloth Documentation" - Guide to running Qwen3-Coder-Next locally on your device!
/docs/de/modelle/qwen3-how-to-run-and-fine-tune - "Qwen3 - How to Run & Fine-Tune | Unsloth Documentation" - Learn to run and fine-tune Qwen3 locally with Unsloth + our Dynamic 2.0 quants
/docs/de/modelle/qwen3-how-to-run-and-fine-tune/qwen3-2507 - "Qwen3-2507: How to Run Locally | Unsloth Documentation" - Run the Thinking and Instruct versions of Qwen3-30B-A3B-2507 and 235B-A22B locally on your device!
/docs/de/modelle/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tune - "Qwen3-VL: How to Run Guide | Unsloth Documentation" - Learn to fine-tune and run Qwen3-VL locally with Unsloth.
/docs/de/modelle/qwen3.5/fine-tune - "Qwen3.5 Fine-Tuning Guide | Unsloth Documentation" - Learn how to fine-tune Qwen3.5 LLMs with Unsloth.
/docs/de/modelle/qwen3.5/gguf-benchmarks - "Qwen3.5 GGUF Benchmarks | Unsloth Documentation" - See how Unsloth Dynamic GGUFs perform + analysis of perplexity, KL divergence, and MXFP4.
/docs/de/modelle/tutorials - "Large Language Model (LLM) Tutorials | Unsloth Documentation" - Discover the latest LLMs and learn how to run and fine-tune models locally for optimal performance with Unsloth.
/docs/de/modelle/tutorials/deepseek-ocr-2 - "DeepSeek-OCR 2: How to Run & Fine-Tune | Unsloth Documentation" - Guide to running and fine-tuning DeepSeek-OCR-2 locally.
/docs/de/modelle/tutorials/deepseek-ocr-how-to-run-and-fine-tune - "DeepSeek-OCR: How to Run & Fine-Tune | Unsloth Documentation" - Guide to running and fine-tuning DeepSeek-OCR locally.
/docs/de/modelle/tutorials/deepseek-r1-0528-how-to-run-locally - "DeepSeek-R1-0528: How to Run Locally | Unsloth Documentation" - A guide to running DeepSeek-R1-0528, including Qwen3, on your own local device!
/docs/de/modelle/tutorials/deepseek-r1-how-to-run-locally - "DeepSeek-R1: How to Run Locally | Unsloth Documentation" - A guide to running our 1.58-bit Dynamic Quants for DeepSeek-R1 with llama.cpp.
/docs/de/modelle/tutorials/deepseek-v3-0324-how-to-run-locally - "DeepSeek-V3-0324: How to Run Locally | Unsloth Documentation" - How to run DeepSeek-V3-0324 locally with our accuracy-restoring Dynamic Quants
/docs/de/modelle/tutorials/devstral-2 - "Devstral 2 - How to Run Guide | Unsloth Documentation" - Guide to running the Mistral Devstral 2 models locally: 123B-Instruct-2512 and Small-2-24B-Instruct-2512.
/docs/de/modelle/tutorials/devstral-how-to-run-and-fine-tune - "Devstral: How to Run & Fine-Tune | Unsloth Documentation" - Run and fine-tune Mistral Devstral 1.1, including Small-2507 and 2505.
/docs/de/modelle/tutorials/functiongemma - "FunctionGemma: How to Run & Fine-Tune | Unsloth Documentation" - Learn to run and fine-tune FunctionGemma locally on your device and smartphone.
/docs/de/modelle/tutorials/gemma-3-how-to-run-and-fine-tune - "Gemma 3 - How to Run Guide | Unsloth Documentation" - How to run Gemma 3 effectively with our GGUFs on llama.cpp, Ollama, and Open WebUI, and fine-tune it with Unsloth!
/docs/de/modelle/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune - "Gemma 3n: How to Run & Fine-Tune | Unsloth Documentation" - Run Google's new Gemma 3n locally with Dynamic GGUFs on llama.cpp, Ollama, and Open WebUI, and fine-tune it with Unsloth!
/docs/de/modelle/tutorials/grok-2 - "Grok 2 | Unsloth Documentation" - Run xAI's Grok 2 model locally!
/docs/de/modelle/tutorials/how-to-run-llms-with-docker - "How to Run Local LLMs with Docker: Step-by-Step Guide | Unsloth Documentation" - Learn how to run large language models (LLMs) with Docker & Unsloth on your local device.
/docs/de/modelle/tutorials/kimi-k2-thinking-how-to-run-locallyKimi K2 Thinking: Anleitung zum lokalen Ausführen | Unsloth DocumentationAnleitung zum Ausführen von Kimi-K2-Thinking und Kimi-K2 auf deinem eigenen lokalen Gerät!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Kimi K2 Thinking: Anleitung zum lokalen Ausführen | Unsloth DocumentationAnleitung zum Ausführen von Kimi-K2-Thinking und Kimi-K2 auf deinem eigenen lokalen Gerät!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/modelle/tutorials/llama-4-how-to-run-and-fine-tuneLlama 4: So führst du es aus & feinabstimmst | Unsloth DocumentationWie man Llama 4 lokal mit unseren Dynamic GGUFs ausführt, die im Vergleich zur Standardquantisierung die Genauigkeit wiederherstellen./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Llama 4: So führst du es aus & feinabstimmst | Unsloth DocumentationWie man Llama 4 lokal mit unseren Dynamic GGUFs ausführt, die im Vergleich zur Standardquantisierung die Genauigkeit wiederherstellen./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/modelle/tutorials/magistral-how-to-run-and-fine-tuneMagistral: So führst du es aus & feinabstimmst | Unsloth DocumentationLerne Magistral kennen - die neuen Reasoning-Modelle von Mistral./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Magistral: So führst du es aus & feinabstimmst | Unsloth DocumentationLerne Magistral kennen - die neuen Reasoning-Modelle von Mistral./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/modelle/tutorials/ministral-3Ministral 3 - Anleitung zum Ausführen | Unsloth DocumentationAnleitung für Mistral-Modelle der Reihe Ministral 3, zum lokalen Ausführen oder Feinabstimmen auf deinem Gerät/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Ministral 3 - Anleitung zum Ausführen | Unsloth DocumentationAnleitung für Mistral-Modelle der Reihe Ministral 3, zum lokalen Ausführen oder Feinabstimmen auf deinem Gerät/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/modelle/tutorials/phi-4-reasoning-how-to-run-and-fine-tunePhi-4 Reasoning: So führst du es aus & feinabstimmst | Unsloth DocumentationLerne, Phi-4-Reasoning-Modelle lokal mit Unsloth + unseren Dynamic 2.0 Quants auszuführen und feinabzustimmen/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Phi-4 Reasoning: So führst du es aus & feinabstimmst | Unsloth DocumentationLerne, Phi-4-Reasoning-Modelle lokal mit Unsloth + unseren Dynamic 2.0 Quants auszuführen und feinabzustimmen/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/modelle/tutorials/qwen-image-2512Wie man Qwen-Image-2512 lokal in ComfyUI ausführt | Unsloth DocumentationSchritt-für-Schritt-Tutorial zum Ausführen von Qwen-Image-2512 auf deinem lokalen Gerät mit ComfyUI./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Wie man Qwen-Image-2512 lokal in ComfyUI ausführt | Unsloth DocumentationSchritt-für-Schritt-Tutorial zum Ausführen von Qwen-Image-2512 auf deinem lokalen Gerät mit ComfyUI./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/modelle/tutorials/qwen3-coder-how-to-run-locallyQwen3-Coder: So führst du es lokal aus | Unsloth DocumentationFühre Qwen3-Coder-30B-A3B-Instruct und 480B-A35B lokal mit den Dynamic Quants von Unsloth aus./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Qwen3-Coder: So führst du es lokal aus | Unsloth DocumentationFühre Qwen3-Coder-30B-A3B-Instruct und 480B-A35B lokal mit den Dynamic Quants von Unsloth aus./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/modelle/tutorials/qwen3-nextQwen3-Next: Anleitung zum lokalen Ausführen | Unsloth DocumentationFühre die Versionen Qwen3-Next-80B-A3B-Instruct und Thinking lokal auf deinem Gerät aus!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Qwen3-Next: Anleitung zum lokalen Ausführen | Unsloth DocumentationFühre die Versionen Qwen3-Next-80B-A3B-Instruct und Thinking lokal auf deinem Gerät aus!/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/modelle/tutorials/qwq-32b-how-to-run-effectivelyQwQ-32B: Wie man es effektiv ausführt | Unsloth DocumentationWie man QwQ-32B mit unseren Fehlerbehebungen und ohne endlose Generierungen + GGUFs effektiv ausführt./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2QwQ-32B: Wie man es effektiv ausführt | Unsloth DocumentationWie man QwQ-32B mit unseren Fehlerbehebungen und ohne endlose Generierungen + GGUFs effektiv ausführt./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/neu/embedding-finetuning · Guide to Fine-tuning Embedding Models with Unsloth | Unsloth Documentation · Learn how to easily fine-tune embedding models with Unsloth. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/neu/faster-moe · Fine-tune MoE Models 12x Faster with Unsloth | Unsloth Documentation · Train MoE LLMs locally with the Unsloth guide. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/neu/studio · Introducing Unsloth Studio | Unsloth Documentation · Run and train AI models locally with Unsloth Studio. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/neu/studio/chat · How to Run Models with Unsloth Studio | Unsloth Documentation · Run AI models, LLMs and GGUFs locally with Unsloth Studio. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/neu/studio/data-recipe · Unsloth Data Recipes | Unsloth Documentation · Learn how to create, build and edit datasets with Unsloth Studio's Data Recipes. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/neu/studio/export · Exporting Models with Unsloth Studio | Unsloth Documentation · Learn how to export your safetensors or LoRA model files to GGUF or other formats. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/neu/studio/install · Installing Unsloth Studio | Unsloth Documentation · Learn how to install Unsloth Studio on your local device. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/de/neu/studio/start · Getting Started with Unsloth Studio | Unsloth Documentation · A guide to getting started with the fine-tuning studio, data recipes, model export and chat. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr · Unsloth Documentation | Unsloth Documentation · Unsloth is an open-source framework for running and training models. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/blog/3x-faster-training-packing · 3x Faster LLM Training with Unsloth Kernels + Packing | Unsloth Documentation · Learn how Unsloth increases training throughput and eliminates padding waste for fine-tuning. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/blog/500k-context-length-fine-tuning · Fine-tuning with 500K Context Length | Unsloth Documentation · Learn how to enable fine-tuning with a context window of more than 500K tokens with Unsloth. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/blog/comfyui · How to Run Diffusion Image GGUFs in ComfyUI | Unsloth Documentation · Guide to running Unsloth Diffusion GGUF models in ComfyUI. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth · Fine-tuning LLMs with Blackwell, the RTX 50 Series and Unsloth | Unsloth Documentation · Learn how to fine-tune LLMs on NVIDIA's Blackwell RTX 50 series and B200 GPUs with our step-by-step guide. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth · Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth | Unsloth Documentation · Tutorial on how to fine-tune and do reinforcement learning (RL) with OpenAI gpt-oss on NVIDIA DGX Spark. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/blog/how-to-fine-tune-llms-with-unsloth-and-docker · How to Fine-tune LLMs with Unsloth and Docker | Unsloth Documentation · Learn how to fine-tune LLMs or do reinforcement learning (RL) with Unsloth's Docker image. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/blog/quantization-aware-training-qat · Quantization-Aware Training (QAT) | Unsloth Documentation · Quantize models to 4-bit with Unsloth and PyTorch to recover accuracy. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/fine-tuning-for-beginners · Fine-tuning for Beginners | Unsloth Documentation · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me · FAQ + Is Fine-tuning Right for Me? | Unsloth Documentation · If you're unsure whether fine-tuning is right for you, look here! Learn about common fine-tuning misconceptions, how it compares to RAG, and more: · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/fine-tuning-for-beginners/unsloth-requirements · Unsloth Requirements | Unsloth Documentation · Here are Unsloth's requirements, including system requirements and GPU VRAM. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/fine-tuning-llms-guide · LLM Fine-tuning Guide | Unsloth Documentation · Learn all the fine-tuning basics and best practices. Beginner-friendly. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/fine-tuning-llms-guide/datasets-guide · Datasets Guide | Unsloth Documentation · Learn how to create and prepare a dataset for fine-tuning. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/fine-tuning-llms-guide/lora-hyperparameters-guide · LoRA Fine-tuning Hyperparameters Guide | Unsloth Documentation · Learn step by step the best LLM fine-tuning settings: LoRA rank and alpha, epochs, batch size + gradient accumulation, QLoRA vs LoRA, target modules, and more. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama · Tutorial: How to Fine-tune Llama-3 and Use It in Ollama | Unsloth Documentation · Beginner's guide to creating a customized personal assistant (like ChatGPT) to run locally on Ollama · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/fine-tuning-llms-guide/what-model-should-i-use · What Model Should I Use for Fine-tuning? | Unsloth Documentation · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install · Installing Unsloth | Unsloth Documentation · Learn how to install Unsloth locally or online. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/amd · Guide to Fine-tuning LLMs on AMD GPUs with Unsloth | Unsloth Documentation · Learn how to fine-tune large language models (LLMs) on AMD GPUs with Unsloth. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/conda-install · Conda Install | Unsloth Documentation · To install Unsloth locally with Conda, follow the steps below: · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/docker · Installing Unsloth via Docker | Unsloth Documentation · Install Unsloth using our official Docker container · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/google-colab · Google Colab | Unsloth Documentation · To install and run Unsloth on Google Colab, follow the steps below: · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/intel · Fine-tuning LLMs on Intel GPUs with Unsloth | Unsloth Documentation · Learn how to train and fine-tune large language models on Intel GPUs. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/mac · Installing Unsloth on macOS | Unsloth Documentation · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/pip-install · Installing Unsloth via pip and uv | Unsloth Documentation · To install Unsloth locally via pip, follow the steps below: · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/updating · Updating Unsloth | Unsloth Documentation · To update or use an older version of Unsloth, follow the steps below: · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/vs-code · How to Fine-tune LLMs in VS Code with Unsloth and Colab GPUs | Unsloth Documentation · Guide to fine-tuning models directly in Visual Studio Code via Unsloth and Google Colab. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/install/windows-installation · How to Fine-tune LLMs on Windows with Unsloth (Step-by-Step Guide) | Unsloth Documentation · Learn how to install Unsloth on Windows to start fine-tuning LLMs locally. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide · Reinforcement Learning (RL) Guide | Unsloth Documentation · Learn all about reinforcement learning (RL) and how to train your own DeepSeek-R1 reasoning model with Unsloth using GRPO. A complete beginner-to-advanced guide. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-documentation · Advanced Reinforcement Learning Documentation | Unsloth Documentation · Advanced documentation settings when using Unsloth with GRPO. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-documentation/fp16-vs-bf16-for-rl · FP16 vs BF16 for RL | Unsloth Documentation · The paper "Defeating the Training-Inference Mismatch via FP16" https://arxiv.org/pdf/2510.26788 shows why using float16 is better than bfloat16 · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-docu…tion/gspo-reinforcement-learning · GSPO Reinforcement Learning | Unsloth Documentation · Train with GSPO (Group Sequence Policy Optimization) RL in Unsloth. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-documentation/rl-reward-hacking · Reward Hacking in RL | Unsloth Documentation · Learn what reward hacking in reinforcement learning is and how to counter it. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/fp8-reinforcement-learning · FP8 Reinforcement Learning | Unsloth Documentation · Train reinforcement learning (RL) and GRPO in FP8 precision with Unsloth. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/grpo-long-context · GRPO Reinforcement Learning with 7x Longer Context | Unsloth Documentation · Learn how Unsloth enables very long-context RL fine-tuning. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/memory-efficient-rl · Memory-Efficient RL | Unsloth Documentation · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto · Preference Optimization Training - DPO, ORPO and KTO | Unsloth Documentation · Learn about preference-alignment fine-tuning with DPO, GRPO, ORPO or KTO via Unsloth, following the steps below: · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/tutorial-train-your-own-reasoning-model-with-grpo · Tutorial: Train Your Own Reasoning Model with GRPO | Unsloth Documentation · Beginner's guide to turning a model like Llama 3.1 (8B) into a reasoning model using Unsloth and GRPO. · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl · Vision Reinforcement Learning (VLM RL) | Unsloth Documentation · Train vision/multimodal models via GRPO and RL with Unsloth! · og:image /docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/unsloth-model-catalogCatalogue de modèles Unsloth | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Catalogue de modèles Unsloth | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/commencer/unsloth-notebooksCarnets Unsloth | Unsloth DocumentationCarnets de fine-tuning : explorez le catalogue Unsloth./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Carnets Unsloth | Unsloth DocumentationCarnets de fine-tuning : explorez le catalogue Unsloth./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/modeles/glm-5GLM-5 : guide pour exécuter localement | Unsloth DocumentationExécutez le nouveau modèle GLM-5 de Z.ai sur votre propre appareil local !/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2GLM-5 : guide pour exécuter localement | Unsloth DocumentationExécutez le nouveau modèle GLM-5 de Z.ai sur votre propre appareil local !/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tunegpt-oss : guide d'exécution | Unsloth DocumentationExécutez et fine-tunez les nouveaux modèles open source d'OpenAI !/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2gpt-oss : guide d'exécution | Unsloth DocumentationExécutez et fine-tunez les nouveaux modèles open source d'OpenAI !/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learningApprentissage par renforcement gpt-oss | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Apprentissage par renforcement gpt-oss | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforce…ial-how-to-train-gpt-oss-with-rlTutoriel : comment entraîner gpt-oss avec RL | Unsloth DocumentationApprenez à entraîner OpenAI gpt-oss avec GRPO pour battre 2048 de manière autonome, localement ou sur Colab./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Tutoriel : comment entraîner gpt-oss avec RL | Unsloth DocumentationApprenez à entraîner OpenAI gpt-oss avec GRPO pour battre 2048 de manière autonome, localement ou sur Colab./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-trainingEntraînement gpt-oss à long contexte | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Entraînement gpt-oss à long contexte | Unsloth Documentation/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-ossTutoriel : comment fine-tuner gpt-oss | Unsloth DocumentationApprenez pas à pas à entraîner OpenAI gpt-oss localement avec Unsloth./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2Tutoriel : comment fine-tuner gpt-oss | Unsloth DocumentationApprenez pas à pas à entraîner OpenAI gpt-oss localement avec Unsloth./docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
/docs/fr/modeles/minimax-m25MiniMax-M2.5 : guide d'exécution | Unsloth DocumentationExécutez MiniMax-M2.5 localement sur votre propre appareil !/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2MiniMax-M2.5 : guide d'exécution | Unsloth DocumentationExécutez MiniMax-M2.5 localement sur votre propre appareil !/docs/~gitbook/image?url=https%3A%2F%2F2815821428-files.gitbook.io%…00&height=630&sign=b1ca68fa&sv=2
You have reached the hard limit of 200 rows, a protection against very large output or memory exhaustion. You can change this with --rows-limit.

Heading structure

Found 200 row(s).
Heading structure | Count | Errors | URL
  • <h1> ⚠️Troubleshooting & FAQs
    • <h3> Fine-tuning a new model not supported by Unsloth? [#fine-tuning-a-new-model-not-supported-by-unsloth]
    • <h3> Running in Unsloth works well, but after exporting & running on other platforms, the results are poor [#running-in-unsloth-works-well-but-after-exporting-and-running-on-other-platforms-the-results-are-poo]
    • <h3> Saving to GGUF / vLLM 16bit crashes [#saving-to-gguf-vllm-16bit-crashes]
    • <h3> How do I manually save to GGUF? [#how-do-i-manually-save-to-gguf]
    • <h3> Why is Q8_K_XL slower than Q8_0 GGUF? [#why-is-q8_k_xl-slower-than-q8_0-gguf]
    • <h3> How to do Evaluation [#how-to-do-evaluation]
    • <h3> Evaluation Loop - Out of Memory or crashing. [#evaluation-loop-out-of-memory-or-crashing]
    • <h3> How do I do Early Stopping? [#how-do-i-do-early-stopping]
    • <h3> Downloading gets stuck at 90 to 95% [#downloading-gets-stuck-at-90-to-95]
    • <h3> RuntimeError: CUDA error: device-side assert triggered [#runtimeerror-cuda-error-device-side-assert-triggered]
    • <h3> All labels in your dataset are -100. Training losses will be all 0. [#all-labels-in-your-dataset-are-100.-training-losses-will-be-all-0]
    • <h3> Unsloth is slower than expected? [#unsloth-is-slower-than-expected]
    • <h3> Some weights of Gemma3nForConditionalGeneration were not initialized from the model checkpoint [#some-weights-of-gemma3nforconditionalgeneration-were-not-initialized-from-the-model-checkpoint]
    • <h3> NotImplementedError: A UTF-8 locale is required. Got ANSI [#notimplementederror-a-utf-8-locale-is-required.-got-ansi]
    • <h3> Citing Unsloth [#citing-unsloth]
Count: 16 | Errors: 15 | /docs/basics/troubleshooting-and-faqs
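In the raw crawler export, the Count and Errors cells of each row can arrive fused with the URL (for example, `1615/docs/basics/troubleshooting-and-faqs` encodes 16 headings and 15 errors). A minimal Python sketch of splitting such a fused cell, assuming the true heading count is known from the headings listed above it (the function name is illustrative, not part of any crawler API):

```python
def split_fused_cell(cell: str, heading_count: int):
    """Split a fused 'CountErrorsURL' cell like '1615/docs/...'.

    The digits before the first '/' hold Count then Errors with no
    separator; knowing the real heading count disambiguates them.
    """
    digits, slash, url = cell.partition("/")
    count_str = str(heading_count)
    if not digits.startswith(count_str):
        raise ValueError(f"cell {cell!r} does not start with {heading_count}")
    errors = int(digits[len(count_str):])
    return heading_count, errors, slash + url

# Example row from the report: an <h1> followed by 15 <h3> headings.
print(split_fused_cell("1615/docs/basics/troubleshooting-and-faqs", 16))
# → (16, 15, '/docs/basics/troubleshooting-and-faqs')
```

The external heading count is what disambiguates `1615` into 16/15 rather than, say, 1/615; the digits alone are ambiguous.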
  • <h1> ⚠️Dépannage et FAQ
    • <h3> Affiner un nouveau modèle non pris en charge par Unsloth ? [#affiner-un-nouveau-modele-non-pris-en-charge-par-unsloth]
    • <h3> Exécuter dans Unsloth fonctionne bien, mais après exportation et exécution sur d'autres plates-formes, les résultats sont médiocres [#executer-dans-unsloth-fonctionne-bien-mais-apres-exportation-et-execution-sur-dautres-plates-formes]
    • <h3> Enregistrer en GGUF / vLLM 16 bits plante [#enregistrer-en-gguf-vllm-16-bits-plante]
    • <h3> Comment enregistrer manuellement au format GGUF ? [#comment-enregistrer-manuellement-au-format-gguf]
    • <h3> Pourquoi Q8_K_XL est-il plus lent que Q8_0 GGUF ? [#pourquoi-q8_k_xl-est-il-plus-lent-que-q8_0-gguf]
    • <h3> Comment faire l'évaluation [#comment-faire-levaluation]
    • <h3> Boucle d'évaluation - Mémoire insuffisante ou plantage. [#boucle-devaluation-memoire-insuffisante-ou-plantage]
    • <h3> Comment faire un Early Stopping ? [#comment-faire-un-early-stopping]
    • <h3> Le téléchargement reste bloqué à 90 à 95 % [#le-telechargement-reste-bloque-a-90-a-95]
    • <h3> RuntimeError : CUDA error: device-side assert triggered [#runtimeerror-cuda-error-device-side-assert-triggered]
    • <h3> Toutes les étiquettes de votre jeu de données sont -100. Les pertes d'entraînement seront toutes à 0. [#toutes-les-etiquettes-de-votre-jeu-de-donnees-sont-100.-les-pertes-dentrainement-seront-toutes-a-0]
    • <h3> Unsloth est plus lent que prévu ? [#unsloth-est-plus-lent-que-prevu]
    • <h3> Certains poids de Gemma3nForConditionalGeneration n'ont pas été initialisés à partir du checkpoint du modèle [#certains-poids-de-gemma3nforconditionalgeneration-nont-pas-ete-initialises-a-partir-du-checkpoint-du]
    • <h3> NotImplementedError : Un paramètre régional UTF-8 est requis. ANSI détecté [#notimplementederror-un-parametre-regional-utf-8-est-requis.-ansi-detecte]
    • <h3> Citer Unsloth [#citer-unsloth]
Count: 16 | Errors: 15 | /docs/fr/notions-de-base/troubleshooting-and-faqs
  • <h1> ⚠️故障排查与常见问题
    • <h3> 要微调 Unsloth 不支持的新模型? [#yao-wei-tiao-unsloth-bu-zhi-chi-de-xin-mo-xing]
    • <h3> 在 Unsloth 中运行效果良好,但导出并在其他平台运行后结果很差 [#zai-unsloth-zhong-yun-xing-xiao-guo-liang-hao-dan-dao-chu-bing-zai-qi-ta-ping-tai-yun-xing-hou-jie-g]
    • <h3> 保存为 GGUF / vLLM 16bit 崩溃 [#bao-cun-wei-gguf-vllm-16bit-beng-kui]
    • <h3> 如何手动保存为 GGUF? [#ru-he-shou-dong-bao-cun-wei-gguf]
    • <h3> 为什么 Q8_K_XL 比 Q8_0 GGUF 慢? [#wei-shen-me-q8kxl-bi-q80-gguf-man]
    • <h3> 如何进行评估 [#ru-he-jin-xing-ping-gu]
    • <h3> 评估循环 - 内存不足或崩溃。 [#ping-gu-xun-huan-nei-cun-bu-zu-huo-beng-kui]
    • <h3> 如何实现早停(Early Stopping)? [#ru-he-shi-xian-zao-ting-early-stopping]
    • <h3> 下载卡在 90% 到 95% [#xia-zai-ka-zai-90-dao-95]
    • <h3> RuntimeError: CUDA error: device-side assert triggered [#runtimeerror-cuda-error-device-side-assert-triggered]
    • <h3> 您的数据集中所有标签都是 -100。训练损失将全部为 0。 [#nin-de-shu-ju-ji-zhong-suo-you-biao-qian-dou-shi-100-xun-lian-sun-shi-jiang-quan-bu-wei-0]
    • <h3> Unsloth 性能比预期慢? [#unsloth-xing-neng-bi-yu-qi-man]
    • <h3> Gemma3nForConditionalGeneration 的某些权重未从模型检查点初始化 [#gemma3nforconditionalgeneration-de-mou-xie-quan-zhong-wei-cong-mo-xing-jian-cha-dian-chu-shi-hua]
    • <h3> NotImplementedError: 需要 UTF-8 区域设置。得到的是 ANSI [#notimplementederror-xu-yao-utf8-qu-yu-she-zhi-de-dao-de-shi-ansi]
    • <h3> 引用 Unsloth [#yin-yong-unsloth]
Count: 16 | Errors: 15 | /docs/zh/ji-chu-zhi-shi/troubleshooting-and-faqs
  • <h1> ⚠️トラブルシューティングとFAQ
    • <h3> Unsloth がサポートしていない新しいモデルをファインチューニングしますか? [#unsloth-gasaptoshiteinaishiimoderuwofainchningushimasuka]
    • <h3> Unslothでの実行はうまくいきますが、エクスポートして他のプラットフォームで実行すると結果が悪い [#unslothdenohaumakuikimasugaekusuptoshitenopurattofmudesurutogai]
    • <h3> GGUF / vLLM 16bit への保存がクラッシュする [#gguf-vllm-16bit-henogakurasshusuru]
    • <h3> 手動で GGUF に保存するにはどうすればよいですか? [#de-gguf-nisurunihadousurebayoidesuka]
    • <h3> なぜ Q8_K_XL は Q8_0 GGUF より遅いのか? [#naze-q8kxl-ha-q80-gguf-yoriinoka]
    • <h3> 評価(Evaluation)の方法 [#evaluationno]
    • <h3> 評価ループ - メモリ不足(OOM)やクラッシュ [#rpu-memorioomyakurasshu]
    • <h3> 早期終了(Early Stopping)はどう行いますか? [#early-stoppinghadouimasuka]
    • <h3> ダウンロードが 90〜95% で止まる [#daunrdoga-9095-demaru]
    • <h3> RuntimeError: CUDA error: device-side assert triggered [#runtimeerror-cuda-error-device-side-assert-triggered]
    • <h3> データセット内のすべてのラベルが -100 になっています。トレーニング損失はすべて 0 になります。 [#dtasettonosubetenoraberuga-100-ninatteimasutorninguhasubete-0-ninarimasu]
    • <h3> Unsloth が期待より遅いですか? [#unsloth-gayoriidesuka]
    • <h3> Gemma3nForConditionalGeneration の一部の重みがモデルチェックポイントから初期化されていませんでした [#gemma3nforconditionalgeneration-nonomigamoderuchekkupointokarasareteimasendeshita]
    • <h3> NotImplementedError: UTF-8 ロケールが必要です。ANSI が検出されました [#notimplementederror-utf-8-rokrugadesuansi-gasaremashita]
    • <h3> Unsloth の引用(Citing Unsloth) [#unsloth-nociting-unsloth]
Count: 16 | Errors: 15 | /docs/jp/ji-ben/troubleshooting-and-faqs
  • <h1> ⚠️Fehlerbehebung & FAQs
    • <h3> Feinabstimmung eines neuen Modells, das von Unsloth nicht unterstützt wird? [#feinabstimmung-eines-neuen-modells-das-von-unsloth-nicht-unterstutzt-wird]
    • <h3> Das Ausführen in Unsloth funktioniert gut, aber nach dem Export und dem Ausführen auf anderen Plattformen sind die Ergebnisse schlecht [#das-ausfuhren-in-unsloth-funktioniert-gut-aber-nach-dem-export-und-dem-ausfuhren-auf-anderen-plattfo]
    • <h3> Speichern in GGUF / vLLM 16bit stürzt ab [#speichern-in-gguf-vllm-16bit-sturzt-ab]
    • <h3> Wie speichere ich manuell in GGUF? [#wie-speichere-ich-manuell-in-gguf]
    • <h3> Warum ist Q8_K_XL langsamer als Q8_0 GGUF? [#warum-ist-q8_k_xl-langsamer-als-q8_0-gguf]
    • <h3> Wie man Evaluation durchführt [#wie-man-evaluation-durchfuhrt]
    • <h3> Evaluationsschleife – Out of Memory oder Absturz. [#evaluationsschleife-out-of-memory-oder-absturz]
    • <h3> Wie mache ich Early Stopping? [#wie-mache-ich-early-stopping]
    • <h3> Download bleibt bei 90 bis 95% hängen [#download-bleibt-bei-90-bis-95-hangen]
    • <h3> RuntimeError: CUDA error: device-side assert triggered [#runtimeerror-cuda-error-device-side-assert-triggered]
    • <h3> Alle Labels in Ihrem Dataset sind -100. Trainingsverluste werden alle 0 sein. [#alle-labels-in-ihrem-dataset-sind-100.-trainingsverluste-werden-alle-0-sein]
    • <h3> Unsloth ist langsamer als erwartet? [#unsloth-ist-langsamer-als-erwartet]
    • <h3> Einige Gewichte von Gemma3nForConditionalGeneration wurden nicht aus dem Modell-Checkpoint initialisiert [#einige-gewichte-von-gemma3nforconditionalgeneration-wurden-nicht-aus-dem-modell-checkpoint-initialis]
    • <h3> NotImplementedError: Es wird eine UTF-8-Locale benötigt. ANSI erhalten [#notimplementederror-es-wird-eine-utf-8-locale-benotigt.-ansi-erhalten]
    • <h3> Unsloth zitieren [#unsloth-zitieren]
Count: 16 | Errors: 15 | /docs/de/grundlagen/troubleshooting-and-faqs
  • <h1> 🎱FP8 Reinforcement Learning
    • <h3> 🌻FP8 vs BF16 Training [#fp8-vs-bf16-training]
    • <h3> ⚡FP8 Performance Benchmarks [#fp8-performance-benchmarks]
    • <h3> ⛩️Inference = 96% of RL training [#inference-96-of-rl-training]
    • <h3> 🔢60% less memory usage [#id-60-less-memory-usage]
    • <h3> ❓How to use FP8 RL / installation [#how-to-use-fp8-rl-installation]
    • <h3> 💿Implementing FP8 Training [#implementing-fp8-training]
    • <h3> 🔥TorchAO Collab [#torchao-collab]
    • <h3> 🐦On the fly TorchAO FP8 quantization [#on-the-fly-torchao-fp8-quantization]
    • <h3> 🎉Unsloth FP8 uploads [#unsloth-fp8-uploads]
    • <h3> 💁Acknowledgements [#acknowledgements]
Count: 11 | Errors: 10 | /docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
  • <h1> 🎱FP8 强化学习
    • <h3> 🌻FP8 与 BF16 的训练比较 [#fp8-yu-bf16-de-xun-lian-bi-jiao]
    • <h3> ⚡FP8 性能基准 [#fp8-xing-neng-ji-zhun]
    • <h3> ⛩️推理占 RL 训练的 96% [#tui-li-zhan-rl-xun-lian-de-96]
    • <h3> 🔢显存使用减少 60% [#xian-cun-shi-yong-jian-shao-60]
    • <h3> ❓如何使用 FP8 RL / 安装 [#ru-he-shi-yong-fp8-rl-an-zhuang]
    • <h3> 💿实现 FP8 训练 [#shi-xian-fp8-xun-lian]
    • <h3> 🔥TorchAO 合作 [#torchao-he-zuo]
    • <h3> 🐦按需 TorchAO FP8 量化 [#an-xu-torchao-fp8-liang-hua]
    • <h3> 🎉Unsloth 的 FP8 上传 [#unsloth-de-fp8-shang-chuan]
    • <h3> 💁致谢 [#zhi-xie]
Count: 11 | Errors: 10 | /docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-guide/fp8-reinforcement-learning
  • <h1> 🎱Apprentissage par renforcement FP8
    • <h3> 🌻FP8 vs BF16 Entraînement [#fp8-vs-bf16-entrainement]
    • <h3> ⚡Benchmarks de performance FP8 [#benchmarks-de-performance-fp8]
    • <h3> ⛩️L'inférence = 96% de l'entraînement RL [#linference-96-de-lentrainement-rl]
    • <h3> 🔢60% moins d'utilisation mémoire [#id-60-moins-dutilisation-memoire]
    • <h3> ❓Comment utiliser FP8 RL / installation [#comment-utiliser-fp8-rl-installation]
    • <h3> 💿Implémentation de l'entraînement FP8 [#implementation-de-lentrainement-fp8]
    • <h3> 🔥Collab TorchAO [#collab-torchao]
    • <h3> 🐦Quantification FP8 à la volée TorchAO [#quantification-fp8-a-la-volee-torchao]
    • <h3> 🎉Téléversements FP8 Unsloth [#televersements-fp8-unsloth]
    • <h3> 💁Remerciements [#remerciements]
Count: 11 | Errors: 10 | /docs/fr/commencer/reinforcement-learning-rl-guide/fp8-reinforcement-learning
  • <h1> 🎱FP8 Reinforcement Learning
    • <h3> 🌻FP8 vs BF16 Training [#fp8-vs-bf16-training]
    • <h3> ⚡FP8 Leistungs-Benchmarks [#fp8-leistungs-benchmarks]
    • <h3> ⛩️Inference = 96% des RL-Trainings [#inference-96-des-rl-trainings]
    • <h3> 🔢60% weniger Speicherverbrauch [#id-60-weniger-speicherverbrauch]
    • <h3> ❓Wie man FP8 RL verwendet / Installation [#wie-man-fp8-rl-verwendet-installation]
    • <h3> 💿Implementierung von FP8-Training [#implementierung-von-fp8-training]
    • <h3> 🔥TorchAO Zusammenarbeit [#torchao-zusammenarbeit]
    • <h3> 🐦On-the-fly TorchAO FP8-Quantisierung [#on-the-fly-torchao-fp8-quantisierung]
    • <h3> 🎉Unsloth FP8 Uploads [#unsloth-fp8-uploads]
    • <h3> 💁Danksagungen [#danksagungen]
Count: 11 | Errors: 10 | /docs/de/los-gehts/reinforcement-learning-rl-guide/fp8-reinforcement-learning
  • <h1> SGLang Deployment & Inference Guide
    • <h3> 💻Installing SGLang [#installing-sglang]
    • <h3> 🐛Debugging SGLang Installation issues [#debugging-sglang-installation-issues]
    • <h3> 🚚Deploying SGLang models [#deploying-sglang-models]
    • <h3> 🦥Deploying Unsloth finetunes in SGLang [#deploying-unsloth-finetunes-in-sglang]
    • <h3> 🚃gpt-oss-20b: Unsloth & SGLang Deployment Guide [#gpt-oss-20b-unsloth-and-sglang-deployment-guide]
    • <h3> 💎FP8 Online Quantization [#fp8-online-quantization]
    • <h3> ⚡Benchmarking SGLang [#benchmarking-sglang]
    • <h3> 🏃SGLang Interactive Offline Mode [#sglang-interactive-offline-mode]
    • <h3> 🎇GGUFs in SGLang [#ggufs-in-sglang]
    • <h3> 🎬High throughput GGUF serving with SGLang [#high-throughput-gguf-serving-with-sglang]
Count: 11 | Errors: 10 | /docs/basics/inference-and-deployment/sglang-guide
  • <h1> 🎱FP8強化学習
    • <h3> 🌻FP8対BF16のトレーニング [#fp8bf16notorningu]
    • <h3> ⚡FP8パフォーマンスベンチマーク [#fp8pafmansubenchimku]
    • <h3> ⛩️推論はRLトレーニングの96%に相当 [#harltorninguno96ni]
    • <h3> 🔢メモリ使用量を60%削減 [#memoriwo60]
    • <h3> ❓FP8 RL の使い方 / インストール方法 [#fp8-rl-noi-insutru]
    • <h3> 💿FP8トレーニングの実装 [#fp8torninguno]
    • <h3> 🔥 TorchAOとのコラボ [#torchaotonokorabo]
    • <h3> 🐦追記:DeepSeekのDeepGEMMも試しましたが、エンドツーエンドで完全に統合してクリーンに比較できる状態にすることはできませんでした。 [#deepseeknodeepgemmmoshimashitagaendotsendodenishitekurnnidekirunisurukotohadekimasendeshita]
    • <h3> 🎉load_in_fp8 = True, # ブロックFP8なら"block"、行FP8ならTrue、無効ならFalse [#loadinfp8-true-burokkufp8narablockfp8naratruenarafalse]
    • <h3> 💁h-small — [#h-small]
Count: 11 | Errors: 10 | /docs/jp/meru/reinforcement-learning-rl-guide/fp8-reinforcement-learning
  • <h1> SGLang 部署与推理指南
    • <h3> 💻安装 SGLang [#an-zhuang-sglang]
    • <h3> 🐛调试 SGLang 安装问题 [#tiao-shi-sglang-an-zhuang-wen-ti]
    • <h3> 🚚部署 SGLang 模型 [#bu-shu-sglang-mo-xing]
    • <h3> 🦥 在 SGLang 中部署 Unsloth 微调模型 [#zai-sglang-zhong-bu-shu-unsloth-wei-tiao-mo-xing]
    • <h3> 🚃gpt-oss-20b:Unsloth 与 SGLang 部署指南 [#gptoss20bunsloth-yu-sglang-bu-shu-zhi-nan]
    • <h3> 💎FP8 在线量化 [#fp8-zai-xian-liang-hua]
    • <h3> ⚡ SGLang 基准测试 [#sglang-ji-zhun-ce-shi]
    • <h3> 🏃SGLang 交互式离线模式 [#sglang-jiao-hu-shi-li-xian-mo-shi]
    • <h3> 🎇SGLang 中的 GGUF [#sglang-zhong-de-gguf]
    • <h3> 🎬使用 SGLang 提供高吞吐量的 GGUF 服务 [#shi-yong-sglang-ti-gong-gao-tun-tu-liang-de-gguf-fu-wu]
Count: 11 | Errors: 10 | /docs/zh/ji-chu-zhi-shi/inference-and-deployment/sglang-guide
  • <h1> Guide de déploiement et d'inférence SGLang
    • <h3> 💻Installation de SGLang [#installation-de-sglang]
    • <h3> 🐛Débogage des problèmes d'installation de SGLang [#debogage-des-problemes-dinstallation-de-sglang]
    • <h3> 🚚Déploiement des modèles SGLang [#deploiement-des-modeles-sglang]
    • <h3> 🦥Déploiement des finetunes Unsloth dans SGLang [#deploiement-des-finetunes-unsloth-dans-sglang]
    • <h3> 🚃gpt-oss-20b : Guide de déploiement Unsloth & SGLang [#gpt-oss-20b-guide-de-deploiement-unsloth-and-sglang]
    • <h3> 💎Quantification FP8 en ligne [#quantification-fp8-en-ligne]
    • <h3> ⚡Benchmarking SGLang [#benchmarking-sglang]
    • <h3> 🏃Mode interactif hors ligne SGLang [#mode-interactif-hors-ligne-sglang]
    • <h3> 🎇GGUFs dans SGLang [#ggufs-dans-sglang]
    • <h3> 🎬Service GGUF à haut débit avec SGLang [#service-gguf-a-haut-debit-avec-sglang]
Count: 11 | Errors: 10 | /docs/fr/notions-de-base/inference-and-deployment/sglang-guide
  • <h1> Leitfaden für SGLang-Bereitstellung & Inferenz
    • <h3> 💻SGLang installieren [#sglang-installieren]
    • <h3> 🐛Fehlerbehebung bei SGLang-Installationsproblemen [#fehlerbehebung-bei-sglang-installationsproblemen]
    • <h3> 🚚SGLang-Modelle bereitstellen [#sglang-modelle-bereitstellen]
    • <h3> 🦥Unsloth-Finetunes in SGLang bereitstellen [#unsloth-finetunes-in-sglang-bereitstellen]
    • <h3> 🚃gpt-oss-20b: Unsloth- & SGLang-Bereitstellungsanleitung [#gpt-oss-20b-unsloth-and-sglang-bereitstellungsanleitung]
    • <h3> 💎FP8 Online-Quantisierung [#fp8-online-quantisierung]
    • <h3> ⚡SGLang-Benchmarking [#sglang-benchmarking]
    • <h3> 🏃SGLang interaktiver Offline-Modus [#sglang-interaktiver-offline-modus]
    • <h3> 🎇GGUFs in SGLang [#ggufs-in-sglang]
    • <h3> 🎬Hoher Durchsatz beim Serving von GGUFs mit SGLang [#hoher-durchsatz-beim-serving-von-ggufs-mit-sglang]
Count: 11 | Errors: 10 | /docs/de/grundlagen/inference-and-deployment/sglang-guide
  • <h1> SGLangデプロイ&推論ガイド
    • <h3> 💻SGLangのインストール [#sglangnoinsutru]
    • <h3> 🐛SGLangインストール問題のデバッグ [#sglanginsutrunodebaggu]
    • <h3> 🚚SGLangモデルのデプロイ [#sglangmoderunodepuroi]
    • <h3> 🦥SGLangでのUnslothファインチューンのデプロイ [#sglangdenounslothfainchnnodepuroi]
    • <h3> 🚃gpt-oss-20b: Unsloth & SGLang デプロイガイド [#gpt-oss-20b-unsloth-sglang-depuroigaido]
    • <h3> 💎FP8 オンライン量子化 [#fp8-onrain]
    • <h3> ⚡SGLangのベンチマーク [#sglangnobenchimku]
    • <h3> 🏃SGLang インタラクティブオフラインモード [#sglang-intarakutibuofurainmdo]
    • <h3> 🎇SGLangにおけるGGUF [#sglangniokerugguf]
    • <h3> 🎬SGLangによる高スループットなGGUF提供 [#sglangniyorusurputtonagguf]
Count: 11 | Errors: 10 | /docs/jp/ji-ben/inference-and-deployment/sglang-guide
  • <h1> Quantization-Aware Training (QAT)
    • <h3> 📚Quantization [#quantization]
    • <h3> 🔥Smarter Quantization [#smarter-quantization]
    • <h3> 🔍Quantization-Aware Training [#quantization-aware-training]
    • <h3> ✨QAT + LoRA finetuning [#qat--lora-finetuning]
    • <h3> 🫖Exporting QAT models [#exporting-qat-models]
    • <h3> 🫖Quantizing models without training [#quantizing-models-without-training]
    • <h3> 📱ExecuTorch - QAT for mobile deployment [#executorch-qat-for-mobile-deployment]
    • <h3> 🌻How to enable QAT [#how-to-enable-qat]
    • <h3> 💁Acknowledgements [#acknowledgements]
Count: 10 | Errors: 9 | /docs/blog/quantization-aware-training-qat
  • <h1> Entraînement sensible à la quantification (QAT)
    • <h3> 📚Quantification [#quantification]
    • <h3> 🔥Quantification plus intelligente [#quantification-plus-intelligente]
    • <h3> 🔍Entraînement sensible à la quantification [#entrainement-sensible-a-la-quantification]
    • <h3> ✨QAT + fine-tuning LoRA [#qat--fine-tuning-lora]
    • <h3> 🫖Exportation des modèles QAT [#exportation-des-modeles-qat]
    • <h3> 🫖Quantifier des modèles sans entraînement [#quantifier-des-modeles-sans-entrainement]
    • <h3> 📱ExecuTorch - QAT pour le déploiement mobile [#executorch-qat-pour-le-deploiement-mobile]
    • <h3> 🌻Comment activer le QAT [#comment-activer-le-qat]
    • <h3> 💁Remerciements [#remerciements]
Count: 10 | Errors: 9 | /docs/fr/blog/quantization-aware-training-qat
  • <h1> 量化感知训练(QAT)
    • <h3> 📚量化 [#liang-hua]
    • <h3> 🔥更智能的量化 [#geng-zhi-neng-de-liang-hua]
    • <h3> 🔍感知量化训练 [#gan-zhi-liang-hua-xun-lian]
    • <h3> ✨QAT + LoRA 微调 [#qat--lora-wei-tiao]
    • <h3> 🫖导出 QAT 模型 [#dao-chu-qat-mo-xing]
    • <h3> 🫖在不训练的情况下量化模型 [#zai-bu-xun-lian-de-qing-kuang-xia-liang-hua-mo-xing]
    • <h3> 📱ExecuTorch - 面向移动部署的 QAT [#executorch-mian-xiang-yi-dong-bu-shu-de-qat]
    • <h3> 🌻如何启用 QAT [#ru-he-qi-yong-qat]
    • <h3> 💁致谢 [#zhi-xie]
Count: 10 | Errors: 9 | /docs/zh/bo-ke/quantization-aware-training-qat
  • <h1> Quantization-Aware Training (QAT)
    • <h3> 📚Quantisierung [#quantisierung]
    • <h3> 🔥Intelligentere Quantisierung [#intelligentere-quantisierung]
    • <h3> 🔍Quantization-Aware Training [#quantization-aware-training]
    • <h3> ✨QAT + LoRA-Finetuning [#qat--lora-finetuning]
    • <h3> 🫖Exportieren von QAT-Modellen [#exportieren-von-qat-modellen]
    • <h3> 🫖Modelle quantisieren ohne Training [#modelle-quantisieren-ohne-training]
    • <h3> 📱ExecuTorch - QAT für mobile Bereitstellung [#executorch-qat-fur-mobile-bereitstellung]
    • <h3> 🌻Wie man QAT aktiviert [#wie-man-qat-aktiviert]
    • <h3> 💁Danksagungen [#danksagungen]
Count: 10 | Errors: 9 | /docs/de/blog/quantization-aware-training-qat
  • <h1> 量子化対応学習(QAT)
    • <h3> 📚量子化 [#liang-zi-hua]
    • <h3> 🔥賢い量子化 [#i]
    • <h3> 🔍量子化対応トレーニング [#torningu]
    • <h3> ✨QAT + LoRA ファインチューニング [#qat-lora-fainchningu]
    • <h3> 🫖QATモデルのエクスポート [#qatmoderunoekusupto]
    • <h3> 🫖トレーニングなしでのモデル量子化 [#torningunashidenomoderu]
    • <h3> 📱ExecuTorch - モバイル展開向けのQAT [#executorch-mobairukenoqat]
    • <h3> 🌻QATを有効にする方法 [#qatwonisuru]
    • <h3> 💁謝辞 [#xie-ci]
Count: 10 | Errors: 9 | /docs/jp/burogu/quantization-aware-training-qat
  • <h1> ⚡3x Faster LLM Training with Unsloth Kernels + Packing
    • <h3> 🥁Fused QK RoPE Triton Kernel with packing [#fused-qk-rope-triton-kernel-with-packing]
    • <h3> 🚃Int64 Indexing for Triton Kernels [#int64-indexing-for-triton-kernels]
    • <h3> 🧮Why is padding needed & mathematical speedup [#why-is-padding-needed-and-mathematical-speedup]
    • <h3> 🎬Padding-Free by Default [#padding-free-by-default]
    • <h3> ♠️Uncontaminated Packing 2-5x faster training [#uncontaminated-packing-2-5x-faster-training]
    • <h3> 🏖️Analysis and Benchmarks [#analysis-and-benchmarks]
    • <h3> ✨How to enable packing? [#how-to-enable-packing]
Count: 8 | Errors: 7 | /docs/blog/3x-faster-training-packing
  • <h1> Unsloth Data Recipes
    • <h3> How Data Recipes works [#how-data-recipes-works]
    • <h3> Get Started [#get-started]
    • <h3> What you build in the editor [#what-you-build-in-the-editor]
    • <h3> How references work [#how-references-work]
    • <h3> What happens after? [#what-happens-after]
    • <h3> Core building blocks [#core-building-blocks]
    • <h3> Validate, preview and run [#validate-preview-and-run]
Count: 8 | Errors: 7 | /docs/new/studio/data-recipe
  • <h1> ⚡使用 Unsloth Kernels + Packing 实现 3 倍更快的 LLM 训练
    • <h3> 🥁带有打包的融合 QK RoPE Triton 内核 [#dai-you-da-bao-de-rong-he-qk-rope-triton-nei-he]
    • <h3> 🚃Triton 内核的 Int64 索引 [#triton-nei-he-de-int64-suo-yin]
    • <h3> 🧮为什么需要填充及数学加速原理 [#wei-shen-me-xu-yao-tian-chong-ji-shu-xue-jia-su-yuan-li]
    • <h3> 🎬默认无填充 [#mo-ren-wu-tian-chong]
    • <h3> ♠️未污染打包 2-5x 更快的训练 [#wei-wu-ran-da-bao-25x-geng-kuai-de-xun-lian]
    • <h3> 🏖️分析与基准 [#fen-xi-yu-ji-zhun]
    • <h3> ✨类似地,第 2 张图(上方)绘制了相同运行的损失,这次在 x 轴上以训练步数绘制。注意损失在尺度和趋势上匹配,但打包情况下损失波动更小,因为模型在每个训练步看到更多令牌。 [#lei-si-di-di-2-zhang-tu-shang-fang-hui-zhi-le-xiang-tong-yun-xing-de-sun-shi-zhe-ci-zai-x-zhou-shang]
Count: 8 | Errors: 7 | /docs/zh/bo-ke/3x-faster-training-packing
  • <h1> ⚡Entraînement LLM 3x plus rapide avec les kernels Unsloth + packing
    • <h3> 🥁Noyau Triton RoPE QK fusionné avec empaquetage [#noyau-triton-rope-qk-fusionne-avec-empaquetage]
    • <h3> 🚃Indexation Int64 pour les noyaux Triton [#indexation-int64-pour-les-noyaux-triton]
    • <h3> 🧮Pourquoi le remplissage est-il nécessaire & accélération mathématique [#pourquoi-le-remplissage-est-il-necessaire-and-acceleration-mathematique]
    • <h3> 🎬Sans remplissage par défaut [#sans-remplissage-par-defaut]
    • <h3> ♠️Empaquetage non contaminé : entraînement 2-5x plus rapide [#empaquetage-non-contamine-entrainement-2-5x-plus-rapide]
    • <h3> 🏖️Analyse et benchmarks [#analyse-et-benchmarks]
    • <h3> ✨Comment activer l'empaquetage ? [#comment-activer-lempaquetage]
Count: 8 | Errors: 7 | /docs/fr/blog/3x-faster-training-packing
  • <h1> ⚡3x schnelleres LLM-Training mit Unsloth-Kernels + Packing
    • <h3> 🥁Fusionierter QK RoPE Triton-Kernel mit Packing [#fusionierter-qk-rope-triton-kernel-mit-packing]
    • <h3> 🚃Int64-Indexierung für Triton-Kernel [#int64-indexierung-fur-triton-kernel]
    • <h3> 🧮Warum Padding nötig ist & mathematische Beschleunigung [#warum-padding-notig-ist-and-mathematische-beschleunigung]
    • <h3> 🎬Padding-frei per Voreinstellung [#padding-frei-per-voreinstellung]
    • <h3> ♠️Kontaminationsfreies Packing: 2-5x schnelleres Training [#kontaminationsfreies-packing-2-5x-schnelleres-training]
    • <h3> 🏖️Analyse und Benchmarks [#analyse-und-benchmarks]
    • <h3> ✨Ähnlich zeigt das 2. Diagramm (oben) den Loss der gleichen Läufe, diesmal mit Trainingsschritten auf der x-Achse. Beachten Sie, dass die Verluste in Skalierung und Trend übereinstimmen, aber der Loss im gepackten Fall weniger variabel ist, da das Modell mehr Tokens pro Trainingsschritt sieht. [#ahnlich-zeigt-das-2.-diagramm-oben-den-loss-der-gleichen-laufe-diesmal-mit-trainingsschritten-auf-de]
Count: 8 | Errors: 7 | /docs/de/blog/3x-faster-training-packing
  • <h1> ⚡Unsloth Kernels + PackingでLLM学習を3倍高速化
    • <h3> 🥁パッキング対応の融合QK RoPE Tritonカーネル [#pakkingunoqk-rope-tritonkneru]
    • <h3> 🚃Tritonカーネルのint64インデックス化 [#tritonknerunoint64indekkusu]
    • <h3> 🧮なぜパディングが必要か&数学的な速度向上 [#nazepadingugakana]
    • <h3> 🎬デフォルトでパディング不要 [#deforutodepadingu]
    • <h3> ♠️汚染のないパッキングで2~5倍高速な学習 [#nonaipakkingude25na]
    • <h3> 🏖️分析とベンチマーク [#tobenchimku]
    • <h3> ✨パッキングを有効にするには? [#pakkinguwonisuruniha]
Count: 8 | Errors: 7 | /docs/jp/burogu/3x-faster-training-packing
  • <h1> Unsloth Data Recipes
    • <h3> Comment fonctionnent les Data Recipes [#comment-fonctionnent-les-data-recipes]
    • <h3> Commencer [#commencer]
    • <h3> Ce que vous construisez dans l’éditeur [#ce-que-vous-construisez-dans-lediteur]
    • <h3> Comment fonctionnent les références [#comment-fonctionnent-les-references]
    • <h3> Que se passe-t-il ensuite ? [#que-se-passe-t-il-ensuite]
    • <h3> Blocs de construction essentiels [#blocs-de-construction-essentiels]
    • <h3> Valider, prévisualiser et exécuter [#valider-previsualiser-et-executer]
Count: 8 | Errors: 7 | /docs/fr/nouveau/studio/data-recipe
  • <h1> Unsloth Data Recipes
    • <h3> Data Recipes の仕組み [#data-recipes-nomi]
    • <h3> 始める [#meru]
    • <h3> エディタで作るもの [#editaderumono]
    • <h3> 参照の仕組み [#nomi]
    • <h3> その後は? [#sonoha]
    • <h3> 基本構成要素 [#ji-ben-gou-cheng-yao-su]
    • <h3> 検証、プレビュー、実行 [#pureby]
Count: 8 | Errors: 7 | /docs/jp/xin-ji-neng/studio/data-recipe
  • <h1> Unsloth 数据配方
    • <h3> Data Recipes 的工作方式 [#data-recipes-de-gong-zuo-fang-shi]
    • <h3> 开始使用 [#kai-shi-shi-yong]
    • <h3> 你在编辑器中构建什么 [#ni-zai-bian-ji-qi-zhong-gou-jian-shen-me]
    • <h3> 引用如何工作 [#yin-yong-ru-he-gong-zuo]
    • <h3> 之后会发生什么? [#zhi-hou-hui-fa-sheng-shen-me]
    • <h3> 核心构建模块 [#he-xin-gou-jian-mo-kuai]
    • <h3> 验证、预览和运行 [#yan-zheng-yu-lan-he-yun-xing]
Count: 8 | Errors: 7 | /docs/zh/xin-zeng/studio/data-recipe
  • <h1> Unsloth Data Recipes
    • <h3> How Data Recipes work [#so-funktionieren-data-recipes]
    • <h3> Getting started [#erste-schritte]
    • <h3> What you build in the editor [#was-sie-im-editor-erstellen]
    • <h3> How references work [#wie-referenzen-funktionieren]
    • <h3> What happens next? [#was-passiert-danach]
    • <h3> Core building blocks [#kernbausteine]
    • <h3> Validate, preview, and run [#validieren-vorschau-und-ausfuhren]
87/docs/de/neu/studio/data-recipe
  • <h1> 🌠Qwen3-Coder-Next: How to Run Locally
    • <h3> ⚙️ Usage Guide [#usage-guide]
    • <h3> 🖥️ Run Qwen3-Coder-Next [#run-qwen3-coder-next]
    • <h3> 🦙Llama-server serving & deployment [#llama-server-serving-and-deployment]
    • <h3> 👾 OpenAI Codex & Claude Code [#claude-codex]
    • <h3> 🎱 FP8 Qwen3-Coder-Next in vLLM [#fp8-qwen3-coder-next-in-vllm]
    • <h3> 🔧Tool Calling with Qwen3-Coder-Next [#tool-calling-with-qwen3-coder-next]
    • <h2> 📐Benchmarks [#benchmarks]
      • <h3> GGUF Quantization Benchmarks [#gguf-quantization-benchmarks]
      • <h3> Qwen3-Coder-Next Benchmarks [#qwen3-coder-next-benchmarks]
106/docs/models/qwen3-coder-next
  • <h1> 🌀Reinforcement Learning GRPO with 7x Longer Context
    • <h3> 🎉Getting started [#getting-started]
    • <h3> 🔢Flattened sequence length chunking [#flattened-sequence-length-chunking]
    • <h3> 👻Hidden States Chunking [#hidden-states-chunking]
    • <h3> 🌵Offloading activations for log softmax [#offloading-activations-for-log-softmax]
    • <h3> ✨Configuring parameters: [#configuring-parameters]
    • <h3> 📼vLLM for RL [#vllm-for-rl]
76/docs/get-started/reinforcement-learning-rl-guide/grpo-long-context
  • <h1> vLLM Deployment & Inference Guide
    • <h3> 💻Installing vLLM [#installing-vllm]
    • <h3> 🚚Deploying vLLM models [#deploying-vllm-models]
    • <h3> 🚒vLLM Deployment Server Flags, Engine Arguments & Options [#vllm-deployment-server-flags-engine-arguments-and-options]
    • <h3> 🦥Deploying Unsloth finetunes in vLLM [#deploying-unsloth-finetunes-in-vllm]
    • <h3> vLLM Engine Arguments [#undefined]
    • <h3> LoRA Hot Swapping Guide [#undefined-1]
76/docs/basics/inference-and-deployment/vllm-guide
  • <h1> Deploying models to LM Studio
    • <h3> 1) Export to GGUF (from Unsloth) [#id-1-export-to-gguf-from-unsloth]
    • <h3> 2) Import the GGUF into LM Studio [#id-2-import-the-gguf-into-lm-studio]
    • <h3> 3) Load and chat in LM Studio [#id-3-load-and-chat-in-lm-studio]
    • <h3> 4) Serve your fine-tuned model as a local API (OpenAI-compatible) [#id-4-serve-your-fine-tuned-model-as-a-local-api-openai-compatible]
    • <h3> Troubleshooting [#troubleshooting]
    • <h3> More resources [#more-resources]
76/docs/basics/inference-and-deployment/lm-studio
  • <h1> 🌠Qwen3-Coder-Next: How to Run Locally
    • <h3> ⚙️ Usage Guide [#shi-yong-zhi-nan]
    • <h3> 🖥️ Run Qwen3-Coder-Next [#yun-xing-qwen3codernext]
    • <h3> 🦙 Llama-server serving & deployment [#llamaserver-ti-gong-fu-wu-yu-bu-shu]
    • <h3> 👾 OpenAI Codex & Claude Code [#claude-codex]
    • <h3> 🎱 FP8 Qwen3-Coder-Next in vLLM [#vllm-zhong-de-fp8-qwen3codernext]
    • <h3> 🔧Tool Calling with Qwen3-Coder-Next [#shi-yong-qwen3codernext-de-gong-ju-diao-yong]
    • <h2> 📐Benchmarks [#ji-zhun-ce-shi]
      • <h3> GGUF Quantization Benchmarks [#gguf-liang-hua-ji-zhun]
      • <h3> Qwen3-Coder-Next Benchmarks [#qwen3codernext-ji-zhun]
106/docs/zh/mo-xing/qwen3-coder-next
  • <h1> 🌠Qwen3-Coder-Next: How to Run Locally
    • <h3> ⚙️ Usage Guide [#gebrauchsanleitung]
    • <h3> 🖥️ Run Qwen3-Coder-Next [#qwen3-coder-next-ausfuhren]
    • <h3> 🦙 Llama-server serving & deployment [#llama-server-bereitstellung-and-deployment]
    • <h3> 👾 OpenAI Codex & Claude Code [#claude-codex]
    • <h3> 🎱 FP8 Qwen3-Coder-Next in vLLM [#fp8-qwen3-coder-next-in-vllm]
    • <h3> 🔧Tool Calling with Qwen3-Coder-Next [#tool-aufrufe-mit-qwen3-coder-next]
    • <h2> 📐Benchmarks [#benchmarks]
      • <h3> GGUF Quantization Benchmarks [#gguf-quantisierungs-benchmarks]
      • <h3> Qwen3-Coder-Next Benchmarks [#qwen3-coder-next-benchmarks]
106/docs/de/modelle/qwen3-coder-next
  • <h1> 🌠Qwen3-Coder-Next: How to Run Locally
    • <h3> ⚙️ Usage Guide [#gaido]
    • <h3> 🖥️ Run Qwen3-Coder-Next [#qwen3-coder-nextwosuru]
    • <h3> 🦙Llama-server serving & deployment [#llama-servernosbudepuroi]
    • <h3> 👾 OpenAI Codex & Claude Code [#claude-codex]
    • <h3> 🎱 FP8 Qwen3-Coder-Next in vLLM [#vllmdenofp8-qwen3-coder-next]
    • <h3> 🔧Tool Calling with Qwen3-Coder-Next [#qwen3-coder-nextdenotsrubishi]
    • <h2> 📐Benchmarks [#benchimku]
      • <h3> GGUF Quantization Benchmarks [#ggufbenchimku]
      • <h3> Qwen3-Coder-Next (80B) [#qwen3-coder-next-80b]
106/docs/jp/moderu/qwen3-coder-next
  • <h1> 🌠Qwen3-Coder-Next: How to Run Locally
    • <h3> ⚙️ Usage Guide [#guide-dutilisation]
    • <h3> 🖥️ Run Qwen3-Coder-Next [#executer-qwen3-coder-next]
    • <h3> 🦙 Llama-server serving & deployment [#service-and-deploiement-llama-server]
    • <h3> 👾 OpenAI Codex & Claude Code [#claude-codex]
    • <h3> 🎱 FP8 Qwen3-Coder-Next in vLLM [#fp8-qwen3-coder-next-dans-vllm]
    • <h3> 🔧Tool Calling with Qwen3-Coder-Next [#appel-doutils-avec-qwen3-coder-next]
    • <h2> 📐Benchmarks [#benchmarks]
      • <h3> GGUF Quantization Benchmarks [#benchmarks-de-quantification-gguf]
      • <h3> Qwen3-Coder-Next Benchmarks [#benchmarks-de-qwen3-coder-next]
106/docs/fr/modeles/qwen3-coder-next
  • <h1> 🌀Reinforcement Learning GRPO with 7x Longer Context
    • <h3> 🎉Getting started [#ru-men]
    • <h3> 🔢Flattened sequence length chunking [#bian-ping-hua-xu-lie-chang-du-fen-kuai]
    • <h3> 👻Hidden states chunking [#yin-cang-zhuang-tai-fen-kuai]
    • <h3> 🌵Offloading activations for log softmax [#wei-log-softmax-xie-zai-ji-huo]
    • <h3> ✨Configuring parameters: [#pei-zhi-can-shu]
    • <h3> 📼vLLM for RL [#yong-yu-rl-de-vllm]
76/docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-guide/grpo-long-context
  • <h1> 🌀Reinforcement Learning GRPO with 7x Longer Context
    • <h3> 🎉Getting started [#me]
    • <h3> 🔢Flattened sequence length chunking [#saretashkensuchanku]
    • <h3> 👻Hidden states chunking [#renochanku]
    • <h3> 🌵Please refer to. [#wogokudasai]
    • <h3> ✨), backpropagation requires the same amount of GPU memory regardless of whether activations are offloaded. In this case, offloading activations brings no benefit, since it incurs a slight performance hit without reducing memory usage. [#haakutibshongaofurdosareteirukadoukaniwarazujinogpumemoriwotoshimasukonoakutibshonnoofurdohamemoriwo]
    • <h3> 📼indicates how many chunks are made (represented by the number of red brackets). [#gachankusurukawoshimasuinode]
76/docs/jp/meru/reinforcement-learning-rl-guide/grpo-long-context
  • <h1> 🌀Reinforcement Learning GRPO with 7x Longer Context
    • <h3> 🎉and much more [#et-bien-plus]
    • <h3> 🔢For our benchmarks, we compare BF16 GRPO against Hugging Face with all optimizations enabled (all kernels from the kernels library, Flash Attention 3, chunked loss kernels, etc.): [#pour-nos-benchmarks-nous-comparons-bf16-grpo-a-hugging-face-avec-toutes-les-optimisations-activees-t]
    • <h3> 👻We support logit softcapping, temperature scaling, and all other features. [#nous-prenons-en-charge-le-softcapping-des-logits-le-scaling-de-la-temperature-et-toutes-les-autres-f]
    • <h3> 🌵advanced RL documentation [#documentation-rl-avancee]
    • <h3> ✨), the backward pass requires the same amount of GPU memory regardless of whether activations are offloaded. Since offloading activations introduces a slight performance slowdown without reducing memory usage in this case, it brings no benefit. [#la-passe-backward-necessite-la-meme-quantite-de-memoire-sur-le-gpu-independamment-du-fait-que-les-ac]
    • <h3> 📼sequence length chunking (represented by the number of red brackets). [#decoupe-de-la-longueur-de-sequence-represente-par-le-nombre-de-crochets-rouges]
76/docs/fr/commencer/reinforcement-learning-rl-guide/grpo-long-context
  • <h1> 🌀Reinforcement Learning GRPO with 7x Longer Context
    • <h3> 🎉Getting started [#erste-schritte]
    • <h3> 🔢Flattened sequence length chunking [#abgeflachte-sequenzlangen-chunking]
    • <h3> 👻Hidden states chunking [#hidden-states-chunking]
    • <h3> 🌵Offloading activations for log softmax [#auslagern-von-aktivierungen-fur-log-softmax]
    • <h3> ✨Configuring parameters: [#parameter-konfigurieren]
    • <h3> 📼vLLM for RL [#vllm-fur-rl]
76/docs/de/los-gehts/reinforcement-learning-rl-guide/grpo-long-context
  • <h1> vLLM Deployment & Inference Guide
    • <h3> 💻Installing vLLM [#an-zhuang-vllm]
    • <h3> 🚚Deploying vLLM models [#bu-shu-vllm-mo-xing]
    • <h3> 🚒vLLM Deployment Server Flags, Engine Arguments & Options [#vllm-bu-shu-fu-wu-qi-biao-zhi-yin-qing-can-shu-yu-xuan-xiang]
    • <h3> 🦥 Deploying Unsloth finetunes in vLLM [#zai-vllm-zhong-bu-shu-unsloth-wei-tiao]
    • <h3> vLLM Engine Arguments [#undefined]
    • <h3> LoRA Hot Swapping Guide [#undefined-1]
76/docs/zh/ji-chu-zhi-shi/inference-and-deployment/vllm-guide
  • <h1> vLLM Deployment & Inference Guide
    • <h3> 💻Installing vLLM [#installation-de-vllm]
    • <h3> 🚚Deploying vLLM models [#deploiement-des-modeles-vllm]
    • <h3> 🚒vLLM Deployment Server Flags, Engine Arguments & Options [#options-arguments-et-flags-du-serveur-de-deploiement-vllm]
    • <h3> 🦥Deploying Unsloth finetunes in vLLM [#deploiement-des-finetunes-unsloth-dans-vllm]
    • <h3> vLLM Engine Arguments [#undefined]
    • <h3> LoRA Hot Swapping Guide [#undefined-1]
76/docs/fr/notions-de-base/inference-and-deployment/vllm-guide
  • <h1> vLLM Deployment & Inference Guide
    • <h3> 💻Installing vLLM [#vllm-installieren]
    • <h3> 🚚Deploying vLLM models [#vllm-modelle-bereitstellen]
    • <h3> 🚒vLLM Deployment Server Flags, Engine Arguments & Options [#vllm-deployment-server-flags-engine-argumente-and-optionen]
    • <h3> 🦥Deploying Unsloth finetunes in vLLM [#unsloth-finetunes-in-vllm-bereitstellen]
    • <h3> vLLM Engine Arguments [#undefined]
    • <h3> LoRA Hot Swapping Guide [#undefined-1]
76/docs/de/grundlagen/inference-and-deployment/vllm-guide
  • <h1> vLLM Deployment & Inference Guide
    • <h3> 💻Installing vLLM [#vllm-noinsutru]
    • <h3> 🚚Deploying vLLM models [#vllm-moderunodepuroi]
    • <h3> 🚒vLLM Deployment Server Flags, Engine Arguments & Options [#vllm-depuroimentosbnofuraguenjintoopushon]
    • <h3> 🦥Deploying Unsloth finetunes in vLLM [#vllm-deno-unsloth-fainchnnodepuroi]
    • <h3> vLLM Engine Arguments [#undefined]
    • <h3> LoRA Hot Swapping Guide [#undefined-1]
76/docs/jp/ji-ben/inference-and-deployment/vllm-guide
  • <h1> Deploying models to LM Studio
    • <h3> 1) Export to GGUF (from Unsloth) [#id-1-exportieren-nach-gguf-aus-unsloth]
    • <h3> 2) Import the GGUF into LM Studio [#id-2-importieren-sie-die-gguf-in-lm-studio]
    • <h3> 3) Load and chat in LM Studio [#id-3-laden-und-chatten-in-lm-studio]
    • <h3> 4) Serve your fine-tuned model as a local API (OpenAI-compatible) [#id-4-stellen-sie-ihr-feinabgestimmtes-modell-als-lokale-api-bereit-openai-kompatibel]
    • <h3> Troubleshooting [#fehlerbehebung]
    • <h3> More resources [#mehr-ressourcen]
76/docs/de/grundlagen/inference-and-deployment/lm-studio
  • <h1> Deploying models to LM Studio
    • <h3> 1) Export to GGUF (from Unsloth) [#id-1-dao-chu-wei-gguf-cong-unsloth]
    • <h3> 2) Import the GGUF into LM Studio [#id-2-jiang-gguf-dao-ru-dao-lm-studio]
    • <h3> 3) Load and chat in LM Studio [#id-3-zai-lm-studio-zhong-jia-zai-bing-liao-tian]
    • <h3> 4) Serve your fine-tuned model as a local API (OpenAI-compatible) [#id-4-jiang-nin-wei-tiao-de-mo-xing-zuo-wei-ben-di-api-ti-gong-fu-wu-jian-rong-openai]
    • <h3> Troubleshooting [#gu-zhang-pai-chu]
    • <h3> More resources [#geng-duo-zi-yuan]
76/docs/zh/ji-chu-zhi-shi/inference-and-deployment/lm-studio
  • <h1> Deploying models to LM Studio
    • <h3> 1) Export to GGUF (from Unsloth) [#id-1-exporter-en-gguf-depuis-unsloth]
    • <h3> 2) Import the GGUF into LM Studio [#id-2-importer-le-gguf-dans-lm-studio]
    • <h3> 3) Load and chat in LM Studio [#id-3-charger-et-chatter-dans-lm-studio]
    • <h3> 4) Serve your fine-tuned model as a local API (OpenAI-compatible) [#id-4-servir-votre-modele-affine-en-tant-quapi-locale-compatible-openai]
    • <h3> Troubleshooting [#depannage]
    • <h3> More resources [#plus-de-ressources]
76/docs/fr/notions-de-base/inference-and-deployment/lm-studio
  • <h1> Deploying models to LM Studio
    • <h3> 1) Export to GGUF (from Unsloth) [#id-1-ggufniekusuptounslothkara]
    • <h3> 2) Import the GGUF into LM Studio [#id-2-ggufwolm-studioniinpto]
    • <h3> 3) Load and chat in LM Studio [#id-3-lm-studiodemimichattosuru]
    • <h3> 4) Serve your fine-tuned model as a local API (OpenAI-compatible) [#id-4-fainchningushitamoderuworkaruapiopenaitoshitesuru]
    • <h3> Troubleshooting [#toraburushtingu]
    • <h3> More resources [#sarani]
76/docs/jp/ji-ben/inference-and-deployment/lm-studio
  • <h1> Export models with Unsloth Studio
    • <h3> Select Training Run [#select-training-run]
    • <h3> Select Checkpoint [#select-checkpoint]
    • <h3> Export Methods [#export-methods]
    • <h3> Export / Save Locally [#export-save-locally]
    • <h3> Push to Hub [#push-to-hub]
65/docs/new/studio/export
  • <h1> 🔎Fine-tuning Embedding Models with Unsloth Guide
    • <h3> 🦥 Unsloth Features [#unsloth-features]
    • <h3> 🛠️ Fine-tuning Workflow [#fine-tuning-workflow]
    • <h3> ✅ Inference and Deploy Anywhere! [#docs-internal-guid-c10bfa80-7fff-446e-714d-732eebcd72d6]
    • <h3> 📊 Unsloth Benchmarks [#unsloth-benchmarks]
    • <h3> 🔮 Model Support [#model-support]
65/docs/new/embedding-finetuning
  • <h1> 🔊Text-to-Speech (TTS) Fine-tuning Guide
    • <h3> Fine-tuning Notebooks: [#fine-tuning-notebooks]
    • <h3> Choosing and Loading a TTS Model [#choosing-and-loading-a-tts-model]
    • <h3> Preparing Your Dataset [#preparing-your-dataset]
    • <h3> Fine-Tuning TTS with Unsloth [#fine-tuning-tts-with-unsloth]
    • <h3> Fine-tuning Voice models vs. Zero-shot voice cloning [#fine-tuning-voice-models-vs.-zero-shot-voice-cloning]
65/docs/basics/text-to-speech-tts-fine-tuning
  • <h1> Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth
    • <h3> Pip install [#pip-install]
    • <h3> Docker [#docker]
    • <h3> uv [#uv]
    • <h3> Conda or mamba (Advanced) [#conda-or-mamba-advanced]
    • <h3> WSL-Specific Notes [#wsl-specific-notes]
65/docs/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
  • <h1> Tool Calling Guide for Local LLMs
    • <h3> 🔨Tool Calling Setup [#tool-calling-setup]
    • <h3> Writing a story: [#writing-a-story]
    • <h3> Mathematical operations: [#mathematical-operations]
    • <h3> Execute generated Python code [#execute-generated-python-code]
    • <h3> Execute arbitrary terminal functions [#execute-arbitrary-terminal-functions]
    • <h2> 🌠 Qwen3-Coder-Next Tool Calling [#qwen3-coder-next-tool-calling]
    • <h2> ⚡ GLM-4.7-Flash + GLM 4.7 Calling [#glm-4.7-flash--glm-4.7-calling]
      • <h3> 📙 Devstral 2 Tool Calling [#devstral-2-tool-calling]
95/docs/basics/tool-calling-guide-for-local-llms
  • <h1> 👁️Vision Fine-tuning
    • <h3> Disabling Vision / Text-only fine-tuning [#disabling-vision-text-only-fine-tuning]
    • <h3> Vision Data Collator [#vision-data-collator]
    • <h3> Multi-image training [#multi-image-training]
    • <h3> Dataset for Vision Fine-tuning [#dataset-for-vision-fine-tuning]
    • <h3> 🔎Training on assistant responses only for vision models, VLMs [#training-on-assistant-responses-only-for-vision-models-vlms]
65/docs/basics/vision-fine-tuning
  • <h1> 🧩NVIDIA Nemotron-3-Super: How To Run Guide
    • <h3> ⚙️ Usage Guide [#usage-guide]
    • <h3> 🖥️ Run Nemotron-3-Super-120B-A12B [#run-nemotron-3-super-120b-a12b]
    • <h3> 🦥 Fine-tuning Nemotron 3 and RL [#fine-tuning-nemotron-3-and-rl]
    • <h3> 🦙Llama-server serving & deployment [#llama-server-serving-and-deployment]
    • <h3> Benchmarks [#benchmarks]
65/docs/models/nemotron-3/nemotron-3-super
  • <h1> Install Unsloth via Docker
    • <h3> ⚡ Quickstart [#quickstart]
    • <h3> 📖 Usage Example [#usage-example]
    • <h3> 🦥Why Unsloth Containers? [#why-unsloth-containers]
    • <h3> ⚙️ Advanced Settings [#advanced-settings]
    • <h3> 🔒 Security Notes [#security-notes]
65/docs/get-started/install/docker
  • <h1> Saving to Ollama
    • <h3> Saving on Google Colab [#saving-on-google-colab]
    • <h3> Exporting to Ollama [#exporting-to-ollama]
    • <h3> Automatic Modelfile creation [#automatic-modelfile-creation]
    • <h3> Ollama Inference [#ollama-inference]
    • <h3> Running in Unsloth works well, but after exporting & running on Ollama, the results are poor [#running-in-unsloth-works-well-but-after-exporting-and-running-on-ollama-the-results-are-poor]
65/docs/basics/inference-and-deployment/saving-to-ollama
  • <h1> Qwen3.5 Fine-tuning Guide
    • <h3> MoE fine-tuning (35B, 122B) [#moe-fine-tuning-35b-122b]
    • <h3> Quickstart [#quickstart]
    • <h3> Vision fine-tuning [#vision-fine-tuning]
    • <h3> Reinforcement Learning (RL) [#reinforcement-learning-rl]
    • <h3> Saving / export fine-tuned model [#saving-export-fine-tuned-model]
65/docs/models/qwen3.5/fine-tune
  • <h1> Export models with Unsloth Studio
    • <h3> Select Training Run [#xuan-ze-xun-lian-yun-xing]
    • <h3> Select Checkpoint [#xuan-ze-jian-cha-dian]
    • <h3> Export Methods [#dao-chu-fang-fa]
    • <h3> Export / Save Locally [#dao-chu-ben-di-bao-cun]
    • <h3> Push to Hub [#tui-song-dao-hub]
65/docs/zh/xin-zeng/studio/export
  • <h1> Export models with Unsloth Studio
    • <h3> Select Training Run [#trainingslauf-auswahlen]
    • <h3> Select Checkpoint [#checkpoint-auswahlen]
    • <h3> Export Methods [#exportmethoden]
    • <h3> Export / Save Locally [#exportieren-lokal-speichern]
    • <h3> Push to Hub [#zum-hub-hochladen]
65/docs/de/neu/studio/export
  • <h1> Export models with Unsloth Studio
    • <h3> Select Training Run [#selectionner-une-execution-dentrainement]
    • <h3> Select Checkpoint [#selectionner-un-checkpoint]
    • <h3> Export Methods [#methodes-dexportation]
    • <h3> Export / Save Locally [#exporter-enregistrer-localement]
    • <h3> Push to Hub [#publier-sur-le-hub]
65/docs/fr/nouveau/studio/export
  • <h1> Export models with Unsloth Studio
    • <h3> Select Training Run [#wo]
    • <h3> Select Checkpoint [#chekkupointowo]
    • <h3> Export Methods [#ekusupto]
    • <h3> Export / Save Locally [#ekusupto-rkaruni]
    • <h3> Push to Hub [#hub-nipusshu]
65/docs/jp/xin-ji-neng/studio/export
  • <h1> Tool Calling Guide for Local LLMs
    • <h3> 🔨Tool Calling Setup [#configuration-de-tool-calling]
    • <h3> Writing a story: [#ecrire-une-histoire]
    • <h3> Mathematical operations: [#operations-mathematiques]
    • <h3> Execute generated Python code [#executer-du-code-python-genere]
    • <h3> Execute arbitrary terminal functions [#executer-des-fonctions-arbitraires-du-terminal]
    • <h2> 🌠 Qwen3-Coder-Next Tool Calling [#appel-doutil-qwen3-coder-next]
    • <h2> ⚡ GLM-4.7-Flash + GLM 4.7 Calling [#glm-4.7-flash--appel-glm-4.7]
      • <h3> 📙 Devstral 2 Tool Calling [#appel-doutil-devstral-2]
95/docs/fr/notions-de-base/tool-calling-guide-for-local-llms
  • <h1> Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth
    • <h3> Pip install [#installation-via-pip]
    • <h3> Docker [#docker]
    • <h3> uv [#uv]
    • <h3> Conda or mamba (Advanced) [#conda-ou-mamba-avance]
    • <h3> WSL-Specific Notes [#remarques-specifiques-a-wsl]
65/docs/fr/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
  • <h1> 👁️Vision Fine-tuning
    • <h3> Disabling Vision / Text-only fine-tuning [#desactivation-de-la-vision-affinage-texte-uniquement]
    • <h3> Vision Data Collator [#vision-data-collator]
    • <h3> Multi-image training [#entrainement-multi-image]
    • <h3> Dataset for Vision Fine-tuning [#jeu-de-donnees-pour-laffinage-en-vision]
    • <h3> 🔎Training on assistant responses only for vision models, VLMs [#entrainer-uniquement-sur-les-reponses-de-lassistant-pour-les-modeles-de-vision-vlms]
65/docs/fr/notions-de-base/vision-fine-tuning
  • <h1> 🔎Fine-tuning Embedding Models with Unsloth Guide
    • <h3> 🦥 Unsloth Features [#fonctionnalites-dunsloth]
    • <h3> 🛠️ Fine-tuning Workflow [#flux-de-travail-daffinage]
    • <h3> ✅ Inference and Deploy Anywhere! [#docs-internal-guid-c10bfa80-7fff-446e-714d-732eebcd72d6]
    • <h3> 📊 Unsloth Benchmarks [#benchmarks-unsloth]
    • <h3> 🔮 Model Support [#support-de-modeles]
65/docs/fr/nouveau/embedding-finetuning
  • <h1> 🔊Text-to-Speech (TTS) Fine-tuning Guide
    • <h3> Fine-tuning Notebooks: [#notebooks-daffinage]
    • <h3> Choosing and Loading a TTS Model [#choisir-et-charger-un-modele-tts]
    • <h3> no separate vocoder needed – Orpheus outputs audio tokens directly, which can be decoded into a waveform. [#pas-besoin-dun-vocodeur-separe-orpheus-produira-directement-des-tokens-audio-qui-peuvent-etre-decode]
    • <h3> (Optional) If multi-speaker, you could include a speaker ID token in the text or use a separate speaker-embedding approach, but that is beyond this basic guide (Elise is single-speaker). [#optionnel-si-multi-locuteur-vous-pourriez-inclure-un-token-did-locuteur-dans-le-texte-ou-utiliser-un]
    • <h3> Fine-tuning voice models vs. zero-shot voice cloning [#affinage-des-modeles-vocaux-vs-clonage-vocal-zero-shot]
65/docs/fr/notions-de-base/text-to-speech-tts-fine-tuning
  • <h1> 🔎Fine-tuning Embedding Models with Unsloth Guide
    • <h3> 🦥 Unsloth Features [#unsloth-funktionen]
    • <h3> 🛠️ Fine-tuning Workflow [#fine-tuning-workflow]
    • <h3> ✅ Inference and Deploy Anywhere! [#docs-internal-guid-c10bfa80-7fff-446e-714d-732eebcd72d6]
    • <h3> 📊 Unsloth Benchmarks [#unsloth-benchmarks]
    • <h3> 🔮 Model Support [#modellunterstutzung]
65/docs/de/neu/embedding-finetuning
  • <h1> 🔎Fine-tuning Embedding Models with Unsloth Guide
    • <h3> 🦥 Unsloth Features [#unslothno]
    • <h3> 🛠️ Fine-tuning Workflow [#fainchninguwkufur]
    • <h3> ✅ Inference and Deploy Anywhere! [#docs-internal-guid-c10bfa80-7fff-446e-714d-732eebcd72d6]
    • <h3> 📊 Unsloth Benchmarks [#unslothbenchimku]
    • <h3> 🔮 Model Support [#moderusapto]
65/docs/jp/xin-ji-neng/embedding-finetuning
  • <h1> 🔎Fine-tuning Embedding Models with Unsloth Guide
    • <h3> 🦥 Unsloth Features [#unsloth-gong-neng]
    • <h3> 🛠️ Fine-tuning Workflow [#wei-tiao-gong-zuo-liu]
    • <h3> and [#docs-internal-guid-c10bfa80-7fff-446e-714d-732eebcd72d6]
    • <h3> similarity = model.similarity(query_embedding, document_embedding) [#similarity-model.similarity-query_embedding-document_embedding]
    • <h3> For 16-bit LoRA, Unsloth is 1.2x to 3.3x faster: [#dui-yu-16bit-loraunsloth-kuai-1.2x-dao-3.3x]
65/docs/zh/xin-zeng/embedding-finetuning
  • <h1> 🔊Text-to-Speech (TTS) Fine-tuning Guide
    • <h3> Fine-tuning Notebooks: [#wei-tiao-bi-ji-ben]
    • <h3> Choosing and Loading a TTS Model [#xuan-ze-bing-jia-zai-tts-mo-xing]
    • <h3> Preparing Your Dataset [#zhun-bei-nin-de-shu-ju-ji]
    • <h3> Fine-Tuning TTS with Unsloth [#shi-yong-unsloth-wei-tiao-tts]
    • <h3> Fine-tuning Voice models vs. Zero-shot voice cloning [#wei-tiao-yu-yin-mo-xing-vs.-ling-yang-ben-yu-yin-ke-long]
65/docs/zh/ji-chu-zhi-shi/text-to-speech-tts-fine-tuning
  • <h1> 🔊Text-to-Speech (TTS) Fine-tuning Guide
    • <h3> Fine-tuning Notebooks: [#notebooks-zur-feinabstimmung]
    • <h3> Choosing and Loading a TTS Model [#auswahl-und-laden-eines-tts-modells]
    • <h3> Preparing Your Dataset [#vorbereitung-ihres-datensatzes]
    • <h3> Fine-Tuning TTS with Unsloth [#feinabstimmung-von-tts-mit-unsloth]
    • <h3> Fine-tuning Voice models vs. Zero-shot voice cloning [#feinabstimmung-von-stimmmodellen-vs.-zero-shot-stimmenklonen]
65/docs/de/grundlagen/text-to-speech-tts-fine-tuning
  • <h1> 🔊Text-to-Speech (TTS) Fine-tuning Guide
    • <h3> Fine-tuning Notebooks: [#fainchninguntobukku]
    • <h3> Choosing and Loading a TTS Model [#ttsmoderunotomimi]
    • <h3> Preparing Your Dataset [#dtasettono]
    • <h3> Fine-Tuning TTS with Unsloth [#unslothdenottsfainchningu]
    • <h3> Fine-tuning Voice models vs. Zero-shot voice cloning [#boisumoderunofainchningu-vs-zeroshottokurn]
65/docs/jp/ji-ben/text-to-speech-tts-fine-tuning
  • <h1> Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth
    • <h3> Pip install [#pip-an-zhuang]
    • <h3> Docker [#docker]
    • <h3> uv [#uv]
    • <h3> Conda or mamba (Advanced) [#conda-huo-mamba-gao-ji]
    • <h3> WSL-Specific Notes [#wsl-te-ding-zhu-yi-shi-xiang]
65/docs/zh/bo-ke/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
  • <h1> Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth
    • <h3> Pip install [#pip-installieren]
    • <h3> Docker [#docker]
    • <h3> uv [#uv]
    • <h3> Install any Transformers version, preferably the latest. [#installieren-sie-beliebige-transformers-versionen-am-besten-die-neueste]
    • <h3> Transformers [#transformers]
65/docs/de/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
  • <h1> Fine-tuning LLMs with Blackwell, RTX 50 series & Unsloth
    • <h3> Pip install [#pipinsutru]
    • <h3> Docker [#docker]
    • <h3> uv [#uv]
    • <h3> Conda or mamba (Advanced) [#condamatahamambake]
    • <h3> WSL-Specific Notes [#wslno]
65/docs/jp/burogu/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
  • <h1> Tool Calling Guide for Local LLMs
    • <h3> 🔨Tool Calling Setup [#gong-ju-diao-yong-she-zhi]
    • <h3> Writing a story: [#xie-gu-shi]
    • <h3> Mathematical operations: [#shu-xue-yun-suan]
    • <h3> Execute generated Python code [#zhi-xing-sheng-cheng-de-python-dai-ma]
    • <h3> Execute arbitrary terminal functions [#zhi-xing-ren-yi-zhong-duan-han-shu]
    • <h2> 🌠 Qwen3-Coder-Next Tool Calling [#qwen3codernext-gong-ju-diao-yong]
    • <h2> ⚡ GLM-4.7-Flash + GLM 4.7 Calling [#glm4.7flash--glm-4.7-diao-yong]
      • <h3> 📙 Devstral 2 Tool Calling [#devstral-2-gong-ju-diao-yong]
95/docs/zh/ji-chu-zhi-shi/tool-calling-guide-for-local-llms
  • <h1> Tool Calling Guide for Local LLMs
    • <h3> 🔨Tool Calling Setup [#tsrubishinosettoappu]
    • <h3> Writing a story: [#woku]
    • <h3> Mathematical operations: [#shu-xue-de-yan-suan]
    • <h3> Execute generated Python code [#saretapythonkdowosuru]
    • <h3> Execute arbitrary terminal functions [#notminaruwosuru]
    • <h2> 🌠 Qwen3-Coder-Next Tool Calling [#qwen3-coder-nextnotsrubishi]
    • <h2> ⚡ GLM-4.7-Flash + GLM 4.7 Calling [#glm-47-flash-glm-47nobishi]
      • <h3> 📙 Devstral 2 Tool Calling [#devstral-2notsrubishi]
95/docs/jp/ji-ben/tool-calling-guide-for-local-llms
  • <h1> Tool Calling Guide for Local LLMs
    • <h3> 🔨Tool Calling Setup [#tool-calling-einrichtung]
    • <h3> Writing a story: [#eine-geschichte-schreiben]
    • <h3> Mathematical operations: [#mathematische-operationen]
    • <h3> Execute generated Python code [#generierten-python-code-ausfuhren]
    • <h3> Execute arbitrary terminal functions [#beliebige-terminal-funktionen-ausfuhren]
    • <h2> 🌠 Qwen3-Coder-Next Tool Calling [#qwen3-coder-next-tool-calling]
    • <h2> ⚡ GLM-4.7-Flash + GLM 4.7 Calling [#glm-4.7-flash--glm-4.7-calling]
      • <h3> 📙 Devstral 2 Tool Calling [#devstral-2-tool-calling]
95/docs/de/grundlagen/tool-calling-guide-for-local-llms
  • <h1> 👁️Vision Fine-tuning
    • <h3> Disabling Vision / Text-only fine-tuning [#jin-yong-shi-jue-jin-wen-ben-wei-tiao]
    • <h3> Vision Data Collator [#shi-jue-shu-ju-zheng-li-qi-data-collator]
    • <h3> Multi-image training [#duo-tu-xiang-xun-lian]
    • <h3> Dataset for Vision Fine-tuning [#shi-jue-wei-tiao-de-shu-ju-ji]
    • <h3> 🔎Training on assistant responses only for vision models, VLMs [#jin-dui-shi-jue-mo-xing-vlms-xun-lian-zhu-li-hui-fu]
65/docs/zh/ji-chu-zhi-shi/vision-fine-tuning
  • <h1> 👁️Vision Fine-tuning
    • <h3> Disabling Vision / Text-only fine-tuning [#deaktivieren-von-vision-nur-text-feinabstimmung]
    • <h3> Vision Data Collator [#vision-data-collator]
    • <h3> Multi-image training [#training-mit-mehreren-bildern]
    • <h3> Dataset for Vision Fine-tuning [#datensatz-fur-vision-feinabstimmung]
    • <h3> 🔎Training on assistant responses only for vision models, VLMs [#training-nur-auf-assistentenantworten-fur-vision-modelle-vlms]
65/docs/de/grundlagen/vision-fine-tuning
  • <h1> 👁️Vision Fine-tuning
    • <h3> Disabling Vision / Text-only fine-tuning [#bijontekisutonominofainchninguno]
    • <h3> Vision Data Collator [#bijondtakorta]
    • <h3> Multi-image training [#maruchitorningu]
    • <h3> Dataset for Vision Fine-tuning [#bijonfainchningudtasetto]
    • <h3> 🔎Training on assistant responses only for vision models, VLMs [#bijonmoderuvlmniokeruashisutantonomidenotorningu]
65/docs/jp/ji-ben/vision-fine-tuning
  • <h1> 🧩NVIDIA Nemotron-3-Super: How To Run Guide
    • <h3> ⚙️ Usage Guide [#gaido]
    • <h3> 🖥️ Run Nemotron-3-Super-120B-A12B [#nemotron-3-super-120b-a12b-wosuru]
    • <h3> 🦥 Fine-tuning Nemotron 3 and RL [#nemotron-3-notorl]
    • <h3> Please refer to. [#woshitekudasai]
    • <h3> 2 + 2 is **4**. [#id-2-2-ha-4-desu]
65/docs/jp/moderu/nemotron-3/nemotron-3-super
  • <h1> 🧩NVIDIA Nemotron-3-Super: How To Run Guide
    • <h3> ⚙️ Usage Guide [#guide-dutilisation]
    • <h3> 🖥️ Run Nemotron-3-Super-120B-A12B [#lancer-nemotron-3-super-120b-a12b]
    • <h3> 🦥 Fine-tuning Nemotron 3 and RL [#affinage-de-nemotron-3-et-rl]
    • <h3> 🦙Llama-server serving & deployment [#service-and-deploiement-llama-server]
    • <h3> Benchmarks [#benchmarks]
65/docs/fr/modeles/nemotron-3/nemotron-3-super
  • <h1> 🧩NVIDIA Nemotron-3-Super: How To Run Guide
    • <h3> ⚙️ Usage Guide [#shi-yong-zhi-nan]
    • <h3> 🖥️ Run Nemotron-3-Super-120B-A12B [#yun-xing-nemotron3super120ba12b]
    • <h3> 🦥 Fine-tuning Nemotron 3 and RL [#dui-nemotron-3-de-wei-tiao-he-qiang-hua-xue-xi-rl]
    • <h3> 🦙Llama-server serving & deployment [#llamaserver-ti-gong-fu-wu-yu-bu-shu]
    • <h3> Benchmarks [#ji-zhun-ce-shi]
65/docs/zh/mo-xing/nemotron-3/nemotron-3-super
  • <h1> 🧩NVIDIA Nemotron-3-Super: Run Guide
    • <h3> ⚙️ Usage guide [#gebrauchsanleitung]
    • <h3> 🖥️ Run Nemotron-3-Super-120B-A12B [#nemotron-3-super-120b-a12b-ausfuhren]
    • <h3> 🦥 Fine-tuning Nemotron 3 and RL [#feinabstimmung-von-nemotron-3-und-rl]
    • <h3> 🦙Llama-server serving & deployment [#llama-server-bereitstellung-and-deployment]
    • <h3> Benchmarks [#benchmarks]
65/docs/de/modelle/nemotron-3/nemotron-3-super
  • <h1> Install Unsloth via Docker
    • <h3> ⚡ Quickstart [#demarrage-rapide]
    • <h3> 📖 Usage example [#exemple-dutilisation]
    • <h3> 🦥Why Unsloth containers? [#pourquoi-les-conteneurs-unsloth]
    • <h3> ⚙️ Advanced settings [#parametres-avances]
    • <h3> 🔒 Security notes [#remarques-de-securite]
65/docs/fr/commencer/install/docker
  • <h1> Install Unsloth via Docker
    • <h3> ⚡ Quickstart [#schnellstart]
    • <h3> 📖 Usage example [#verwendungsbeispiel]
    • <h3> 🦥Why Unsloth containers? [#warum-unsloth-container]
    • <h3> ⚙️ Advanced settings [#erweiterte-einstellungen]
    • <h3> 🔒 Security notes [#sicherheitshinweise]
65/docs/de/los-gehts/install/docker
  • <h1> Install Unsloth via Docker
    • <h3> ⚡ Quickstart [#kuai-su-shang-shou]
    • <h3> 📖 Usage example [#shi-yong-shi-li]
    • <h3> 🦥 Why Unsloth containers? [#wei-shen-me-xuan-ze-unsloth-rong-qi]
    • <h3> ⚙️ Advanced settings [#gao-ji-she-zhi]
    • <h3> 🔒 Security notes [#an-quan-shuo-ming]
65/docs/zh/kai-shi-shi-yong/install/docker
  • <h1> Install Unsloth via Docker
    • <h3> ⚡ Quickstart [#kuikkusutto]
    • <h3> 📖 Usage example [#shi-yong-li]
    • <h3> 🦥 Why Unsloth containers? [#nazeunslothkontenananoka]
    • <h3> ⚙️ Advanced settings [#na-1]
    • <h3> 🔒 Security notes [#sekyuriti]
65/docs/jp/meru/install/docker
  • <h1> Saving to Ollama
    • <h3> Saving on Google Colab [#enregistrement-sur-google-colab]
    • <h3> Exporting to Ollama [#exportation-vers-ollama]
    • <h3> Automatic model file creation [#automatique-fichier-de-modele-creation]
    • <h3> Ollama inference [#inference-ollama]
    • <h3> Works well in Unsloth, but after exporting and running on Ollama, the results are poor [#fonctionne-bien-dans-unsloth-mais-apres-exportation-et-execution-sur-ollama-les-resultats-sont-medio]
65/docs/fr/notions-de-base/inference-and-deployment/saving-to-ollama
  • <h1> Saving to Ollama
    • <h3> Saving in Google Colab [#speichern-in-google-colab]
    • <h3> Export to Ollama [#export-nach-ollama]
    • <h3> Automatic model file creation [#automatisch-modell-datei-erstellung]
    • <h3> Ollama inference [#ollama-inference]
    • <h3> Running in Unsloth works well, but after exporting & running on Ollama, the results are poor [#das-ausfuhren-in-unsloth-funktioniert-gut-aber-nach-dem-export-and-ausfuhren-auf-ollama-sind-die-erg]
65/docs/de/grundlagen/inference-and-deployment/saving-to-ollama
  • <h1> Saving to Ollama
    • <h3> Saving on Google Colab [#zai-google-colab-shang-bao-cun]
    • <h3> Exporting to Ollama [#dao-chu-dao-ollama]
    • <h3> Automatic model file creation [#zi-dong-mo-xing-wen-jian-chuang-jian]
    • <h3> Ollama inference [#ollama-tui-li]
    • <h3> Runs well in Unsloth, but after exporting and running on Ollama, the results are poor [#zai-unsloth-zhong-yun-xing-xiao-guo-bu-cuo-dan-zai-dao-chu-bing-zai-ollama-shang-yun-xing-hou-jie-gu]
65/docs/zh/ji-chu-zhi-shi/inference-and-deployment/saving-to-ollama
  • <h1> Saving to Ollama
    • <h3> Saving on Google Colab [#google-colabdeno]
    • <h3> Exporting to Ollama [#ollamahenoekusupto]
    • <h3> Automatic model file creation [#moderufairu]
    • <h3> Ollama inference [#ollama-inference]
    • <h3> Running in Unsloth works well, but after exporting and running on Ollama, the results are poor [#unslothdenohaumakuikimasugaekusuptoshiteollamadesurutogai]
65/docs/jp/ji-ben/inference-and-deployment/saving-to-ollama
  • <h1> Qwen3.5 Fine-tuning Guide
    • <h3> MoE fine-tuning (35B, 122B) [#moe-wei-tiao-35b122b]
    • <h3> Multi-GPU guide [#duo-gpu-zhi-nan]
    • <h3> Once loaded, you attach LoRA adapters and train in a way similar to the SFT example above. [#jia-zai-hou-nin-jiang-fu-jia-lora-shi-pei-qi-bing-yi-lei-si-shang-shu-sft-shi-li-de-fang-shi-jin-xin]
    • <h3> Multi-image vision guide [#duo-tu-xiang-shi-jue-zhi-nan]
    • <h3> fast_inference=False, [#fast_inference-false]
65/docs/zh/mo-xing/qwen3.5/fine-tune
  • <h1> Qwen3.5 Fine-tuning Guide
    • <h3> MoE fine-tuning (35B, 122B) [#moe-feinabstimmung-35b-122b]
    • <h3> Multi-GPU guide [#multigpu-leitfaden]
    • <h3> Once loaded, add LoRA adapters and train similarly to the SFT example above. [#sobald-geladen-fugen-sie-lora-adapter-hinzu-und-trainieren-ahnlich-wie-im-sft-beispiel-oben]
    • <h3> Multi-image vision guide [#multi-image-vision-leitfaden]
    • <h3> fast_inference=False, [#fast_inference-false]
65/docs/de/modelle/qwen3.5/fine-tune
  • <h1> Qwen3.5 GGUF Benchmarks
    • <h3> 1) Some tensors are very sensitive to quantization [#id-1-some-tensors-are-very-sensitive-to-quantization]
    • <h3> 2) Imatrix works very well [#id-2-imatrix-works-very-well]
    • <h3> 3) Perplexity & KLD can be misleading [#id-3-perplexity-and-kld-can-be-misleading]
    • <h3> 4) March 5th 2026 Update - more robustness [#id-4-march-5th-2026-update-more-robustness]
    • <h3> Full Benchmarks [#full-benchmarks]
65/docs/models/qwen3.5/gguf-benchmarks
  • <h1> Qwen3.5 Fine-tuning Guide
    • <h3> MoE fine-tuning (35B, 122B) [#affinage-moe-35b-122b]
    • <h3> Quickstart [#demarrage-rapide]
    • <h3> Vision fine-tuning [#affinage-vision]
    • <h3> Reinforcement learning (RL) [#apprentissage-par-renforcement-rl]
    • <h3> Saving / exporting the fine-tuned model [#enregistrement-export-du-modele-affine]
65/docs/fr/modeles/qwen3.5/fine-tune
  • <h1> Qwen3.5 Fine-tuning Guide
    • <h3> MoE fine-tuning (35B, 122B) [#moeno35b122b]
    • <h3> Quickstart [#kuikkusutto]
    • <h3> Vision fine-tuning [#bijon]
    • <h3> Reinforcement learning (RL) [#qiang-hua-xue-xi-rl]
    • <h3> Saving / exporting the fine-tuned model [#mimoderunoekusupto]
65/docs/jp/moderu/qwen3.5/fine-tune
  • <h1> Qwen3.5 GGUF Benchmarks
    • <h3> 1) Some tensors are very sensitive to quantization [#id-1-you-xie-zhang-liang-dui-liang-hua-fei-chang-min-gan]
    • <h3> 2) Imatrix works very well [#id-2-imatrix-xiao-guo-hen-hao]
    • <h3> 3) Perplexity & KLD can be misleading [#id-3-kun-huo-du-yu-kld-ke-neng-ju-you-wu-dao-xing]
    • <h3> 4) March 5th 2026 update - more robustness [#id-4-2026-nian-3-yue-5-ri-geng-xin-geng-qiang-de-lu-bang-xing]
    • <h3> Full benchmarks [#wan-zheng-ji-zhun-ce-shi]
65/docs/zh/mo-xing/qwen3.5/gguf-benchmarks
  • <h1> Qwen3.5 GGUF Benchmarks
    • <h3> 1) Some tensors are very sensitive to quantization [#id-1-einige-tensoren-sind-sehr-empfindlich-gegenuber-quantisierung]
    • <h3> 2) Imatrix works very well [#id-2-imatrix-funktioniert-sehr-gut]
    • <h3> 3) Perplexity & KLD can be misleading [#id-3-perplexitat-and-kld-konnen-irrefuhrend-sein]
    • <h3> 4) March 5th 2026 update - more robustness [#id-4-update-vom-5.-marz-2026-mehr-robustheit]
    • <h3> Full benchmarks [#volle-benchmarks]
65/docs/de/modelle/qwen3.5/gguf-benchmarks
  • <h1> Qwen3.5 GGUF Benchmarks
    • <h3> 1) Some tensors are very sensitive to quantization [#id-1-ikutsukanotensoruhaninidesu]
    • <h3> 2) Imatrix works very well [#id-2-imatrix-hanidesu]
    • <h3> 3) Perplexity and KLD can be misleading [#id-3-perplexitytokldhawokukotogaaru]
    • <h3> 4) March 5th 2026 update - more robustness [#id-4-202635appudto-yoriirobasutonesu]
    • <h3> Full benchmarks [#nabenchimku]
65/docs/jp/moderu/qwen3.5/gguf-benchmarks
  • <h1> Qwen3.5 GGUF Benchmarks
    • <h3> 1) Some tensors are very sensitive to quantization [#id-1-certaines-tenseurs-sont-tres-sensibles-a-la-quantification]
    • <h3> 2) Imatrix works very well [#id-2-imatrix-fonctionne-tres-bien]
    • <h3> 3) Perplexity and KLD can be misleading [#id-3-la-perplexite-et-la-kld-peuvent-etre-trompeuses]
    • <h3> 4) March 5th 2026 update - more robustness [#id-4-mise-a-jour-du-5-mars-2026-plus-de-robustesse]
    • <h3> Full benchmarks [#benchmarks-complets]
65/docs/fr/modeles/qwen3.5/gguf-benchmarks
  • <h1> 🦥Unsloth Docs
    • <h3> 🦥 Why Unsloth? [#why-unsloth]
    • <h3> ⭐ Features [#features]
    • <h3> Quickstart [#quickstart]
    • <h3> What is Fine-tuning and RL? Why? [#what-is-fine-tuning-and-rl-why]
54/docs
  • <h1> How to Run models with Unsloth Studio
    • <h3> Using Unsloth Studio Chat [#using-unsloth-studio-chat]
    • <h3> Model Arena [#model-arena]
    • <h3> Adding Files as Context [#adding-files-as-context]
    • <h3> Using old / existing GGUF models [#using-old-existing-gguf-models]
54/docs/new/studio/chat
  • <h1> 📥Unsloth Installation
    • <h3> MacOS, Linux, WSL: [#macos-linux-wsl]
    • <h3> Windows PowerShell: [#windows-powershell]
    • <h3> Launch Unsloth [#launch-unsloth]
    • <h3> Updating Unsloth [#updating-unsloth]
54/docs/get-started/install
  • <h1> 🦥Unsloth Documentation
    • <h3> 🦥 Why Unsloth? [#pourquoi-unsloth]
    • <h3> ⭐ Features [#fonctionnalites]
    • <h3> Quickstart [#demarrage-rapide]
    • <h3> What is fine-tuning and RL? Why? [#quest-ce-que-le-fine-tuning-et-le-rl-pourquoi]
54/docs/fr
  • <h1> How to Fine-tune LLMs with Unsloth & Docker
    • <h3> ⚡ Step-by-Step Tutorial [#step-by-step-tutorial]
    • <h3> 📖 Usage Example [#usage-example]
    • <h3> ⚙️ Advanced Settings [#advanced-settings]
    • <h3> 🔒 Security Notes [#security-notes]
54/docs/blog/how-to-fine-tune-llms-with-unsloth-and-docker
  • <h1> 🦥Unsloth Documentation
    • <h3> 🦥 Why Unsloth? [#wei-shen-me-xuan-ze-unsloth]
    • <h3> ⭐ Features [#gong-neng]
    • <h3> Quickstart [#kuai-su-kai-shi]
    • <h3> What is fine-tuning and RL? Why use it? [#shen-me-shi-wei-tiao-he-rl-wei-shen-me-yao-yong]
54/docs/zh
  • <h1> 🦥Unsloth Documentation
    • <h3> 🦥 Why Unsloth? [#nazeunslothnanoka]
    • <h3> ⭐ Features [#ji-neng]
    • <h3> Quickstart [#kuikkusutto]
    • <h3> What is fine-tuning and RL? Why is it needed? [#fainchningutorltoha-nazenanoka]
54/docs/jp
  • <h1> 🦥Unsloth Documentation
    • <h3> 🦥 Why Unsloth? [#warum-unsloth]
    • <h3> ⭐ Features [#funktionen]
    • <h3> Quickstart [#schnellstart]
    • <h3> What is fine-tuning and RL? Why? [#was-ist-fine-tuning-und-rl-warum]
54/docs/de
  • <h1> How to Run models with Unsloth Studio
    • <h3> Using Unsloth Studio Chat [#shi-yong-unsloth-studio-chat]
    • <h3> Model Arena [#mo-xing-jing-ji-chang]
    • <h3> Adding files as context [#jiang-wen-jian-zuo-wei-shang-xia-wen-tian-jia]
    • <h3> Using old / existing GGUF models [#shi-yong-jiu-de-xian-you-de-gguf-mo-xing]
54/docs/zh/xin-zeng/studio/chat
  • <h1> How to Run models with Unsloth Studio
    • <h3> Using Unsloth Studio Chat [#unsloth-studio-chat-no]
    • <h3> Model Arena [#moderuarna]
    • <h3> Adding files as context [#fairuwokontekisutotoshite]
    • <h3> Using old / existing GGUF models [#i-no-gguf-moderuno]
54/docs/jp/xin-ji-neng/studio/chat
  • <h1> How to Run models with Unsloth Studio
    • <h3> Using Unsloth Studio Chat [#utilisation-du-chat-unsloth-studio]
    • <h3> Model Arena [#arene-des-modeles]
    • <h3> Adding files as context [#ajout-de-fichiers-comme-contexte]
    • <h3> Using old / existing GGUF models [#utilisation-danciens-modeles-gguf-existants]
54/docs/fr/nouveau/studio/chat
  • <h1> How to Run models with Unsloth Studio
    • <h3> Using Unsloth Studio Chat [#mit-unsloth-studio-chat]
    • <h3> Model Arena [#model-arena]
    • <h3> Adding files as context [#dateien-als-kontext-hinzufugen]
    • <h3> Using old / existing GGUF models [#verwendung-alter-vorhandener-gguf-modelle]
54/docs/de/neu/studio/chat
  • <h1> Install Unsloth on MacOS
    • <h3> Install Unsloth [#install-unsloth]
    • <h3> Launch Unsloth [#launch-unsloth]
    • <h3> Updating Unsloth [#updating-unsloth]
    • <h3> Uninstall or Reinstall [#uninstall-or-reinstall]
54/docs/get-started/install/mac
  • <h1> Fine-tuning LLMs on Intel GPUs with Unsloth
    • <h3> Build Unsloth with Intel Support [#build-unsloth-with-intel-support]
    • <h3> Windows Only - Runtime Configurations [#windows-only-runtime-configurations]
    • <h3> Example 1: QLoRA Fine-tuning with SFT [#example-1-qlora-fine-tuning-with-sft]
    • <h3> Example 2: Reinforcement Learning GRPO [#example-2-reinforcement-learning-grpo]
    • <h2> Troubleshooting [#troubleshooting]
      • <h3> Out of Memory (OOM) Errors [#out-of-memory-oom-errors]
      • <h3> (Windows Only) Intel Ultra AIPC iGPU Shared Memory [#windows-only-intel-ultra-aipc-igpu-shared-memory]
84/docs/get-started/install/intel
  • <h1> 📥Unsloth Installation
    • <h3> MacOS, Linux, WSL: [#macos-linux-wsl]
    • <h3> Windows PowerShell: [#windows-powershell]
    • <h3> Launch Unsloth [#unsloth-starten]
    • <h3> Updating Unsloth [#unsloth-aktualisieren]
54/docs/de/los-gehts/install
  • <h1> 📥Unsloth Installation
    • <h3> MacOS, Linux, WSL: [#macos-linux-wsl]
    • <h3> Windows PowerShell: [#windows-powershell]
    • <h3> Launch Unsloth [#unsloth-wo]
    • <h3> Updating Unsloth [#unslothno]
54/docs/jp/meru/install
  • <h1> 📥Unsloth Installation
    • <h3> MacOS, Linux, WSL: [#macos-linux-wsl]
    • <h3> Windows PowerShell: [#windows-powershell]
    • <h3> Launch Unsloth [#qi-dong-unsloth]
    • <h3> Updating Unsloth [#geng-xin-unsloth]
54/docs/zh/kai-shi-shi-yong/install
  • <h1> 📥Unsloth Installation
    • <h3> MacOS, Linux, WSL: [#macos-linux-wsl]
    • <h3> Windows PowerShell: [#windows-powershell]
    • <h3> Launch Unsloth [#lancer-unsloth]
    • <h3> Updating Unsloth [#mise-a-jour-dunsloth]
54/docs/fr/commencer/install
  • <h1> How to Fine-tune LLMs with Unsloth and Docker
    • <h3> ⚡ Step-by-step tutorial [#tutoriel-etape-par-etape]
    • <h3> 📖 Usage example [#exemple-dutilisation]
    • <h3> ⚙️ Advanced settings [#parametres-avances]
    • <h3> 🔒 Security notes [#notes-de-securite]
54/docs/fr/blog/how-to-fine-tune-llms-with-unsloth-and-docker
  • <h1> How to Fine-tune LLMs with Unsloth & Docker
    • <h3> ⚡ Step-by-step tutorial [#schritt-fur-schritt-anleitung]
    • <h3> 📖 Usage example [#nutzungsbeispiel]
    • <h3> ⚙️ Advanced settings [#erweiterte-einstellungen]
    • <h3> 🔒 Security notes [#sicherheitshinweise]
54/docs/de/blog/how-to-fine-tune-llms-with-unsloth-and-docker
  • <h1> How to Fine-tune LLMs with Unsloth and Docker
    • <h3> ⚡ Step-by-step tutorial [#zhu-bu-jiao-cheng]
    • <h3> 📖 Usage example [#shi-yong-shi-li]
    • <h3> ⚙️ Advanced settings [#gao-ji-she-zhi]
    • <h3> The container defaults to non-root [#rong-qi-mo-ren-yi-fei-root]
54/docs/zh/bo-ke/how-to-fine-tune-llms-with-unsloth-and-docker
  • <h1> How to Fine-tune LLMs with Unsloth and Docker
    • <h3> ⚡ Step-by-step tutorial [#suteppubaisuteppuchtoriaru]
    • <h3> — user home directory [#yzhmudirekutori]
    • <h3> ssh -i ~/.ssh/container_key -p 2222 unsloth@localhost [#ssh-i-.ssh-container_key-p-2222-unsloth-localhost]
    • <h3> -v <local_folder>:<container_folder> [#v-less-than-local_folder-greater-than-less-than-container_folder-greater-than]
54/docs/jp/burogu/how-to-fine-tune-llms-with-unsloth-and-docker
  • <h1> Install Unsloth on MacOS
    • <h3> Install Unsloth [#installer-unsloth]
    • <h3> Launch Unsloth [#lancer-unsloth]
    • <h3> Updating Unsloth [#mise-a-jour-dunsloth]
    • <h3> Uninstall or reinstall [#desinstaller-ou-reinstaller]
54/docs/fr/commencer/install/mac
  • <h1> Install Unsloth on MacOS
    • <h3> Install Unsloth [#unsloth-installieren]
    • <h3> Launch Unsloth [#unsloth-starten]
    • <h3> Updating Unsloth [#unsloth-aktualisieren]
    • <h3> Uninstall or reinstall [#deinstallieren-oder-neu-installieren]
54/docs/de/los-gehts/install/mac
  • <h1> Install Unsloth on MacOS
    • <h3> Install Unsloth [#unslothwoinsutru]
    • <h3> Launch Unsloth [#unslothwo]
    • <h3> Updating Unsloth [#unslothno]
    • <h3> Uninstall or reinstall [#aninsutrumatahainsutru]
54/docs/jp/meru/install/mac
  • <h1> Install Unsloth on MacOS
    • <h3> Install Unsloth [#an-zhuang-unsloth]
    • <h3> Launch Unsloth [#qi-dong-unsloth]
    • <h3> Updating Unsloth [#geng-xin-unsloth]
    • <h3> Uninstall or reinstall [#xie-zai-huo-chong-xin-an-zhuang]
54/docs/zh/kai-shi-shi-yong/install/mac
  • <h1> Fine-tuning LLMs on Intel GPUs with Unsloth
    • <h3> Build Unsloth with Intel support [#shi-yong-dui-intel-de-zhi-chi-gou-jian-unsloth]
    • <h3> Windows only - runtime configurations [#jin-xian-windows-yun-xing-shi-pei-zhi]
    • <h3> Example 1: QLoRA fine-tuning with SFT [#shi-li-1-shi-yong-sft-de-qlora-wei-tiao]
    • <h3> Example 2: Reinforcement learning GRPO [#shi-li-2-qiang-hua-xue-xi-grpo]
    • <h2> Troubleshooting [#gu-zhang-pai-chu]
      • <h3> Out of memory (OOM) errors [#nei-cun-bu-zu-oom-cuo-wu]
      • <h3> (Windows only) Intel Ultra AIPC iGPU shared memory [#jin-xian-windowsintel-ultra-aipc-igpu-gong-xiang-nei-cun]
84/docs/zh/kai-shi-shi-yong/install/intel
  • <h1> Fine-tuning LLMs on Intel GPUs with Unsloth
    • <h3> Build Unsloth with Intel support [#intelsaptokideunslothwobirudosuru]
    • <h3> Windows only - runtime configurations [#windowsnomi-rantaimu]
    • <h3> Example 1: QLoRA fine-tuning with SFT [#id-1-sftwoitaqlorafainchningu]
    • <h3> Example 2: Reinforcement learning GRPO [#li-2-qiang-hua-xue-xi-grpo]
    • <h2> Troubleshooting [#toraburushtingu]
      • <h3> Out of memory (OOM) errors [#memorioomer]
      • <h3> (Windows only) Intel Ultra AIPC iGPU shared memory [#windowsnomi-intel-ultra-aipc-igpu-memori]
84/docs/jp/meru/install/intel
  • <h1> Fine-tuning LLMs on Intel GPUs with Unsloth
    • <h3> Build Unsloth with Intel support [#construire-unsloth-avec-le-support-intel]
    • <h3> Windows only - runtime configurations [#windows-uniquement-configurations-dexecution]
    • <h3> Example 1: QLoRA fine-tuning with SFT [#exemple-1-affinage-qlora-avec-sft]
    • <h3> Example 2: Reinforcement learning GRPO [#exemple-2-apprentissage-par-renforcement-grpo]
    • <h2> Troubleshooting [#depannage]
      • <h3> Out of memory (OOM) errors [#erreurs-de-manque-de-memoire-oom]
      • <h3> (Windows only) Intel Ultra AIPC iGPU shared memory [#windows-uniquement-memoire-partagee-igpu-intel-ultra-aipc]
84/docs/fr/commencer/install/intel
  • <h1> Fine-tuning LLMs on Intel GPUs with Unsloth
    • <h3> Build Unsloth with Intel support [#unsloth-mit-intel-unterstutzung-erstellen]
    • <h3> Windows only - runtime configurations [#nur-windows-laufzeitkonfigurationen]
    • <h3> Example 1: QLoRA fine-tuning with SFT [#beispiel-1-qlora-feinabstimmung-mit-sft]
    • <h3> Example 2: Reinforcement learning GRPO [#beispiel-2-verstarkendes-lernen-grpo]
    • <h2> Troubleshooting [#fehlerbehebung]
      • <h3> Out-of-memory (OOM) errors [#out-of-memory-oom-fehler]
      • <h3> (Windows only) Intel Ultra AIPC iGPU shared memory [#nur-windows-intel-ultra-aipc-igpu-shared-memory]
84/docs/de/los-gehts/install/intel
  • <h1> 💎Fine-tune MoE Models 12x Faster with Unsloth
    • <h3> 🦥 Unsloth MoE Triton Kernels [#unsloth-moe-triton-kernels]
    • <h3> 🧭 Automatic backend selection [#automatic-backend-selection]
    • <h3> ❓What is torch._grouped_mm? [#what-is-torch._grouped_mm]
    • <h2> 📊 Kernel Results + Benchmarks [#kernel-results--benchmarks]
      • <h3> gpt-oss Benchmarks [#gpt-oss-benchmarks]
      • <h3> Qwen3 Benchmarks [#qwen3-benchmarks]
      • <h3> GLM 4.7 Benchmarks [#glm-4.7-benchmarks]
      • <h3> ⚡Faster LoRA MoE training [#faster-lora-moe-training]
    • <h2> 📚 Details of implementation [#details-of-implementation]
      • <h3> 🔮 Model Support [#model-support]
      • <h3> 📈 More Benchmarks [#more-benchmarks]
    • <h2> 🎉 Important Unsloth Updates [#important-unsloth-updates]
      • <h3> Acknowledgements [#acknowledgements]
143/docs/new/faster-moe
  • <h1> 500K Context Length Fine-tuning
    • <h3> 📐 Unsloth Loss Refactoring: Chunk & Fuse [#unsloth-loss-refactoring-chunk-and-fuse]
    • <h3> 🏁 Unsloth Gradient Checkpointing Enhancements [#unsloth-gradient-checkpointing-enhancements]
    • <h3> 🔓 Tiled MLP: Unlocking 500K+ [#tiled-mlp-unlocking-500k]
43/docs/blog/500k-context-length-fine-tuning
  • <h1> Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth
    • <h3> ⚡ Step-by-Step Tutorial [#step-by-step-tutorial]
    • <h3> Unified Memory Usage [#unified-memory-usage]
    • <h3> Video Tutorials [#video-tutorials]
43/docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
  • <h1> Updating Unsloth
    • <h3> Standard Updating: [#standard-updating]
    • <h3> Updating without dependency updates: [#updating-without-dependency-updates]
    • <h3> To use an old version of Unsloth: [#to-use-an-old-version-of-unsloth]
43/docs/get-started/install/updating
  • <h1> 💎Fine-tune MoE Models 12x Faster with Unsloth
    • <h3> 🦥 Unsloth MoE Triton kernels [#unsloth-moe-tritonkneru]
    • <h3> 🧭 Automatic backend selection [#bakkuendo]
    • <h3> ❓What is torch._grouped_mm? [#torchgroupedmmtoha]
    • <h2> 📊 Kernel results and benchmarks [#knerutobenchimku]
      • <h3> gpt-oss benchmarks [#gpt-ossbenchimku]
      • <h3> Qwen3 benchmarks [#qwen3benchimku]
      • <h3> GLM 4.7 benchmarks [#glm-47-benchimku]
      • <h3> ⚡Faster LoRA MoE training [#yorinalora-moetorningu]
    • <h2> 📚 Details of implementation [#no]
      • <h3> 🔮 Model support [#saptomoderu]
      • <h3> 📈 More benchmarks [#saranikunobenchimku]
    • <h2> 🎉 Important Unsloth updates [#naunslothappudto]
      • <h3> Acknowledgements [#xie-ci]
143/docs/jp/xin-ji-neng/faster-moe
  • <h1> 💎Fine-tune MoE Models 12x Faster with Unsloth
    • <h3> 🦥 Unsloth MoE Triton kernels [#unsloth-moe-triton-nei-he]
    • <h3> 🧭 Automatic backend selection [#zi-dong-hou-duan-xuan-ze]
    • <h3> ❓What is torch._grouped_mm? [#shen-me-shi-torch.groupedmm]
    • <h2> 📊 Kernel results and benchmarks [#nei-he-jie-guo-yu-ji-zhun-ce-shi]
      • <h3> gpt-oss benchmarks [#gptoss-ji-zhun-ce-shi]
      • <h3> Qwen3 benchmarks [#qwen3-ji-zhun-ce-shi]
      • <h3> GLM 4.7 benchmarks [#glm-4.7-ji-zhun-ce-shi]
      • <h3> ⚡Faster LoRA MoE training [#geng-kuai-de-lora-moe-xun-lian]
    • <h2> LoRA is a parameter-efficient fine-tuning method: instead of updating the full weight matrix, you train a low-rank "adapter" with far fewer parameters, which drastically reduces the optimizer's memory requirements. [#lora-shi-yi-zhong-can-shu-gao-xiao-de-wei-tiao-fang-fa-ni-bu-shi-geng-xin-wan-zheng-de-quan-zhong-ju]
      • <h3> 🔮 Model support [#mo-xing-zhi-chi]
      • <h3> 📈 More benchmarks [#geng-duo-ji-zhun-ce-shi]
    • <h2> 🎉 Important Unsloth updates [#zhong-yao-de-unsloth-geng-xin]
      • <h3> Acknowledgements [#zhi-xie]
143/docs/zh/xin-zeng/faster-moe
  • <h1> 💎Fine-tune MoE Models 12x Faster with Unsloth
    • <h3> 🦥 Unsloth MoE Triton kernels [#unsloth-moe-triton-kernels]
    • <h3> 🧭 Automatic backend selection [#automatische-backend-auswahl]
    • <h3> ❓What is torch._grouped_mm? [#was-ist-torch._grouped_mm]
    • <h2> 📊 Kernel results + benchmarks [#kernel-ergebnisse--benchmarks]
      • <h3> gpt-oss benchmarks [#gpt-oss-benchmarks]
      • <h3> Qwen3 benchmarks [#qwen3-benchmarks]
      • <h3> GLM 4.7 benchmarks [#glm-4.7-benchmarks]
      • <h3> ⚡Faster LoRA MoE training [#schnelleres-lora-moe-training]
    • <h2> 📚 Details of implementation [#einzelheiten-zur-implementierung]
      • <h3> 🔮 Model support [#modellunterstutzung]
      • <h3> 📈 More benchmarks [#mehr-benchmarks]
    • <h2> 🎉 Important Unsloth updates [#wichtige-unsloth-updates]
      • <h3> Acknowledgements [#danksagungen]
143/docs/de/neu/faster-moe
  • <h1> 💎Fine-tune MoE Models 12x Faster with Unsloth
    • <h3> 🦥 Unsloth MoE Triton kernels [#noyaux-triton-moe-unsloth]
    • <h3> 🧭 Automatic backend selection [#selection-automatique-du-backend]
    • <h3> ❓What is torch._grouped_mm? [#quest-ce-que-torch._grouped_mm]
    • <h2> 📊 Kernel results + benchmarks [#resultats-des-noyaux--benchmarks]
      • <h3> gpt-oss benchmarks [#benchmarks-gpt-oss]
      • <h3> Qwen3 benchmarks [#benchmarks-qwen3]
      • <h3> GLM 4.7 benchmarks [#benchmarks-glm-4.7]
      • <h3> ⚡Faster LoRA MoE training [#entrainement-lora-moe-plus-rapide]
    • <h2> LoRA is a parameter-efficient fine-tuning method: instead of updating the full weight matrix, you train a low-rank "adapter" with far fewer parameters, which drastically reduces the optimizer's memory requirements. [#lora-est-une-methode-de-fine-tuning-econome-en-parametres-au-lieu-de-mettre-a-jour-la-matrice-de-poi]
      • <h3> (Thinking and Instruct): VL • 2507 • Coder [#thinking-and-instruct-vl-2507-coder]
      • <h3> Training speed, including vs Transformers v4 [#vitesse-dentrainement-y-compris-vs-transformers-v4]
    • <h2> 🎉 Gemma-3 now uses Flex-Attention [#gemma-3-utilise-desormais-flex-attention]
      • <h3> We also sincerely thank the torchao team, especially Vasily Kuznetsov (vkuzo), for helping enable grouped_mm support for float16 to make it work on T4, and for backward compatibility with A100. [#nous-remercions-egalement-sincerement-lequipe-torchao-en-particulier-vasily-kuznetsov-vkuzo-pour-son]
143/docs/fr/nouveau/faster-moe
  • <h1> Troubleshooting Inference
    • <h3> Running in Unsloth works well, but after exporting & running on other platforms, the results are poor [#running-in-unsloth-works-well-but-after-exporting-and-running-on-other-platforms-the-results-are-poo]
    • <h3> Saving to safetensors, not bin format in Colab [#saving-to-safetensors-not-bin-format-in-colab]
    • <h3> If saving to GGUF or vLLM 16bit crashes [#if-saving-to-gguf-or-vllm-16bit-crashes]
43/docs/basics/inference-and-deployment/troubleshooting-inference
  • <h1> Saving to GGUF
    • <h3> Running in Unsloth works well, but after exporting & running on other platforms, the results are poor [#running-in-unsloth-works-well-but-after-exporting-and-running-on-other-platforms-the-results-are-poo]
    • <h3> Saving to GGUF / vLLM 16bit crashes [#saving-to-gguf-vllm-16bit-crashes]
    • <h3> How do I manually save to GGUF? [#how-do-i-manually-save-to-gguf]
43/docs/basics/inference-and-deployment/saving-to-gguf
  • <h1> 500K Context Length Fine-tuning
    • <h3> enables 2x more context. [#niyorikontekisutowo2kunishimasu]
    • <h3> Smaller contexts use more VRAM [#yorisanakontekisutohayorikunovramwoshimasu]
    • <h3> torch.autograd.backward(output, dY) [#torch.autograd.backward-output-dy]
43/docs/jp/burogu/500k-context-length-fine-tuning
  • <h1> 500K Context Length Fine-tuning
    • <h3> 📐 Unsloth loss refactoring: chunk & fuse [#unsloth-sun-shi-zhong-gou-fen-kuai-yu-rong-he]
    • <h3> 🏁 Unsloth gradient checkpointing enhancements [#unsloth-ti-du-jian-cha-dian-zeng-qiang]
    • <h3> 🔓 Tiled MLP: unlocking 500K+ [#qie-pian-tiledmlp-jie-suo-500k]
43/docs/zh/bo-ke/500k-context-length-fine-tuning
  • <h1> 500K Context Length Fine-tuning
    • <h3> 📐 Unsloth loss refactoring: chunk & fuse [#unsloth-loss-refactoring-chunk-and-fuse]
    • <h3> 🏁 Unsloth gradient checkpointing enhancements [#unsloth-gradient-checkpointing-verbesserungen]
    • <h3> 🔓 Tiled MLP: unlocking 500K+ [#tiled-mlp-freischaltung-von-500k]
43/docs/de/blog/500k-context-length-fine-tuning
  • <h1> 500K Context Length Fine-tuning
    • <h3> 📐 Unsloth loss refactoring: chunk & fuse [#refactoring-du-loss-dunsloth-decouper-and-fusionner]
    • <h3> 🏁 Unsloth gradient checkpointing enhancements [#ameliorations-du-gradient-checkpointing-dunsloth]
    • <h3> 🔓 Tiled MLP: unlocking 500K+ [#tiled-mlp-deverrouiller-500k]
43/docs/fr/blog/500k-context-length-fine-tuning
  • <h1> How to Fine-tune LLMs in VS Code with Unsloth & Colab GPUs
    • <h3> VS Code and Colab Tutorial: [#vs-code-and-colab-tutorial]
    • <h3> Video Tutorial [#video-tutorial]
    • <h3> Troubleshooting [#troubleshooting]
43/docs/get-started/install/vs-code
  • <h1> Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth
    • <h3> ⚡ Step-by-step tutorial [#tutoriel-etape-par-etape]
    • <h3> Unified memory usage [#utilisation-de-la-memoire-unifiee]
    • <h3> Video tutorials [#tutoriels-video]
43/docs/fr/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
  • <h1> Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth
    • <h3> ⚡ Step-by-step tutorial [#zhu-bu-jiao-cheng]
    • <h3> Unified memory usage [#tong-yi-nei-cun-shi-yong-qing-kuang]
    • <h3> Video tutorials [#shi-pin-jiao-cheng]
43/docs/zh/bo-ke/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
  • <h1> Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth
    • <h3> ⚡ Step-by-step tutorial [#suteppubaisuteppuchtoriaru]
    • <h3> Unified memory usage [#yunifaidomemorino]
    • <h3> Video tutorials [#bideochtoriaru]
43/docs/jp/burogu/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
  • <h1> Fine-tuning LLMs with NVIDIA DGX Spark and Unsloth
    • <h3> ⚡ Step-by-step tutorial [#schritt-fur-schritt-anleitung]
    • <h3> Unified memory usage [#unified-memory-verwendung]
    • <h3> Video tutorials [#video-tutorials]
43/docs/de/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
  • <h1> Updating Unsloth
    • <h3> Standard updating: [#biao-zhun-geng-xin]
    • <h3> Updating without dependency updates: [#geng-xin-dan-bu-geng-xin-yi-lai-xiang]
    • <h3> To use an old version of Unsloth: [#yao-shi-yong-jiu-ban-ben-de-unsloth]
43/docs/zh/kai-shi-shi-yong/install/updating
  • <h1> Updating Unsloth
    • <h3> Standard updating: [#mise-a-jour-standard]
    • <h3> Updating without dependency updates: [#mise-a-jour-sans-mise-a-jour-des-dependances]
    • <h3> To use an old version of Unsloth: [#pour-utiliser-une-ancienne-version-dunsloth]
43/docs/fr/commencer/install/updating
  • <h1> Updating Unsloth
    • <h3> Standard updating: [#standardaktualisierung]
    • <h3> Updating without dependency updates: [#aktualisierung-ohne-abhangigkeitsaktualisierungen]
    • <h3> To use an older version of Unsloth: [#um-eine-altere-version-von-unsloth-zu-verwenden]
43/docs/de/los-gehts/install/updating
  • <h1> Updating Unsloth
    • <h3> Standard updating: [#noappudto]
    • <h3> Updating without dependency updates: [#wosezuniappudto]
    • <h3> To use an old version of Unsloth: [#ibjonnounslothwosuruniha]
43/docs/jp/meru/install/updating
  • <h1> ⁉️FP16 vs BF16 for RL
    • <h3> hashtagFloat16 vs Bfloat16 [#float16-vs-bfloat16]
    • <h3> hashtag🤯A100 Cascade Attention Bug [#a100-cascade-attention-bug]
    • <h3> hashtag🔥Using float16 in Unsloth RL [#using-float16-in-unsloth-rl]
43/docs/get-started/reinforcement-learning-rl-guide/advanced-rl-documentation/fp16-vs-bf16-for-rl
  • <h1> Fehlerbehebung bei der Inferenz
    • <h3> Das Ausführen in Unsloth funktioniert gut, aber nach dem Export und dem Ausführen auf anderen Plattformen sind die Ergebnisse schlecht [#das-ausfuhren-in-unsloth-funktioniert-gut-aber-nach-dem-export-und-dem-ausfuhren-auf-anderen-plattfo]
    • <h3> Speichern in safetensors, nicht bin Format in Colab [#speichern-in-safetensors-nicht-bin-format-in-colab]
    • <h3> Wenn das Speichern in GGUF oder vLLM 16bit abstürzt [#wenn-das-speichern-in-gguf-oder-vllm-16bit-absturzt]
43 /docs/de/grundlagen/inference-and-deployment/troubleshooting-inference
  • <h1> Dépannage de l'inférence
    • <h3> Exécuter dans Unsloth fonctionne bien, mais après exportation et exécution sur d'autres plates-formes, les résultats sont médiocres [#executer-dans-unsloth-fonctionne-bien-mais-apres-exportation-et-execution-sur-dautres-plates-formes]
    • <h3> Enregistrement dans safetensors, pas bin format dans Colab [#enregistrement-dans-safetensors-pas-bin-format-dans-colab]
    • <h3> Si l'enregistrement au format GGUF ou vLLM 16 bits plante [#si-lenregistrement-au-format-gguf-ou-vllm-16-bits-plante]
43 /docs/fr/notions-de-base/inference-and-deployment/troubleshooting-inference
  • <h1> 推理故障排查
    • <h3> 在 Unsloth 上运行效果良好,但导出并在其他平台上运行后,结果很差 [#zai-unsloth-shang-yun-xing-xiao-guo-liang-hao-dan-dao-chu-bing-zai-qi-ta-ping-tai-shang-yun-xing-hou]
    • <h3> 保存为 safetensors,而不是 bin 格式(在 Colab 中) [#bao-cun-wei-safetensors-er-bu-shi-bin-ge-shi-zai-colab-zhong]
    • <h3> 如果保存为 GGUF 或 vLLM 16 位导致崩溃 [#ru-guo-bao-cun-wei-gguf-huo-vllm-16-wei-dao-zhi-beng-kui]
43 /docs/zh/ji-chu-zhi-shi/inference-and-deployment/troubleshooting-inference
  • <h1> 推論のトラブルシューティング
    • <h3> Unslothでの実行はうまくいきますが、エクスポートして他のプラットフォームで実行すると結果が悪い [#unslothdenohaumakuikimasugaekusuptoshitenopurattofmudesurutogai]
    • <h3> に保存しています safetensors、ではなく bin Colabでのフォーマット [#nishiteimasu-safetensorsdehanaku-bin-colabdenofmatto]
    • <h3> GGUFやvLLMの16bitで保存中にクラッシュする場合 [#ggufyavllmno16bitdenikurasshusuru]
43 /docs/jp/ji-ben/inference-and-deployment/troubleshooting-inference
  • <h1> GGUFへの保存
    • <h3> Unsloth での実行はうまくいくが、エクスポートして他のプラットフォームで実行すると結果が悪い [#unsloth-denohaumakuikugaekusuptoshitenopurattofmudesurutogai]
    • <h3> GGUF / vLLM 16bit への保存でクラッシュする [#gguf-vllm-16bit-henodekurasshusuru]
    • <h3> GGUF に手動で保存するにはどうすればよいですか? [#gguf-nidesurunihadousurebayoidesuka]
43 /docs/jp/ji-ben/inference-and-deployment/saving-to-gguf
  • <h1> 保存为 GGUF
    • <h3> 在 Unsloth 中运行效果良好,但导出并在其他平台上运行后,结果很差 [#zai-unsloth-zhong-yun-xing-xiao-guo-liang-hao-dan-dao-chu-bing-zai-qi-ta-ping-tai-shang-yun-xing-hou]
    • <h3> 保存为 GGUF / vLLM 16bit 崩溃 [#bao-cun-wei-gguf-vllm-16bit-beng-kui]
    • <h3> 如何手动保存为 GGUF? [#ru-he-shou-dong-bao-cun-wei-gguf]
43 /docs/zh/ji-chu-zhi-shi/inference-and-deployment/saving-to-gguf
  • <h1> Speichern als GGUF
    • <h3> Das Ausführen in Unsloth funktioniert gut, aber nach dem Exportieren und Ausführen auf anderen Plattformen sind die Ergebnisse schlecht [#das-ausfuhren-in-unsloth-funktioniert-gut-aber-nach-dem-exportieren-und-ausfuhren-auf-anderen-plattf]
    • <h3> Speichern in GGUF / vLLM 16bit stürzt ab [#speichern-in-gguf-vllm-16bit-sturzt-ab]
    • <h3> Wie speichere ich manuell in GGUF? [#wie-speichere-ich-manuell-in-gguf]
43 /docs/de/grundlagen/inference-and-deployment/saving-to-gguf
  • <h1> Enregistrement au format GGUF
    • <h3> L'exécution dans Unsloth fonctionne bien, mais après exportation et exécution sur d'autres plateformes, les résultats sont médiocres [#lexecution-dans-unsloth-fonctionne-bien-mais-apres-exportation-et-execution-sur-dautres-plateformes]
    • <h3> La sauvegarde en GGUF / vLLM 16 bits plante [#la-sauvegarde-en-gguf-vllm-16-bits-plante]
    • <h3> Comment sauvegarder manuellement en GGUF ? [#comment-sauvegarder-manuellement-en-gguf]
43 /docs/fr/notions-de-base/inference-and-deployment/saving-to-gguf
  • <h1> Wie man LLMs in VS Code mit Unsloth & Colab-GPUs feinabstimmt
    • <h3> VS Code- und Colab-Tutorial: [#vs-code-und-colab-tutorial]
    • <h3> Video-Tutorial [#video-tutorial]
    • <h3> Fehlerbehebung [#fehlerbehebung]
43 /docs/de/los-gehts/install/vs-code
  • <h1> VS CodeとColab GPUでLLMをファインチューニングする方法
    • <h3> VS Code と Colab チュートリアル: [#vs-code-to-colab-chtoriaru]
    • <h3> ビデオチュートリアル [#bideochtoriaru]
    • <h3> トラブルシューティング [#toraburushtingu]
43 /docs/jp/meru/install/vs-code
  • <h1> 如何在 VS Code 中使用 Unsloth 和 Colab GPU 微调 LLM
    • <h3> VS Code 与 Colab 教程: [#vs-code-yu-colab-jiao-cheng]
    • <h3> 视频教程 [#shi-pin-jiao-cheng]
    • <h3> 故障排除 [#gu-zhang-pai-chu]
43 /docs/zh/kai-shi-shi-yong/install/vs-code
  • <h1> Comment fine-tuner des LLM dans VS Code avec Unsloth et les GPU Colab
    • <h3> Tutoriel VS Code et Colab : [#tutoriel-vs-code-et-colab]
    • <h3> Tutoriel vidéo [#tutoriel-video]
    • <h3> Dépannage [#depannage]
43 /docs/fr/commencer/install/vs-code
  • <h1> ⁉️FP16 vs. BF16 für RL
    • <h3> Float16 vs Bfloat16 [#float16-vs-bfloat16]
    • <h3> 🤯A100 Cascade-Attention-Fehler [#a100-cascade-attention-fehler]
    • <h3> 🔥Verwendung von Float16 in Unsloth RL [#verwendung-von-float16-in-unsloth-rl]
43 /docs/de/los-gehts/reinforcement-learning-rl-guide/advanced-rl-documentation/fp16-vs-bf16-for-rl
  • <h1> ⁉️RLにおけるFP16 vs BF16
    • <h3> Float16 と Bfloat16 の比較 [#float16-to-bfloat16-no]
    • <h3> 🤯A100 のカスケードアテンションのバグ [#a100-nokasukdoatenshonnobagu]
    • <h3> 🔥Unsloth RL での float16 の使用 [#unsloth-rl-deno-float16-no]
43 /docs/jp/meru/reinforcement-learning-rl-guide/advanced-rl-documentation/fp16-vs-bf16-for-rl
  • <h1> ⁉️FP16 vs BF16 pour le RL
    • <h3> Float16 vs Bfloat16 [#float16-vs-bfloat16]
    • <h3> 🤯Bug d'attention en cascade sur A100 [#bug-dattention-en-cascade-sur-a100]
    • <h3> 🔥Utiliser le float16 dans Unsloth RL [#utiliser-le-float16-dans-unsloth-rl]
43 /docs/fr/commencer/reinforcement-learning-rl-guide/advanced-rl-documentation/fp16-vs-bf16-for-rl
  • <h1> ⁉️RL 中的 FP16 与 BF16
    • <h3> Float16 与 Bfloat16 [#float16-yu-bfloat16]
    • <h3> 🤯A100 级联注意力错误 [#a100-ji-lian-zhu-yi-li-cuo-wu]
    • <h3> 🔥在 Unsloth RL 中使用 float16 [#zai-unsloth-rl-zhong-shi-yong-float16]
43 /docs/zh/kai-shi-shi-yong/reinforcement-learning-rl-guide/advanced-…ocumentation/fp16-vs-bf16-for-rl
  • <h1> Fine-Tuning LLMs on NVIDIA DGX Station with Unsloth
    • <h3> Quickstart [#quickstart]
    • <h3> Training Tutorials [#training-tutorials]
32 /docs/basics/dgx-station
  • <h1> MiniMax-M2.5: How to Run Guide
    • <h3> ⚙️ Usage Guide [#usage-guide]
    • <h3> Recommended Settings [#recommended-settings]
    • <h2> Run MiniMax-M2.5 Tutorials: [#run-minimax-m2.5-tutorials]
      • <h3> 🦙 Llama-server & OpenAI's completion library [#llama-server-and-openais-completion-library]
    • <h2> 📊 Benchmarks [#benchmarks]
      • <h3> Unsloth GGUF Benchmarks [#unsloth-gguf-benchmarks]
      • <h3> Official Benchmarks [#official-benchmarks]
82 /docs/models/minimax-m25
  • <h1> GLM-5: How to Run Locally Guide
    • <h3> ⚙️ Usage Guide [#usage-guide]
    • <h3> Recommended Settings [#recommended-settings]
    • <h2> Run GLM-5 Tutorials: [#run-glm-5-tutorials]
      • <h3> 🦙 Llama-server serving & OpenAI's completion library [#llama-server-serving-and-openais-completion-library]
      • <h3> 💻 vLLM Deployment [#vllm-deployment]
      • <h3> 🔨Tool Calling with GLM 5 [#tool-calling-with-glm-5]
      • <h3> 📊 Benchmarks [#benchmarks]
82 /docs/models/glm-5
  • <h1> Fine-tuning LLMs on AMD GPUs with Unsloth Guide
    • <h3> 🔢 Reinforcement Learning on AMD GPUs [#reinforcement-learning-on-amd-gpus]
    • <h3> 📚AMD Free One-click notebooks [#amd-free-one-click-notebooks]
32 /docs/get-started/install/amd
  • <h1> Fine-tuning de LLM sur NVIDIA DGX Station avec Unsloth
    • <h3> Démarrage rapide [#demarrage-rapide]
    • <h3> Tutoriels d'entraînement [#tutoriels-dentrainement]
32 /docs/fr/notions-de-base/dgx-station
  • <h1> Unslothを使ってNVIDIA DGX StationでLLMをファインチューニングする
    • <h3> クイックスタート [#kuikkusutto]
    • <h3> トレーニングチュートリアル [#torninguchtoriaru]
32 /docs/jp/ji-ben/dgx-station
  • <h1> Fine-Tuning von LLMs auf NVIDIA DGX Station mit Unsloth
    • <h3> Schnellstart [#schnellstart]
    • <h3> Trainingstutorials [#trainingstutorials]
32 /docs/de/grundlagen/dgx-station
  • <h1> 使用 Unsloth 在 NVIDIA DGX Station 上微调 LLM
    • <h3> 快速开始 [#kuai-su-kai-shi]
    • <h3> 训练教程 [#xun-lian-jiao-cheng]
32 /docs/zh/ji-chu-zhi-shi/dgx-station
  • <h1> 📱How to Run and Deploy LLMs on your iOS or Android Phone
    • <h3> 🦥 Training Your Model [#training-your-model]
    • <h3> 🏁 Deployment After Training [#deployment-after-training]
    • <h2> iOS Deployment [#ios-deployment]
      • <h3> macOS Development Environment Setup [#macos-development-environment-setup]
      • <h3> Apple Developer Account Setup [#apple-developer-account-setup]
      • <h3> Setup the ExecuTorch Demo App [#setup-the-executorch-demo-app]
      • <h3> Deploying to Simulator [#deploying-to-simulator]
      • <h3> Deploying to Your Physical iPhone [#deploying-to-your-physical-iphone]
    • <h2> Android Deployment [#android-deployment]
      • <h3> Requirements [#requirements]
      • <h3> Step 1: Install Android SDK & NDK [#step-1-install-android-sdk-and-ndk]
      • <h3> Step 2: Configure Environment Variables [#step-2-configure-environment-variables]
      • <h3> Step 3: Install SDK Components [#step-3-install-sdk-components]
      • <h3> Step 4: Get the Code [#step-4-get-the-code]
      • <h3> Step 5: Fix Common Compilation Issues [#step-5-fix-common-compilation-issues]
      • <h3> Step 6: Build the APK [#step-6-build-the-apk]
      • <h3> Step 7: Install on your Android device [#step-7-install-on-your-android-device]
      • <h3> Step 8: Transfer Model Files [#step-8-transfer-model-files]
      • <h3> Troubleshooting [#troubleshooting]
      • <h3> Transferring model to your phone [#transferring-model-to-your-phone]
      • <h3> 📱ExecuTorch powers billions [#docs-internal-guid-7d7d5aee-7fff-f138-468c-c35853fee9ca]
    • <h2> Other model support [#other-model-support]
232 /docs/basics/inference-and-deployment/deploy-llms-phone
  • <h1> MiniMax-M2.5:运行指南
    • <h3> ⚙️ 使用指南 [#shi-yong-zhi-nan]
    • <h3> 推荐设置 [#tui-jian-she-zhi]
    • <h2> 运行 MiniMax-M2.5 教程: [#yun-xing-minimaxm2.5-jiao-cheng]
      • <h3> 🦙 Llama-server 与 OpenAI 的 completion 库 [#llamaserver-yu-openai-de-completion-ku]
    • <h2> 📊 基准测试 [#ji-zhun-ce-shi]
      • <h3> Unsloth GGUF 基准 [#unsloth-gguf-ji-zhun]
      • <h3> 官方基准 [#guan-fang-ji-zhun]
82 /docs/zh/mo-xing/minimax-m25
  • <h1> MiniMax-M2.5: 実行ガイド
    • <h3> ⚙️ 使用ガイド [#gaido]
    • <h3> 推奨設定 [#tui-jiang-she-ding]
    • <h2> MiniMax-M2.5チュートリアルを実行する: [#minimax-m25chtoriaruwosuru]
      • <h3> 🦙 Llama-server と OpenAI の completion ライブラリ [#llama-server-to-openai-no-completion-raiburari]
    • <h2> 📊 ベンチマーク [#benchimku]
      • <h3> Unsloth GGUF ベンチマーク [#unsloth-gguf-benchimku]
      • <h3> 公式ベンチマーク [#benchimku-1]
82 /docs/jp/moderu/minimax-m25
  • <h1> MiniMax-M2.5 : guide d'exécution
    • <h3> ⚙️ Guide d'utilisation [#guide-dutilisation]
    • <h3> Paramètres recommandés [#parametres-recommandes]
    • <h2> Exécuter les tutoriels MiniMax-M2.5 : [#executer-les-tutoriels-minimax-m2.5]
      • <h3> 🦙 Llama-server & la bibliothèque de complétions d'OpenAI [#llama-server-and-la-bibliotheque-de-completions-dopenai]
    • <h2> 📊 Références de performance [#references-de-performance]
      • <h3> Repères Unsloth GGUF [#reperes-unsloth-gguf]
      • <h3> Benchmarks officiels [#benchmarks-officiels]
82 /docs/fr/modeles/minimax-m25
  • <h1> MiniMax-M2.5: Anleitung zum Ausführen
    • <h3> ⚙️ Nutzungsanleitung [#nutzungsanleitung]
    • <h3> Empfohlene Einstellungen [#empfohlene-einstellungen]
    • <h2> Führen Sie MiniMax-M2.5 Tutorials aus: [#fuhren-sie-minimax-m2.5-tutorials-aus]
      • <h3> 🦙 Llama-server & OpenAIs Completion-Bibliothek [#llama-server-and-openais-completion-bibliothek]
    • <h2> 📊 Benchmarks [#benchmarks]
      • <h3> Unsloth GGUF Benchmarks [#unsloth-gguf-benchmarks]
      • <h3> Offizielle Benchmarks [#offizielle-benchmarks]
82 /docs/de/modelle/minimax-m25
  • <h1> Multi-GPU Fine-tuning with Distributed Data Parallel (DDP)
    • <h3> Use the Unsloth CLI! [#use-the-unsloth-cli]
    • <h3> Training metrics [#training-metrics]
32 /docs/basics/multi-gpu-training-with-unsloth/ddp
  • <h1> GLM-5 : guide pour exécuter localement
    • <h3> ⚙️ Guide d'utilisation [#guide-dutilisation]
    • <h3> Paramètres recommandés [#parametres-recommandes]
    • <h2> Exécutez les tutoriels GLM-5 : [#executez-les-tutoriels-glm-5]
      • <h3> 🦙 Service Llama-server & bibliothèque de complétion d'OpenAI [#service-llama-server-and-bibliotheque-de-completion-dopenai]
      • <h3> 💻 Déploiement vLLM [#deploiement-vllm]
      • <h3> 🔨Appel d'outils avec GLM 5 [#appel-doutils-avec-glm-5]
      • <h3> 📊 Benchmarks [#benchmarks]
82 /docs/fr/modeles/glm-5
  • <h1> GLM-5:如何本地运行指南
    • <h3> ⚙️ 使用指南 [#shi-yong-zhi-nan]
    • <h3> 推荐设置 [#tui-jian-she-zhi]
    • <h2> 运行 GLM-5 教程: [#yun-xing-glm5-jiao-cheng]
      • <h3> 🦙 Llama-server 服务与 OpenAI 的 completion 库 [#llamaserver-fu-wu-yu-openai-de-completion-ku]
      • <h3> 💻 vLLM 部署 [#vllm-bu-shu]
      • <h3> 🔨使用 GLM 5 的工具调用 [#shi-yong-glm-5-de-gong-ju-diao-yong]
      • <h3> 📊 基准测试 [#ji-zhun-ce-shi]
82 /docs/zh/mo-xing/glm-5
  • <h1> GLM-5: ローカル実行ガイド
    • <h3> ⚙️ 使用ガイド [#gaido]
    • <h3> 推奨設定 [#tui-jiang-she-ding]
    • <h2> GLM-5チュートリアルを実行する: [#glm-5chtoriaruwosuru]
      • <h3> 🦙 Llama-serverのサービングとOpenAIのcompletionライブラリ [#llama-servernosbingutoopenainocompletionraiburari]
      • <h3> 💻 vLLMデプロイメント [#vllmdepuroimento]
      • <h3> 🔨GLM 5によるツールコーリング [#glm-5niyorutsrukringu]
      • <h3> 📊 ベンチマーク [#benchimku]
82 /docs/jp/moderu/glm-5
  • <h1> GLM-5: Anleitung zum lokalen Ausführen
    • <h3> ⚙️ Gebrauchsanleitung [#gebrauchsanleitung]
    • <h3> Empfohlene Einstellungen [#empfohlene-einstellungen]
    • <h2> Führen Sie GLM-5 Tutorials aus: [#fuhren-sie-glm-5-tutorials-aus]
      • <h3> 🦙 Llama-Server Bereitstellung & OpenAIs Completion-Bibliothek [#llama-server-bereitstellung-and-openais-completion-bibliothek]
      • <h3> 💻 vLLM-Bereitstellung [#vllm-bereitstellung]
      • <h3> 🔨Tool-Aufrufe mit GLM 5 [#tool-aufrufe-mit-glm-5]
      • <h3> 📊 Benchmarks [#benchmarks]
82 /docs/de/modelle/glm-5
  • <h1> Unslothを使ったAMD GPUでのLLMファインチューニングガイド
    • <h3> 🔢 AMD GPUでの強化学習 [#amd-gpudeno]
    • <h3> 📚AMDの無料ワンクリックノートブック [#amdnowankurikkuntobukku]
32 /docs/jp/meru/install/amd
  • <h1> Anleitung zum Fine-Tuning von LLMs auf AMD-GPUs mit Unsloth
    • <h3> 🔢 Verstärkungslernen auf AMD-GPUs [#verstarkungslernen-auf-amd-gpus]
    • <h3> 📚AMD Free One-Click-Notebooks [#amd-free-one-click-notebooks]
32 /docs/de/los-gehts/install/amd
  • <h1> Guide pour fine-tuner des LLM sur les GPU AMD avec Unsloth
    • <h3> 🔢 Apprentissage par renforcement sur GPU AMD [#apprentissage-par-renforcement-sur-gpu-amd]
    • <h3> 📚Notebooks AMD gratuits en un clic [#notebooks-amd-gratuits-en-un-clic]
32 /docs/fr/commencer/install/amd
  • <h1> 使用 Unsloth 在 AMD GPU 上微调 LLM 指南
    • <h3> 🔢 在 AMD GPU 上的强化学习 [#zai-amd-gpu-shang-de-qiang-hua-xue-xi]
    • <h3> 📚AMD 免费一键笔记本 [#amd-mian-fei-yi-jian-bi-ji-ben]
32 /docs/zh/kai-shi-shi-yong/install/amd
  • <h1> vLLM Engine Arguments
    • <h3> 🎉Float8 Quantization [#float8-quantization]
    • <h3> 🍧LoRA Hot Swapping / Dynamic LoRAs [#lora-hot-swapping-dynamic-loras]
32 /docs/basics/inference-and-deployment/vllm-guide/vllm-engine-arguments
  • <h1> 📱Comment exécuter et déployer des LLM sur votre téléphone iOS ou Android
    • <h3> 🦥 Entraînement de votre modèle [#entrainement-de-votre-modele]
    • <h3> 🏁 Déploiement après l'entraînement [#deploiement-apres-lentrainement]
    • <h2> Déploiement iOS [#deploiement-ios]
      • <h3> Configuration de l'environnement de développement macOS [#configuration-de-lenvironnement-de-developpement-macos]
      • <h3> Configuration du compte développeur Apple [#configuration-du-compte-developpeur-apple]
      • <h3> Configurer l'application de démonstration ExecuTorch [#configurer-lapplication-de-demonstration-executorch]
      • <h3> Déploiement sur le simulateur [#deploiement-sur-le-simulateur]
      • <h3> Déploiement sur votre iPhone physique [#deploiement-sur-votre-iphone-physique]
    • <h2> Déploiement Android [#deploiement-android]
      • <h3> Prérequis [#prerequis]
      • <h3> Étape 1 : Installer Android SDK & NDK [#etape-1-installer-android-sdk-and-ndk]
      • <h3> Étape 2 : Configurer les variables d'environnement [#etape-2-configurer-les-variables-denvironnement]
      • <h3> Étape 3 : Installer les composants du SDK [#etape-3-installer-les-composants-du-sdk]
      • <h3> Étape 4 : Récupérer le code [#etape-4-recuperer-le-code]
      • <h3> Étape 5 : Corriger les problèmes de compilation courants [#etape-5-corriger-les-problemes-de-compilation-courants]
      • <h3> Cette étape compile l'app et les bibliothèques natives. [#cette-etape-compile-lapp-et-les-bibliotheques-natives]
      • <h3> Vous avez deux options pour installer l'app. [#vous-avez-deux-options-pour-installer-lapp]
      • <h3> L'application a besoin du modèle .pte et des fichiers tokenizer. [#lapplication-a-besoin-du-modele-.pte-et-des-fichiers-tokenizer]
      • <h3> La compilation échoue ? Vérifiez java -version. Elle DOIT être 17. [#la-compilation-echoue-verifiez-java-version.-elle-doit-etre-17]
      • <h3> Actuellement, [#actuellement]
      • <h3> 📱alimente des expériences ML sur appareil pour des milliards de personnes [#docs-internal-guid-7d7d5aee-7fff-f138-468c-c35853fee9ca]
    • <h2> Tous les modèles denses Qwen 3 ( [#tous-les-modeles-denses-qwen-3]
232 /docs/fr/notions-de-base/inference-and-deployment/deploy-llms-phone
  • <h1> 📱Wie man LLMs auf deinem iOS- oder Android-Smartphone ausführt und bereitstellt
    • <h3> 🦥 Ihr Modell trainieren [#ihr-modell-trainieren]
    • <h3> 🏁 Bereitstellung nach dem Training [#bereitstellung-nach-dem-training]
    • <h2> iOS-Bereitstellung [#ios-bereitstellung]
      • <h3> Einrichtung der macOS-Entwicklungsumgebung [#einrichtung-der-macos-entwicklungsumgebung]
      • <h3> Apple-Entwicklerkonto einrichten [#apple-entwicklerkonto-einrichten]
      • <h3> Einrichten der ExecuTorch-Demo-App [#einrichten-der-executorch-demo-app]
      • <h3> Bereitstellung im Simulator [#bereitstellung-im-simulator]
      • <h3> Bereitstellung auf Ihrem physischen iPhone [#bereitstellung-auf-ihrem-physischen-iphone]
    • <h2> Android-Bereitstellung [#android-bereitstellung]
      • <h3> Anforderungen [#anforderungen]
      • <h3> Schritt 1: Android SDK & NDK installieren [#schritt-1-android-sdk-and-ndk-installieren]
      • <h3> Schritt 2: Umgebungsvariablen konfigurieren [#schritt-2-umgebungsvariablen-konfigurieren]
      • <h3> Schritt 3: SDK-Komponenten installieren [#schritt-3-sdk-komponenten-installieren]
      • <h3> Schritt 4: Code holen [#schritt-4-code-holen]
      • <h3> Schritt 5: Häufige Kompilationsprobleme beheben [#schritt-5-haufige-kompilationsprobleme-beheben]
      • <h3> Schritt 6: APK bauen [#schritt-6-apk-bauen]
      • <h3> Schritt 7: Auf Ihrem Android-Gerät installieren [#schritt-7-auf-ihrem-android-gerat-installieren]
      • <h3> Schritt 8: Modelldateien übertragen [#schritt-8-modelldateien-ubertragen]
      • <h3> Fehlerbehebung [#fehlerbehebung]
      • <h3> Modell auf Ihr Telefon übertragen [#modell-auf-ihr-telefon-ubertragen]
      • <h3> 📱ExecuTorch betreibt Milliarden [#docs-internal-guid-7d7d5aee-7fff-f138-468c-c35853fee9ca]
    • <h2> Weitere Modellunterstützung [#weitere-modellunterstutzung]
232 /docs/de/grundlagen/inference-and-deployment/deploy-llms-phone
You have reached the hard limit of 200 rows, a protection against very large output and memory exhaustion. You can change this with --rows-limit.
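Many of the page outlines above jump straight from an <h1> to <h3> subsections with no intervening <h2>, which is the pattern behind the "530 page(s) with skipped heading levels" warning in the summary. A minimal sketch of such a check, assuming heading levels have already been extracted in document order (the function name is ours, not the crawler's):

```python
def has_skipped_heading_level(levels):
    """Return True if any heading is more than one level deeper than
    the heading before it, e.g. an <h1> followed directly by an <h3>."""
    prev = None
    for level in levels:
        if prev is not None and level > prev + 1:
            return True
        prev = level
    return False

# The DGX Spark pages above render as h1, h3, h3, h3 -- a skipped h2:
print(has_skipped_heading_level([1, 3, 3, 3]))  # True
```

A real audit would extract `levels` from the parsed DOM (one integer per h1–h6 tag, in source order) and count a page once if any jump is found.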

404 URLs

No 404 URLs found.

Redirected URLs

Found 28 row(s).
Status | Redirected URL 🔼 | Target URL | Found at URL
308 | /docs/ | /docs | /docs/get-started/unsloth-model-catalog
307 | /docs/basics | /docs/basics/inference-and-deployment | /docs/basics/finetuning-from-last-checkpoint
307 | /docs/blog | /docs/blog/3x-faster-training-packing | /docs/blog/3x-faster-training-packing
307 | /docs/de/blog | /docs/de/blog/3x-faster-training-packing | /docs/de/blog/3x-faster-training-packing
307 | /docs/de/grundlagen | /docs/de/grundlagen/inference-and-deployment | /docs/de/grundlagen/finetuning-from-last-checkpoint
307 | /docs/de/los-gehts | /docs/de | /docs/de
307 | /docs/de/modelle | /docs/de/modelle/qwen3.5 | /docs/de/modelle/gpt-oss-how-to-run-and-fine-tune
307 | /docs/de/neu | /docs/de/neu/studio | /docs/de/neu/studio/chat
307 | /docs/fr/blog | /docs/fr/blog/3x-faster-training-packing | /docs/fr/blog/3x-faster-training-packing
307 | /docs/fr/commencer | /docs/fr | /docs/fr
307 | /docs/fr/modeles | /docs/fr/modeles/qwen3.5 | /docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune
307 | /docs/fr/notions-de-base | /docs/fr/notions-de-base/inference-and-deployment | /docs/fr/notions-de-base/finetuning-from-last-checkpoint
307 | /docs/fr/nouveau | /docs/fr/nouveau/studio | /docs/fr/nouveau/studio/chat
307 | /docs/get-started | /docs | /docs
307 | /docs/jp/burogu | /docs/jp/burogu/3x-faster-training-packing | /docs/jp/burogu/3x-faster-training-packing
307 | /docs/jp/ji-ben | /docs/jp/ji-ben/inference-and-deployment | /docs/jp/ji-ben/finetuning-from-last-checkpoint
307 | /docs/jp/meru | /docs/jp | /docs/jp
307 | /docs/jp/moderu | /docs/jp/moderu/qwen3.5 | /docs/jp/moderu/gpt-oss-how-to-run-and-fine-tune
307 | /docs/jp/xin-ji-neng | /docs/jp/xin-ji-neng/studio | /docs/jp/xin-ji-neng/studio/chat
307 | /docs/models | /docs/models/qwen3.5 | /docs/models/gpt-oss-how-to-run-and-fine-tune
307 | /docs/models/qwen3-coder-how-to-run-locally | /docs/models/tutorials/qwen3-coder-how-to-run-locally | /docs/basics/tool-calling-guide-for-local-llms
307 | /docs/models/tutorials/nemotron-3 | /docs/models/nemotron-3 | /docs/basics/tool-calling-guide-for-local-llms
307 | /docs/new | /docs/new/studio | /docs/new/studio/chat
307 | /docs/zh/bo-ke | /docs/zh/bo-ke/3x-faster-training-packing | /docs/zh/bo-ke/3x-faster-training-packing
307 | /docs/zh/ji-chu-zhi-shi | /docs/zh/ji-chu-zhi-shi/inference-and-deployment | /docs/zh/ji-chu-zhi-shi/finetuning-from-last-checkpoint
307 | /docs/zh/kai-shi-shi-yong | /docs/zh | /docs/zh
307 | /docs/zh/mo-xing | /docs/zh/mo-xing/qwen3.5 | /docs/zh/mo-xing/gpt-oss-how-to-run-and-fine-tune
307 | /docs/zh/xin-zeng | /docs/zh/xin-zeng/studio | /docs/zh/xin-zeng/studio/chat
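Each redirect in this table costs the crawler (and every visitor) an extra request, and some targets, such as /docs/get-started pointing at /docs, are themselves landing pages for other redirected sections. A sketch of how a crawler might resolve such a chain against a path-to-path mapping, with a hop limit as loop protection (the function name and the two sample rows are illustrative):

```python
def resolve_redirects(url, redirects, max_hops=10):
    """Follow a path->path redirect mapping until a non-redirecting
    URL is reached, failing loudly on loops or overly long chains."""
    seen = set()
    while url in redirects:
        if url in seen or len(seen) >= max_hops:
            raise ValueError(f"redirect loop or chain too long at {url!r}")
        seen.add(url)
        url = redirects[url]
    return url

# Two rows from the table above (status codes omitted):
redirects = {
    "/docs/get-started": "/docs",
    "/docs/new": "/docs/new/studio",
}
print(resolve_redirects("/docs/get-started", redirects))  # /docs
```

Fixing the internal links that point at the redirected URLs (the "Found at URL" column) would remove most of the 28 hops in one pass.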

Skipped URLs Summary

Found 66 row(s).
Reason | Domain | Unique URLs 🔽
Not allowed host | huggingface.co | 592
Not allowed host | github.com | 68
Not allowed host | x.com | 18
Not allowed host | colab.research.google.com | 15
Not allowed host | docs.unsloth.ai | 12
Not allowed host | www.kaggle.com | 8
Not allowed host | www.linkedin.com | 7
Not allowed host | lmstudio.ai | 6
Not allowed host | www.gitbook.com | 5
Not allowed host | en.wikipedia.org | 4
Not allowed host | www.reddit.com | 4
Not allowed host | docs.vllm.ai | 4
Not allowed host | docs.docker.com | 3
Not allowed host | open-2v.gitbook.com | 3
Not allowed host | hub.docker.com | 3
Not allowed host | discord.gg | 2
Not allowed host | www.anaconda.com | 2
Not allowed host | www.nvidia.com | 2
Not allowed host | pytorch.org | 2
Not allowed host | docs.pytorch.org | 2
Not allowed host | discord.com | 2
Not allowed host | developer.nvidia.com | 2
Not allowed host | developer.apple.com | 2
Not allowed host | ai.google.dev | 1
Not allowed host | brew.sh | 1
Not allowed host | localhost | 1
Not allowed host | aider.chat | 1
Not allowed host | docs.nvidia.com | 1
Not allowed host | gitbook.com | 1
Not allowed host | www.comfy.org | 1
Not allowed host | platform.openai.com | 1
Not allowed host | www.xda-developers.com | 1
Not allowed host | news.ycombinator.com | 1
Not allowed host | simonwillison.net | 1
Not allowed host | www.youtube.com | 1
Not allowed host | youtu.be | 1
Not allowed host | code.claude.com | 1
Not allowed host | ai.meta.com | 1
Not allowed host | play.google.com | 1
Not allowed host | code.visualstudio.com | 1
Not allowed host | docs.sglang.ai | 1
Not allowed host | triton-lang.org | 1
Not allowed host | docs.openwebui.com | 1
Not allowed host | openai.com | 1
Not allowed host | support.apple.com | 1
Not allowed host | linkedin.com | 1
Not allowed host | api-docs.deepseek.com | 1
Not allowed host | yingru.notion.site | 1
Not allowed host | docs.oracle.com | 1
Not allowed host | rlhfbook.com | 1
Not allowed host | twitter.com | 1
Not allowed host | marketplace.visualstudio.com | 1
Not allowed host | www.google.com | 1
Not allowed host | docs.python.org | 1
Not allowed host | raw.githubusercontent.com | 1
Not allowed host | app.gitbook.com | 1
Not allowed host | www.llama.com | 1
Not allowed host | engineering.fb.com | 1
Not allowed host | blogs.nvidia.com | 1
Not allowed host | lu.ma | 1
Not allowed host | developers.openai.com | 1
Not allowed host | www.evanmiller.org | 1
Not allowed host | learn.microsoft.com | 1
Not allowed host | qwenlm.github.io | 1
Not allowed host | docs.anaconda.com | 1
Not allowed host | reddit.com | 1
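The per-domain counts in this table sum to the 811 skipped external URLs reported in the summary. A crawler builds such a table by bucketing every off-site link by hostname; a minimal sketch, assuming unsloth.ai is the only allowed host in this crawl (function and variable names are ours, not the tool's):

```python
from collections import Counter
from urllib.parse import urlparse

ALLOWED_HOSTS = {"unsloth.ai"}

def skipped_by_domain(urls):
    """Count unique skipped URLs per domain, like the table above."""
    unique = set(urls)  # the table counts unique URLs, so dedupe first
    return Counter(
        urlparse(u).hostname
        for u in unique
        if urlparse(u).hostname not in ALLOWED_HOSTS
    )

counts = skipped_by_domain([
    "https://huggingface.co/unsloth",
    "https://huggingface.co/unsloth",        # duplicate, counted once
    "https://github.com/unslothai/unsloth",
    "https://unsloth.ai/blog",               # allowed host, not skipped
])
print(counts["huggingface.co"])  # 1
```

Note that docs.unsloth.ai appears in the table, so the subdomain was evidently not in the crawl's allow-list; adding it would recover 12 internal links.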

Skipped URLs

Found 200 row(s).
Reason | Skipped URL 🔼 | Source | Found at URL
Not allowed host | http://localhost:8888/ | <a href> | /docs/get-started/install/windows-installation
Not allowed host | http://twitter.com/UnslothAI | <a href> | /docs
Not allowed host | https://ai.google.dev/gemma/docs/gemma-3n | <a href> | /docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
Not allowed host | https://ai.meta.com/blog/executorch-reality-labs-on-device-ai/ | <a href> | /docs/basics/inference-and-deployment/deploy-llms-phone
Not allowed host | https://aider.chat/docs/leaderboards/ | <a href> | /docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot
Not allowed host | https://api-docs.deepseek.com/quick_start/parameter_settings | <a href> | /docs/models/tutorials/deepseek-v3-0324-how-to-run-locally
Not allowed host | https://app.gitbook.com/o/HpyELzcNe0topgVLGCZY/s/xhOjnexMCB3dmuQFQ2…evstral-how-to-run-and-fine-tune | <a href> | /docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot
Not allowed host | https://blogs.nvidia.com/blog/rtx-ai-garage-fine-tuning-unsloth-dgx-spark/ | <a href> | /docs/models/nemotron-3
Not allowed host | https://brew.sh/ | <a href> | /docs/get-started/fine-tuning-for-beginners/unsloth-requirements
Not allowed host | https://code.claude.com/docs/en/vs-code | <a href> | /docs/basics/claude-code
Not allowed host | https://code.visualstudio.com/ | <a href> | /docs/get-started/install/vs-code
Not allowed host | https://colab.research.google.com/drive/12Uw8y5beLzPtx11mCWCYyh9Z_PEHHdId?usp=sharing | <a href> | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
Not allowed host | https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing | <a href> | /docs/get-started/unsloth-notebooks
Not allowed host | https://colab.research.google.com/drive/18CssBY5C0mStnLvu2Hlt4aFLoPugRG0K?usp=sharing | <a href> | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
Not allowed host | https://colab.research.google.com/drive/1GwTlaP5CLsW-BcE1LqZWkz6S8VTWYdJ2?usp=sharing | <a href> | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
Not allowed host | https://colab.research.google.com/drive/1Hs5oQDovOay4mFA6Y9lQhVJ8TnbFLFh2?usp=sharing | <a href> | /docs/get-started/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto
Not allowed host | https://colab.research.google.com/drive/1IuSUNzEBTiURK-vbTQuRDuUl0Ya2pz2t?usp=sharing | <a href> | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
Not allowed host | https://colab.research.google.com/drive/1MRgGtLWuZX4ypSfGguFgC-IblTvO2ivM?usp=sharing | <a href> | /docs/get-started/unsloth-notebooks
Not allowed host | https://colab.research.google.com/drive/1RY7HwpZ0luJT70OyLJ6zXKZQ2COdT9QJ?usp=sharing | <a href> | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
Not allowed host | https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing | <a href> | /docs/basics/chat-templates
Not allowed host | https://colab.research.google.com/drive/1WZDi7APtQ9VsvOrQSSC5DDtxq159j8iZ?usp=sharing | <a href> | /docs/basics/chat-templates
Not allowed host | https://colab.research.google.com/drive/1aqlNQi7MMJbynFDyOQteD2t0yVfjb9Zh?usp=sharing | <a href> | /docs/basics/inference-and-deployment/unsloth-inference
Not allowed host | https://colab.research.google.com/drive/1njCCbE1YVal9xC83hjdo2hiGItpY_D6t?usp=sharing | <a href> | /docs/get-started/unsloth-notebooks
Not allowed host | https://colab.research.google.com/drive/1q0TOUychygfreI2wKpg51sqnRhs5cYnX?usp=sharing | <a href> | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
Not allowed host | https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb | <a href> | /docs/get-started/fine-tuning-llms-guide/datasets-guide
Not allowed host | https://colab.research.google.com/github/unslothai/notebooks/blob/m…n/nb/Llama3.2_(11B)-Vision.ipynb | <a href> | /docs/basics/vision-fine-tuning
Not allowed host | https://developer.apple.com/documentation/safari-developer-tools/adding-additional-simulators | <a href> | /docs/basics/inference-and-deployment/deploy-llms-phone
Not allowed host | https://developer.apple.com/documentation/xcode/downloading-and-ins…ling-additional-xcode-components | <a href> | /docs/basics/inference-and-deployment/deploy-llms-phone
Not allowed host | https://developer.nvidia.com/blog/train-an-llm-on-an-nvidia-blackwe…sktop-with-unsloth-and-scale-it/ | <a href> | /docs/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
Not allowed host | https://developer.nvidia.com/cuda-gpus | <a href> | /docs/get-started/fine-tuning-for-beginners/unsloth-requirements
Not allowed host | https://developers.openai.com/codex/windows/ | <a href> | /docs/basics/codex
Not allowed host | https://discord.com/channels/1131200896827654144/1408293692074360914 | <a href> | /docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot
Not allowed host | https://discord.com/invite/unsloth | <a href> | /docs
Not allowed host | https://discord.gg/ollama | <a href> | /docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
Not allowed host | https://discord.gg/unsloth | <a href> | /docs
Not allowed host | https://docs.anaconda.com/miniconda/ | <a href> | /docs/get-started/install/conda-install
Not allowed host | https://docs.docker.com/ai/model-runner/get-started/ | <a href> | /docs/models/tutorials/how-to-run-llms-with-docker
Not allowed host | https://docs.docker.com/desktop/ | <a href> | /docs/get-started/install/windows-installation
Not allowed host | https://docs.docker.com/engine/install/ | <a href> | /docs/get-started/install/windows-installation
Not allowed host | https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html | <a href> | /docs/get-started/install/windows-installation
Not allowed host | https://docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/ | <a href> | /docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed host | https://docs.oracle.com/en/java/javase/25/install/overview-jdk-installation.html | <a href> | /docs/basics/inference-and-deployment/deploy-llms-phone
Not allowed host | https://docs.python.org/3/tutorial/venv.html | <a href> | /docs/basics/multi-gpu-training-with-unsloth/ddp
Not allowed host | https://docs.pytorch.org/docs/main/generated/torch.nn.functional.grouped_mm.html | <a href> | /docs/new/faster-moe
Not allowed host | https://docs.pytorch.org/docs/stable/elastic/run.html | <a href> | /docs/basics/multi-gpu-training-with-unsloth/ddp
Not allowed host | https://docs.sglang.ai/advanced_features/server_arguments.html | <a href> | /docs/basics/inference-and-deployment/sglang-guide
Not allowed host | https://docs.unsloth.ai/ai-engineers-2025 | <a href> | /docs/get-started/reinforcement-learning-rl-guide
Not allowed host | https://docs.unsloth.ai/basics/chat-templates | <a href> | /docs/get-started/fine-tuning-llms-guide/datasets-guide
Not allowed host | https://docs.unsloth.ai/basics/deepseek-r1-0528-how-to-run-locally | <a href> | /docs/models/tutorials/deepseek-r1-how-to-run-locally
Not allowed host | https://docs.unsloth.ai/basics/gpt-oss | <a href> | /docs
Not allowed host | https://docs.unsloth.ai/basics/reinforcement-learning-guide/tutoria…ur-own-reasoning-model-with-grpo | <a href> | /docs/get-started/reinforcement-learning-rl-guide
Not allowed host | https://docs.unsloth.ai/basics/troubleshooting-and-faqs | <a href> | /docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed host | https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama | <a href> | /docs/get-started/fine-tuning-llms-guide
Not allowed host | https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs | <a href> | /docs/get-started/unsloth-model-catalog
Not allowed hosthttps://docs.unsloth.ai/get-started/install-and-update/windows-installation<a href>/docs/get-started/fine-tuning-for-beginners/unsloth-requirements
Not allowed hosthttps://docs.unsloth.ai/get-started/unsloth-notebooks<a href>/docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
Not allowed hosthttps://docs.unsloth.ai/models/nemotron-3<a href>/docs/models/nemotron-3
Not allowed hosthttps://docs.unsloth.ai/new/gpt-oss-how-to-run-and-fine-tune<a href>/docs
Not allowed hosthttps://docs.vllm.ai/en/latest/features/quantization/torchao.html<a href>/docs/models/tutorials/magistral-how-to-run-and-fine-tune
Not allowed hosthttps://docs.vllm.ai/en/latest/features/sleep_mode.html<a href>/docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
Not allowed hosthttps://docs.vllm.ai/en/latest/getting_started/installation/gpu/<a href>/docs/get-started/install/intel
Not allowed hosthttps://docs.vllm.ai/en/stable/getting_started/installation<a href>/docs/basics/inference-and-deployment/vllm-guide
Not allowed hosthttps://en.wikipedia.org/wiki/Bfloat16_floating-point_format<a href>/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune
Not allowed hosthttps://en.wikipedia.org/wiki/Proximal_policy_optimization<a href>/docs/get-started/reinforcement-learning-rl-guide
Not allowed hosthttps://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback<a href>/docs/get-started/reinforcement-learning-rl-guide
Not allowed hosthttps://en.wikipedia.org/wiki/Reward_hacking<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning
Not allowed hosthttps://engineering.fb.com/2025/07/28/android/executorch-on-device-ml-meta-family-of-apps/<a href>/docs/basics/inference-and-deployment/deploy-llms-phone
Not allowed hosthttps://gitbook.com/docs/published-documentation/custom-domain/configure-dns<a href>/docs
Not allowed hosthttps://github.com/Comfy-Org/ComfyUI<a href>/docs/models/tutorials/qwen-image-2512
Not allowed hosthttps://github.com/Dao-AILab/flash-attention/issues/1797<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning
Not allowed hosthttps://github.com/Mintplex-Labs/anything-llm<a href>/docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
Not allowed hosthttps://github.com/NVIDIA-NeMo/DataDesigner<a href>/docs/new/studio
Not allowed hosthttps://github.com/NVIDIA-NeMo/Gym/pull/492<a href>/docs/models/nemotron-3
Not allowed hosthttps://github.com/QwenLM/Qwen2.5-VL/issues/759<a href>/docs/get-started/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl
Not allowed hosthttps://github.com/QwenLM/Qwen3-VL/tree/main?tab=readme-ov-file<a href>/docs/models/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/Vali-98/ChatterUI<a href>/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/canopyai/Orpheus-TTS<a href>/docs/basics/text-to-speech-tts-fine-tuning
Not allowed hosthttps://github.com/comfyanonymous/ComfyUI<a href>/docs/blog/comfyui
Not allowed hosthttps://github.com/deepspeedai/DeepSpeed/pull/7664<a href>/docs/blog/500k-context-length-fine-tuning
Not allowed hosthttps://github.com/docker/model-runner<a href>/docs/models/tutorials/how-to-run-llms-with-docker
Not allowed hosthttps://github.com/gabriellarson<a href>/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed hosthttps://github.com/ggml-org/llama.cpp/blob/12ee1763a6f6130ce820a366d220bbadff54b818/common/chat.cpp<a href>/docs/basics/inference-and-deployment/llama-server-and-openai-endpoint
Not allowed hosthttps://github.com/ggml-org/llama.cpp/blob/55c509daf51d25bfaee9c8b8…bff103d4473b/src/llama-vocab.cpp<a href>/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed hosthttps://github.com/ggml-org/llama.cpp/issues/14642<a href>/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed hosthttps://github.com/ggml-org/llama.cpp/issues/18323<a href>/docs/basics/inference-and-deployment/llama-server-and-openai-endpoint
Not allowed hosthttps://github.com/ggml-org/llama.cpp/pull/12889<a href>/docs
Not allowed hosthttps://github.com/ggml-org/llama.cpp/pull/14363<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/ggml-org/llama.cpp/pull/14450<a href>/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/ggml-org/llama.cpp/pull/14654<a href>/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed hosthttps://github.com/ggml-org/llama.cpp/pull/15539<a href>/docs/models/tutorials/grok-2
Not allowed hosthttps://github.com/ggml-org/llama.cpp/pull/17945<a href>/docs/models/tutorials/devstral-2
Not allowed hosthttps://github.com/ggml-org/llama.cpp/tree/master/examples/parallel<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/googlecolab/colab-vscode/issues/200<a href>/docs/get-started/install/vs-code
Not allowed hosthttps://github.com/googlecolab/colabtools/issues/3409<a href>/docs/basics/inference-and-deployment/unsloth-inference
Not allowed hosthttps://github.com/huggingface/sentence-transformers<a href>/docs/new/embedding-finetuning
Not allowed hosthttps://github.com/huggingface/transformers/blob/v4.57.6/src/transf…/qwen3_moe/modeling_qwen3_moe.py<a href>/docs/new/faster-moe
Not allowed hosthttps://github.com/huggingface/transformers/blob/v5.0.0/src/transfo…/qwen3_moe/modeling_qwen3_moe.py<a href>/docs/new/faster-moe
Not allowed hosthttps://github.com/huggingface/transformers/pull/40197<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training
Not allowed hosthttps://github.com/meta-pytorch/attention-gym<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training
Not allowed hosthttps://github.com/meta-pytorch/attention-gym/issues/15<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning
Not allowed hosthttps://github.com/mxyng<a href>/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/ollama/ollama<a href>/docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
Not allowed hosthttps://github.com/ollama/ollama/blob/main/docs/faq.md<a href>/docs/models/tutorials/devstral-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/openai/codex<a href>/docs/basics/codex
Not allowed hosthttps://github.com/openai/harmony<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/pytorch/ao<a href>/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
Not allowed hosthttps://github.com/pytorch/executorch<a href>/docs/blog/quantization-aware-training-qat
Not allowed hosthttps://github.com/pytorch/executorch/<a href>/docs/basics/inference-and-deployment/deploy-llms-phone
Not allowed hosthttps://github.com/pytorch/executorch/tree/main/examples/models/gemma3<a href>/docs/models/tutorials/functiongemma
Not allowed hosthttps://github.com/pytorch/pytorch/blob/main/RELEASE.md<a href>/docs/get-started/install/windows-installation
Not allowed hosthttps://github.com/sgl-project/sglang<a href>/docs/basics/inference-and-deployment/sglang-guide
Not allowed hosthttps://github.com/tatsu-lab/stanford_alpaca<a href>/docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
Not allowed hosthttps://github.com/triton-lang/triton/blob/main/python/triton_kerne…atmul_ogs_details/_matmul_ogs.py<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune
Not allowed hosthttps://github.com/unslothai/docs/blob/main/basics/unsloth-dynamic-2.0-ggufs<a href>/docs/models/tutorials
Not allowed hosthttps://github.com/unslothai/notebooks<a href>/docs/get-started/unsloth-notebooks
Not allowed hosthttps://github.com/unslothai/notebooks/<a href>/docs/get-started/unsloth-notebooks
Not allowed hosthttps://github.com/unslothai/notebooks/tree/main/python_scripts<a href>/docs/basics/multi-gpu-training-with-unsloth
Not allowed hosthttps://github.com/unslothai/unsloth<a href>/docs
Not allowed hosthttps://github.com/unslothai/unsloth-zoo/blob/e705f7cb50aa3470a0b6e…3/unsloth_zoo/rl_replacements.py<a href>/docs/get-started/reinforcement-learning-rl-guide/advanced-rl-documentation
Not allowed hosthttps://github.com/unslothai/unsloth/issues<a href>/docs/new/embedding-finetuning
Not allowed hosthttps://github.com/unslothai/unsloth/issues/2435<a href>/docs/basics/multi-gpu-training-with-unsloth
Not allowed hosthttps://github.com/unslothai/unsloth/issues/3035<a href>/docs/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide
Not allowed hosthttps://github.com/unslothai/unsloth/issues/4444<a href>/docs/new/studio
Not allowed hosthttps://github.com/unslothai/unsloth/pull/238<a href>/docs/blog/3x-faster-training-packing
Not allowed hosthttps://github.com/unslothai/unsloth/pull/2752<a href>/docs/get-started/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl
Not allowed hosthttps://github.com/unslothai/unsloth/pull/3440<a href>/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
Not allowed hosthttps://github.com/unslothai/unsloth/pull/3719<a href>/docs/new/embedding-finetuning
Not allowed hosthttps://github.com/unslothai/unsloth/releases<a href>/docs/get-started/install/updating
Not allowed hosthttps://github.com/unslothai/unsloth/releases/tag/February-2026<a href>/docs/new/faster-moe
Not allowed hosthttps://github.com/unslothai/unsloth/wiki<a href>/docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
Not allowed hosthttps://github.com/unslothai/unsloth?tab=AGPL-3.0-2-ov-file<a href>/docs/new/studio
Not allowed hosthttps://github.com/unslothai/unsloth?tab=Apache-2.0-1-ov-file<a href>/docs/new/studio
Not allowed hosthttps://github.com/vllm-project/llm-compressor<a href>/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
Not allowed hosthttps://github.com/vllm-project/vllm<a href>/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
Not allowed hosthttps://github.com/vllm-project/vllm/<a href>/docs/get-started/reinforcement-learning-rl-guide
Not allowed hosthttps://github.com/vllm-project/vllm/pull/21146<a href>/docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
Not allowed hosthttps://hub.docker.com/r/ai<a href>/docs/models/tutorials/how-to-run-llms-with-docker
Not allowed hosthttps://hub.docker.com/r/ai/gpt-oss<a href>/docs/models/tutorials/how-to-run-llms-with-docker
Not allowed hosthttps://hub.docker.com/r/unsloth/unsloth<a href>/docs
Not allowed hosthttps://huggingface.co/Orenguteng<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-oss
Not allowed hosthttps://huggingface.co/Qwen/QwQ-32B<a href>/docs/models/tutorials/qwq-32b-how-to-run-effectively
Not allowed hosthttps://huggingface.co/Qwen/Qwen3-32B/blob/main/config.json<a href>/docs/new/faster-moe
Not allowed hosthttps://huggingface.co/Qwen/Qwen3-8B<a href>/docs/basics/multi-gpu-training-with-unsloth/ddp
Not allowed hosthttps://huggingface.co/WBB2500<a href>/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
Not allowed hosthttps://huggingface.co/blog/unsloth-trl<a href>/docs/basics/unsloth-benchmarks
Not allowed hosthttps://huggingface.co/collections/unsloth/embedding-models<a href>/docs/new/embedding-finetuning
Not allowed hosthttps://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b<a href>/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune
Not allowed hosthttps://huggingface.co/collections/unsloth/gemma-3n-685d3874830e49e1c93f9339<a href>/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
Not allowed hosthttps://huggingface.co/collections/unsloth/ministral-3<a href>/docs/models/tutorials/ministral-3
Not allowed hosthttps://huggingface.co/collections/unsloth/text-to-speech-tts-models-68007ab12522e96be1e02155<a href>/docs/basics/text-to-speech-tts-fine-tuning
Not allowed hosthttps://huggingface.co/collections/unsloth/unsloth-diffusion-ggufs<a href>/docs/models/tutorials/qwen-image-2512
Not allowed hosthttps://huggingface.co/datasets/AI4Math/MathVista<a href>/docs/get-started/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl
Not allowed hosthttps://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-oss
Not allowed hosthttps://huggingface.co/datasets/Jinsaryko/Elise<a href>/docs/basics/text-to-speech-tts-fine-tuning
Not allowed hosthttps://huggingface.co/datasets/MrDragonFox/Elise<a href>/docs/basics/text-to-speech-tts-fine-tuning
Not allowed hosthttps://huggingface.co/datasets/openai/gsm8k<a href>/docs/get-started/reinforcement-learning-rl-guide/tutorial-train-your-own-reasoning-model-with-grpo
Not allowed hosthttps://huggingface.co/datasets/vicgalle/alpaca-gpt4<a href>/docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
Not allowed hosthttps://huggingface.co/datasets/vicgalle/alpaca-gpt4.<a href>/docs/get-started/fine-tuning-llms-guide/datasets-guide
Not allowed hosthttps://huggingface.co/datasets/yahma/alpaca-cleaned<a href>/docs/basics/multi-gpu-training-with-unsloth/ddp
Not allowed hosthttps://huggingface.co/deepseek-ai/DeepSeek-R1-0528<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/deepseek-ai/DeepSeek-V3-0324<a href>/docs/models/tutorials/deepseek-v3-0324-how-to-run-locally
Not allowed hosthttps://huggingface.co/docs/trl/main/en/dpo_trainer<a href>/docs/get-started/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto
Not allowed hosthttps://huggingface.co/docs/trl/main/en/sft_trainer<a href>/docs/get-started/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto
Not allowed hosthttps://huggingface.co/electroglyph<a href>/docs/new/embedding-finetuning
Not allowed hosthttps://huggingface.co/metascroy/Qwen3-4B-int8-int4-unsloth<a href>/docs/blog/quantization-aware-training-qat
Not allowed hosthttps://huggingface.co/models?library=sentence-transformers<a href>/docs/new/embedding-finetuning
Not allowed hosthttps://huggingface.co/moonshotai/Kimi-K2-Instruct<a href>/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed hosthttps://huggingface.co/moonshotai/Kimi-K2-Instruct/discussions/28<a href>/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed hosthttps://huggingface.co/moonshotai/Kimi-K2-Thinking<a href>/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed hosthttps://huggingface.co/moonshotai/Kimi-K2-Thinking/discussions/12<a href>/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
Not allowed hosthttps://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF<a href>/docs/models/tutorials/devstral-how-to-run-and-fine-tune
Not allowed hosthttps://huggingface.co/openai/gpt-oss-20b/discussions/94/files<a href>/docs/models/gpt-oss-how-to-run-and-fine-tune
Not allowed hosthttps://huggingface.co/pytorch/gemma-3-27b-it-FP8/blob/main/README.md<a href>/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
Not allowed hosthttps://huggingface.co/settings/tokens<a href>/docs/get-started/fine-tuning-llms-guide
Not allowed hosthttps://huggingface.co/strangervisionhf<a href>/docs/models/tutorials/deepseek-ocr-how-to-run-and-fine-tune
Not allowed hosthttps://huggingface.co/unsloth<a href>/docs
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-OCR<a href>/docs/models/tutorials/deepseek-ocr-how-to-run-and-fine-tune
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-OCR-2<a href>/docs/models/tutorials/deepseek-ocr-2
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-BF16<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/IQ4_NL<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/Q4_1<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-IQ1_M<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-IQ1_S<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-IQ2_XXS<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-IQ3_XXS<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-Q2_K_XL<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-Q3_K_XL<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-Q4_K_XL<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-Q5_K_XL<a href>/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B<a href>/docs/get-started/unsloth-model-catalog
Not allowed hosthttps://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF<a href>/docs/get-started/unsloth-model-catalog
Output truncated: the hard limit of 200 rows was reached (a protection against very large output and memory exhaustion). This limit can be changed with --rows-limit.

External URLs

811 external URL(s)
Found 200 row(s).
External URL | Pages | Found on URL (max 5)
http://localhost:8888/ | 1 | /docs/get-started/install/windows-installation
http://twitter.com/UnslothAI | 1 | /docs
https://ai.google.dev/gemma/docs/gemma-3n | 1 | /docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
https://ai.meta.com/blog/executorch-reality-labs-on-device-ai/ | 1 | /docs/basics/inference-and-deployment/deploy-llms-phone
https://aider.chat/docs/leaderboards/ | 1 | /docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot
https://api-docs.deepseek.com/quick_start/parameter_settings | 1 | /docs/models/tutorials/deepseek-v3-0324-how-to-run-locally
https://app.gitbook.com/o/HpyELzcNe0topgVLGCZY/s/xhOjnexMCB3dmuQFQ2…evstral-how-to-run-and-fine-tune | 1 | /docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot
https://blogs.nvidia.com/blog/rtx-ai-garage-fine-tuning-unsloth-dgx-spark/ | 1 | /docs/models/nemotron-3
https://brew.sh/ | 1 | /docs/get-started/fine-tuning-for-beginners/unsloth-requirements
https://code.claude.com/docs/en/vs-code | 1 | /docs/basics/claude-code
https://code.visualstudio.com/ | 1 | /docs/get-started/install/vs-code
https://colab.research.google.com/drive/12Uw8y5beLzPtx11mCWCYyh9Z_PEHHdId?usp=sharing | 1 | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
https://colab.research.google.com/drive/15F1xyn8497_dUbxZP4zWmPZ3PJx1Oymv?usp=sharing | 1 | /docs/get-started/unsloth-notebooks
https://colab.research.google.com/drive/18CssBY5C0mStnLvu2Hlt4aFLoPugRG0K?usp=sharing | 1 | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
https://colab.research.google.com/drive/1GwTlaP5CLsW-BcE1LqZWkz6S8VTWYdJ2?usp=sharing | 1 | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
https://colab.research.google.com/drive/1Hs5oQDovOay4mFA6Y9lQhVJ8TnbFLFh2?usp=sharing | 1 | /docs/get-started/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto
https://colab.research.google.com/drive/1IuSUNzEBTiURK-vbTQuRDuUl0Ya2pz2t?usp=sharing | 1 | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
https://colab.research.google.com/drive/1MRgGtLWuZX4ypSfGguFgC-IblTvO2ivM?usp=sharing | 1 | /docs/get-started/unsloth-notebooks
https://colab.research.google.com/drive/1RY7HwpZ0luJT70OyLJ6zXKZQ2COdT9QJ?usp=sharing | 1 | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS?usp=sharing | 1 | /docs/basics/chat-templates
https://colab.research.google.com/drive/1WZDi7APtQ9VsvOrQSSC5DDtxq159j8iZ?usp=sharing | 1 | /docs/basics/chat-templates
https://colab.research.google.com/drive/1aqlNQi7MMJbynFDyOQteD2t0yVfjb9Zh?usp=sharing | 1 | /docs/basics/inference-and-deployment/unsloth-inference
https://colab.research.google.com/drive/1njCCbE1YVal9xC83hjdo2hiGItpY_D6t?usp=sharing | 1 | /docs/get-started/unsloth-notebooks
https://colab.research.google.com/drive/1q0TOUychygfreI2wKpg51sqnRhs5cYnX?usp=sharing | 1 | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb | 1 | /docs/get-started/fine-tuning-llms-guide/datasets-guide
https://colab.research.google.com/github/unslothai/notebooks/blob/m…n/nb/Llama3.2_(11B)-Vision.ipynb | 1 | /docs/basics/vision-fine-tuning
https://developer.apple.com/documentation/safari-developer-tools/adding-additional-simulators | 1 | /docs/basics/inference-and-deployment/deploy-llms-phone
https://developer.apple.com/documentation/xcode/downloading-and-ins…ling-additional-xcode-components | 1 | /docs/basics/inference-and-deployment/deploy-llms-phone
https://developer.nvidia.com/blog/train-an-llm-on-an-nvidia-blackwe…sktop-with-unsloth-and-scale-it/ | 1 | /docs/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth
https://developer.nvidia.com/cuda-gpus | 1 | /docs/get-started/fine-tuning-for-beginners/unsloth-requirements
https://developers.openai.com/codex/windows/ | 1 | /docs/basics/codex
https://discord.com/channels/1131200896827654144/1408293692074360914 | 1 | /docs/basics/unsloth-dynamic-2.0-ggufs/unsloth-dynamic-ggufs-on-aider-polyglot
https://discord.com/invite/unsloth | 1 | /docs
https://discord.gg/ollama | 1 | /docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
https://discord.gg/unsloth | 1 | /docs
https://docs.anaconda.com/miniconda/ | 1 | /docs/get-started/install/conda-install
https://docs.docker.com/ai/model-runner/get-started/ | 1 | /docs/models/tutorials/how-to-run-llms-with-docker
https://docs.docker.com/desktop/ | 1 | /docs/get-started/install/windows-installation
https://docs.docker.com/engine/install/ | 1 | /docs/get-started/install/windows-installation
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html | 1 | /docs/get-started/install/windows-installation
https://docs.openwebui.com/tutorials/integrations/deepseekr1-dynamic/ | 1 | /docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://docs.oracle.com/en/java/javase/25/install/overview-jdk-installation.html | 1 | /docs/basics/inference-and-deployment/deploy-llms-phone
https://docs.python.org/3/tutorial/venv.html | 1 | /docs/basics/multi-gpu-training-with-unsloth/ddp
https://docs.pytorch.org/docs/main/generated/torch.nn.functional.grouped_mm.html | 1 | /docs/new/faster-moe
https://docs.pytorch.org/docs/stable/elastic/run.html | 1 | /docs/basics/multi-gpu-training-with-unsloth/ddp
https://docs.sglang.ai/advanced_features/server_arguments.html | 1 | /docs/basics/inference-and-deployment/sglang-guide
https://docs.unsloth.ai/ai-engineers-2025 | 1 | /docs/get-started/reinforcement-learning-rl-guide
https://docs.unsloth.ai/basics/chat-templates | 1 | /docs/get-started/fine-tuning-llms-guide/datasets-guide
https://docs.unsloth.ai/basics/deepseek-r1-0528-how-to-run-locally | 1 | /docs/models/tutorials/deepseek-r1-how-to-run-locally
https://docs.unsloth.ai/basics/gpt-oss | 1 | /docs
https://docs.unsloth.ai/basics/reinforcement-learning-guide/tutoria…ur-own-reasoning-model-with-grpo | 1 | /docs/get-started/reinforcement-learning-rl-guide
https://docs.unsloth.ai/basics/troubleshooting-and-faqs | 1 | /docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama | 1 | /docs/get-started/fine-tuning-llms-guide
https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs | 1 | /docs/get-started/unsloth-model-catalog
https://docs.unsloth.ai/get-started/install-and-update/windows-installation | 1 | /docs/get-started/fine-tuning-for-beginners/unsloth-requirements
https://docs.unsloth.ai/get-started/unsloth-notebooks | 1 | /docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
https://docs.unsloth.ai/models/nemotron-3 | 1 | /docs/models/nemotron-3
https://docs.unsloth.ai/new/gpt-oss-how-to-run-and-fine-tune | 1 | /docs
https://docs.vllm.ai/en/latest/features/quantization/torchao.html | 1 | /docs/models/tutorials/magistral-how-to-run-and-fine-tune
https://docs.vllm.ai/en/latest/features/sleep_mode.html | 1 | /docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
https://docs.vllm.ai/en/latest/getting_started/installation/gpu/ | 1 | /docs/get-started/install/intel
https://docs.vllm.ai/en/stable/getting_started/installation | 1 | /docs/basics/inference-and-deployment/vllm-guide
https://en.wikipedia.org/wiki/Bfloat16_floating-point_format | 1 | /docs/models/tutorials/gemma-3-how-to-run-and-fine-tune
https://en.wikipedia.org/wiki/Proximal_policy_optimization | 1 | /docs/get-started/reinforcement-learning-rl-guide
https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback | 1 | /docs/get-started/reinforcement-learning-rl-guide
https://en.wikipedia.org/wiki/Reward_hacking | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning
https://engineering.fb.com/2025/07/28/android/executorch-on-device-ml-meta-family-of-apps/ | 1 | /docs/basics/inference-and-deployment/deploy-llms-phone
https://gitbook.com/docs/published-documentation/custom-domain/configure-dns | 1 | /docs
https://github.com/Comfy-Org/ComfyUI | 1 | /docs/models/tutorials/qwen-image-2512
https://github.com/Dao-AILab/flash-attention/issues/1797 | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning
https://github.com/Mintplex-Labs/anything-llm | 1 | /docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth
https://github.com/NVIDIA-NeMo/DataDesigner | 1 | /docs/new/studio
https://github.com/NVIDIA-NeMo/Gym/pull/492 | 1 | /docs/models/nemotron-3
https://github.com/QwenLM/Qwen2.5-VL/issues/759 | 1 | /docs/get-started/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl
https://github.com/QwenLM/Qwen3-VL/tree/main?tab=readme-ov-file | 1 | /docs/models/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tune
https://github.com/Vali-98/ChatterUI | 1 | /docs/models/tutorials/gemma-3-how-to-run-and-fine-tune
https://github.com/canopyai/Orpheus-TTS | 1 | /docs/basics/text-to-speech-tts-fine-tuning
https://github.com/comfyanonymous/ComfyUI | 1 | /docs/blog/comfyui
https://github.com/deepspeedai/DeepSpeed/pull/7664 | 1 | /docs/blog/500k-context-length-fine-tuning
https://github.com/docker/model-runner | 1 | /docs/models/tutorials/how-to-run-llms-with-docker
https://github.com/gabriellarson | 1 | /docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://github.com/ggml-org/llama.cpp/blob/12ee1763a6f6130ce820a366d220bbadff54b818/common/chat.cpp | 1 | /docs/basics/inference-and-deployment/llama-server-and-openai-endpoint
https://github.com/ggml-org/llama.cpp/blob/55c509daf51d25bfaee9c8b8…bff103d4473b/src/llama-vocab.cpp | 1 | /docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://github.com/ggml-org/llama.cpp/issues/14642 | 1 | /docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://github.com/ggml-org/llama.cpp/issues/18323 | 1 | /docs/basics/inference-and-deployment/llama-server-and-openai-endpoint
https://github.com/ggml-org/llama.cpp/pull/12889 | 1 | /docs
https://github.com/ggml-org/llama.cpp/pull/14363 | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune
https://github.com/ggml-org/llama.cpp/pull/14450 | 1 | /docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
https://github.com/ggml-org/llama.cpp/pull/14654 | 1 | /docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://github.com/ggml-org/llama.cpp/pull/15539 | 1 | /docs/models/tutorials/grok-2
https://github.com/ggml-org/llama.cpp/pull/17945 | 1 | /docs/models/tutorials/devstral-2
https://github.com/ggml-org/llama.cpp/tree/master/examples/parallel | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune
https://github.com/googlecolab/colab-vscode/issues/200 | 1 | /docs/get-started/install/vs-code
https://github.com/googlecolab/colabtools/issues/3409 | 1 | /docs/basics/inference-and-deployment/unsloth-inference
https://github.com/huggingface/sentence-transformers | 1 | /docs/new/embedding-finetuning
https://github.com/huggingface/transformers/blob/v4.57.6/src/transf…/qwen3_moe/modeling_qwen3_moe.py | 1 | /docs/new/faster-moe
https://github.com/huggingface/transformers/blob/v5.0.0/src/transfo…/qwen3_moe/modeling_qwen3_moe.py | 1 | /docs/new/faster-moe
https://github.com/huggingface/transformers/pull/40197 | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training
https://github.com/meta-pytorch/attention-gym | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training
https://github.com/meta-pytorch/attention-gym/issues/15 | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning
https://github.com/mxyng | 1 | /docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
https://github.com/ollama/ollama | 1 | /docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
https://github.com/ollama/ollama/blob/main/docs/faq.md | 1 | /docs/models/tutorials/devstral-how-to-run-and-fine-tune
https://github.com/openai/codex | 1 | /docs/basics/codex
https://github.com/openai/harmony | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune
https://github.com/pytorch/ao | 1 | /docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
https://github.com/pytorch/executorch | 1 | /docs/blog/quantization-aware-training-qat
https://github.com/pytorch/executorch/ | 1 | /docs/basics/inference-and-deployment/deploy-llms-phone
https://github.com/pytorch/executorch/tree/main/examples/models/gemma3 | 1 | /docs/models/tutorials/functiongemma
https://github.com/pytorch/pytorch/blob/main/RELEASE.md | 1 | /docs/get-started/install/windows-installation
https://github.com/sgl-project/sglang | 1 | /docs/basics/inference-and-deployment/sglang-guide
https://github.com/tatsu-lab/stanford_alpaca | 1 | /docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
https://github.com/triton-lang/triton/blob/main/python/triton_kerne…atmul_ogs_details/_matmul_ogs.py | 1 | /docs/models/gpt-oss-how-to-run-and-fine-tune
https://github.com/unslothai/docs/blob/main/basics/unsloth-dynamic-2.0-ggufs | 1 | /docs/models/tutorials
https://github.com/unslothai/notebooks | 1 | /docs/get-started/unsloth-notebooks
https://github.com/unslothai/notebooks/ | 1 | /docs/get-started/unsloth-notebooks
https://github.com/unslothai/notebooks/tree/main/python_scripts | 1 | /docs/basics/multi-gpu-training-with-unsloth
https://github.com/unslothai/unsloth1/docs
https://github.com/unslothai/unsloth-zoo/blob/e705f7cb50aa3470a0b6e…3/unsloth_zoo/rl_replacements.py1/docs/get-started/reinforcement-learning-rl-guide/advanced-rl-documentation
https://github.com/unslothai/unsloth/issues1/docs/new/embedding-finetuning
https://github.com/unslothai/unsloth/issues/24351/docs/basics/multi-gpu-training-with-unsloth
https://github.com/unslothai/unsloth/issues/30351/docs/get-started/fine-tuning-llms-guide/lora-hyperparameters-guide
https://github.com/unslothai/unsloth/issues/44441/docs/new/studio
https://github.com/unslothai/unsloth/pull/2381/docs/blog/3x-faster-training-packing
https://github.com/unslothai/unsloth/pull/27521/docs/get-started/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl
https://github.com/unslothai/unsloth/pull/34401/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
https://github.com/unslothai/unsloth/pull/37191/docs/new/embedding-finetuning
https://github.com/unslothai/unsloth/releases1/docs/get-started/install/updating
https://github.com/unslothai/unsloth/releases/tag/February-20261/docs/new/faster-moe
https://github.com/unslothai/unsloth/wiki1/docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
https://github.com/unslothai/unsloth?tab=AGPL-3.0-2-ov-file1/docs/new/studio
https://github.com/unslothai/unsloth?tab=Apache-2.0-1-ov-file1/docs/new/studio
https://github.com/vllm-project/llm-compressor1/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
https://github.com/vllm-project/vllm1/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
https://github.com/vllm-project/vllm/1/docs/get-started/reinforcement-learning-rl-guide
https://github.com/vllm-project/vllm/pull/211461/docs/get-started/reinforcement-learning-rl-guide/memory-efficient-rl
https://hub.docker.com/r/ai1/docs/models/tutorials/how-to-run-llms-with-docker
https://hub.docker.com/r/ai/gpt-oss1/docs/models/tutorials/how-to-run-llms-with-docker
https://hub.docker.com/r/unsloth/unsloth1/docs
https://huggingface.co/Orenguteng1/docs/models/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-oss
https://huggingface.co/Qwen/QwQ-32B1/docs/models/tutorials/qwq-32b-how-to-run-effectively
https://huggingface.co/Qwen/Qwen3-32B/blob/main/config.json1/docs/new/faster-moe
https://huggingface.co/Qwen/Qwen3-8B1/docs/basics/multi-gpu-training-with-unsloth/ddp
https://huggingface.co/WBB25001/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
https://huggingface.co/blog/unsloth-trl1/docs/basics/unsloth-benchmarks
https://huggingface.co/collections/unsloth/embedding-models1/docs/new/embedding-finetuning
https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b1/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune
https://huggingface.co/collections/unsloth/gemma-3n-685d3874830e49e1c93f93391/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
https://huggingface.co/collections/unsloth/ministral-31/docs/models/tutorials/ministral-3
https://huggingface.co/collections/unsloth/text-to-speech-tts-models-68007ab12522e96be1e021551/docs/basics/text-to-speech-tts-fine-tuning
https://huggingface.co/collections/unsloth/unsloth-diffusion-ggufs1/docs/models/tutorials/qwen-image-2512
https://huggingface.co/datasets/AI4Math/MathVista1/docs/get-started/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl
https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking1/docs/models/gpt-oss-how-to-run-and-fine-tune/tutorial-how-to-fine-tune-gpt-oss
https://huggingface.co/datasets/Jinsaryko/Elise1/docs/basics/text-to-speech-tts-fine-tuning
https://huggingface.co/datasets/MrDragonFox/Elise1/docs/basics/text-to-speech-tts-fine-tuning
https://huggingface.co/datasets/openai/gsm8k1/docs/get-started/reinforcement-learning-rl-guide/tutorial-train-your-own-reasoning-model-with-grpo
https://huggingface.co/datasets/vicgalle/alpaca-gpt41/docs/get-started/fine-tuning-llms-guide/tutorial-how-to-finetune-llama-3-and-use-in-ollama
https://huggingface.co/datasets/vicgalle/alpaca-gpt4.1/docs/get-started/fine-tuning-llms-guide/datasets-guide
https://huggingface.co/datasets/yahma/alpaca-cleaned1/docs/basics/multi-gpu-training-with-unsloth/ddp
https://huggingface.co/deepseek-ai/DeepSeek-R1-05281/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/deepseek-ai/DeepSeek-V3-03241/docs/models/tutorials/deepseek-v3-0324-how-to-run-locally
https://huggingface.co/docs/trl/main/en/dpo_trainer1/docs/get-started/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto
https://huggingface.co/docs/trl/main/en/sft_trainer1/docs/get-started/reinforcement-learning-rl-guide/preference-dpo-orpo-and-kto
https://huggingface.co/electroglyph1/docs/new/embedding-finetuning
https://huggingface.co/metascroy/Qwen3-4B-int8-int4-unsloth1/docs/blog/quantization-aware-training-qat
https://huggingface.co/models?library=sentence-transformers1/docs/new/embedding-finetuning
https://huggingface.co/moonshotai/Kimi-K2-Instruct1/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://huggingface.co/moonshotai/Kimi-K2-Instruct/discussions/281/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://huggingface.co/moonshotai/Kimi-K2-Thinking1/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://huggingface.co/moonshotai/Kimi-K2-Thinking/discussions/121/docs/models/tutorials/kimi-k2-thinking-how-to-run-locally
https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF1/docs/models/tutorials/devstral-how-to-run-and-fine-tune
https://huggingface.co/openai/gpt-oss-20b/discussions/94/files1/docs/models/gpt-oss-how-to-run-and-fine-tune
https://huggingface.co/pytorch/gemma-3-27b-it-FP8/blob/main/README.md1/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning
https://huggingface.co/settings/tokens1/docs/get-started/fine-tuning-llms-guide
https://huggingface.co/strangervisionhf1/docs/models/tutorials/deepseek-ocr-how-to-run-and-fine-tune
https://huggingface.co/unsloth1/docs
https://huggingface.co/unsloth/DeepSeek-OCR1/docs/models/tutorials/deepseek-ocr-how-to-run-and-fine-tune
https://huggingface.co/unsloth/DeepSeek-OCR-21/docs/models/tutorials/deepseek-ocr-2
https://huggingface.co/unsloth/DeepSeek-R11/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-05281/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-0528-BF161/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF1/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/IQ4_NL1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/Q4_11/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-IQ1_M1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-IQ1_S1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-IQ2_XXS1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-IQ3_XXS1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-Q2_K_XL1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-Q3_K_XL1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-Q4_K_XL1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF/tree/main/UD-Q5_K_XL1/docs/models/tutorials/deepseek-r1-0528-how-to-run-locally
https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B1/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF1/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit1/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B1/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF1/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit1/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B1/docs/get-started/unsloth-model-catalog
https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF1/docs/get-started/unsloth-model-catalog
Output truncated: the hard limit of 200 rows was reached (a protection against very large output or exhausted memory; adjustable with --rows-limit).

Content types

Content type | URLs 🔽 | Total size | Total time | Avg time | Status 20x | Status 30x
HTML | 575 | 431 MB | 165 s | 288 ms | 575 | 0
Redirect | 28 | 3 kB | 7.6 s | 270 ms | 0 | 28

Content types (MIME types)

Content type | URLs 🔽 | Total size | Total time | Avg time | Status 20x | Status 30x
text/html; charset=utf-8 | 575 | 431 MB | 165 s | 288 ms | 575 | 0
text / html | 28 | 3 kB | 7.6 s | 270 ms | 0 | 28

Source domains

Domain | Totals | HTML | Redirect
unsloth.ai | 603 / 431 MB / 173 s | 575 / 431 MB / 165 s | 28 / 3 kB / 7.6 s
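The summary at the top of this report flags that 575 pages were served without Brotli compression, and the Vary header below confirms the pages negotiate on accept-encoding. As a minimal sketch (a hypothetical helper, not part of the crawler that produced this report), the check reduces to inspecting the Content-Encoding response header:

```python
# Minimal sketch: classify a response's compression from its headers.
# Hypothetical helper, not the crawler's actual implementation.

def compression_of(headers: dict) -> str:
    """Return the negotiated Content-Encoding, or 'none' when absent."""
    # Header names are case-insensitive, so normalise before lookup.
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get("content-encoding", "none")

def supports_brotli(headers: dict) -> bool:
    """True when the server answered with Brotli ('br') encoding."""
    return compression_of(headers) == "br"

# Headers like those observed on the crawled pages (no Content-Encoding),
# matching the "pages do not support Brotli" warning:
example = {"Content-Type": "text/html; charset=utf-8",
           "Vary": "RSC, accept-encoding"}
print(compression_of(example))  # → none
```

To exercise this against a live server one would send Accept-Encoding: br, gzip and inspect the response the same way.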

HTTP headers

Found 24 row(s).
Header 🔼 | Occurs | Unique | Values preview | Min value | Max value
Age | 602 | - | [ignored generic values] | 0 sec(s) | 15.3 hour(s)
Cache-Control | 602 | 1 | public, max-age=0, must-revalidate
Cf-Cache-Status | 602 | 1 | DYNAMIC
Cf-Ray | 603 | - | [ignored generic values]
Content-Length | 1 | - | [ignored generic values] | 0 B | 0 B
Content-Security-Policy | 602 | 1 | default-src 'self' *; script-src 'self' 'unsafe-inline' 'unsafe-eval' *; style-s…gitbook.com *; frame-ancestors https: ;
Content-Type | 603 | 2 | text/html; charset=utf-8 (575) / text/html (28)
Date | 603 | - | [ignored generic values] | 2026-03-24 | 2026-03-24
Location | 28 | 20+ | [see values below]
Nel | 603 | 2 | [see values below]
Referrer-Policy | 602 | 1 | no-referrer-when-downgrade
Report-To | 603 | 20+ | [see values below]
Server | 603 | 1 | cloudflare
Server-Timing | 603 | 20+ | [see values below]
Strict-Transport-Security | 602 | 1 | max-age=31536000
Vary | 602 | 2 | [see values below]
X-Content-Type-Options | 602 | 1 | nosniff
X-Gitbook-Route-Site | 602 | 5 | unsloth.ai/docs/ (122) / unsloth.ai/docs/fr/ (120) / unsloth.ai/docs/jp/ (120) /…s/zh/ (120) / unsloth.ai/docs/de/ (120)
X-Gitbook-Route-Type | 602 | 1 | static
X-Matched-Path | 602 | 1 | /sites/static/[mode]/[siteURL]/[siteData]/[pagePath]
X-Nextjs-Prerender | 602 | 1 | 1
X-Nextjs-Stale-Time | 602 | 2 | 300 (600) / 60 (2)
X-Vercel-Cache | 602 | 2 | HIT (601) / REVALIDATED (1)
X-Vercel-Id | 602 | 20+ | [see values below]

HTTP header values

Found 105 row(s).
Header | Occurs | Value
Cache-Control | 602 | public, max-age=0, must-revalidate
Cf-Cache-Status | 602 | DYNAMIC
Content-Security-Policy | 602 | default-src 'self' *; script-src 'self' 'unsafe-inline' 'unsafe-eval' *; style-src 'self' 'unsafe-inline' blob: *; img-src * 'self' blob: data:; connect-src *; font-src *; frame-src *; object-src 'none'; base-uri 'self' https://static-2v.gitbook.com; form-action 'self' https://static-2v.gitbook.com *; frame-ancestors https: ;
Content-Type | 575 | text/html; charset=utf-8
Content-Type | 28 | text / html
Location | 1 | /docs
Location | 1 | /docs/jp/xin-ji-neng/studio
Location | 1 | /docs/blog/3x-faster-training-packing
Location | 1 | /docs/fr
Location | 1 | /docs/models/tutorials/qwen3-coder-how-to-run-locally
Location | 1 | /docs/de
Location | 1 | /docs/zh/xin-zeng/studio
Location | 1 | /docs/models/nemotron-3
Location | 1 | /docs
Location | 1 | /docs/fr/nouveau/studio
Location | 1 | /docs/fr/notions-de-base/inference-and-deployment
Location | 1 | /docs/new/studio
Location | 1 | /docs/zh/ji-chu-zhi-shi/inference-and-deployment
Location | 1 | /docs/models/qwen3.5
Location | 1 | /docs/jp
Location | 1 | /docs/de/neu/studio
Location | 1 | /docs/jp/ji-ben/inference-and-deployment
Location | 1 | /docs/de/grundlagen/inference-and-deployment
Location | 1 | /docs/basics/inference-and-deployment
Location | 1 | /docs/zh
Nel | 602 | {"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}
Nel | 1 | {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Referrer-Policy | 602 | no-referrer-when-downgrade
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=RL4bEmK9AjfnJkFu%2FHT5tWy7yq3lT6F58%2F%2F9bsDZ2di%2B1cmSn12lT9tfxwUJgqRql8YpcLkWfDVIJDxM8Odz2dJ81iQx3gd6y49kJyN9hAYd6w8frg%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=3J4Cp0YG2LKZ96f7Qm1gerkMSymFW2NHxwE%2FqmPsVDCBy3vnGLtTyfbfgg8l9fhnoc38ch%2BDguZ%2F5jKkWgWUAdxZbveMHQi0h7m0RHeeID%2Bf4UZ1Dw%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=AmqxqBMProsEatGR%2F96HBH4DEzA0J%2Bd8hpI32DmBNgqPe1TnsCTyys767jNJEfauwyMrYljI%2FK5MC%2F0fd3oFjR2ARNtJ4KcxMC%2FkJyRgWzcQ5SUmzQ%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=Ppcpf8HZ4HneyZubPdo6hBMRHoPMAwYhqHQVQxbOmRIW4x%2FJb0PPdm7RyQRWbVM1H%2BJCXPUdGrzPnZeW819YVDsL0EUfMEIV5AlIO%2Fy4MKvZYwcUXg%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=6ifomwK0GSuBNc00zZTXKMskXyZM3ZF5g5pc%2BFGehYiab5BhQw8XaTKQ9Eock4HVn4PXcxZltai4WjxPyLQdDcXTJ9QvCZrmIg081O8kMaCLTK4AlA%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=6LqRq%2FVDc4dthU2TD7G0Ld%2Ffkgb0jpUKNl83rSk6gGOHiEmrk01%2BqOkoD74bNtEGg0UQ%2BhJVGrV1tFFgbQMaiYhUM0WtMY9L0Qerj6PuCCoWzFvfhA%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=xkvKZp0q08icftDZfAHguSMh4smXrBamuPu%2FgSCy2KoUSGsYccwsc%2FL%2Bqos4oheHSG5k1pL0cf1NosWIzJSv%2FuXBzHskUM3J7BFOiVsdcnSDTHZuWQ%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=eU4Kk465J0%2FfCi%2FT00CIaJL18iLLzUZQu%2FcU1ByOkJHuABeJgLfej804yV2g1W4e5XudRDc9vX6ObeHHF8rFzWiy2ypAPr6RJSiV3gvpr6mBs9VywQ%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=cFfCvSir7%2BXW%2BoOMEH5FbWl68oYwuN3emSi53%2FCY%2BqLrEO7ApG8GOS9L5OhYtqR3fEy1kqtD6sInOYkcfRGQMjtjwxR5J%2FmBDbqF%2FH%2FdCxvBrvrx8w%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=G6xmF78%2FErJ2%2BmRYVZnWHVSQJb1Fwoc9mKHxq0WKb4YWRskFCkHxEwNbyOLkqo28Kp9bHAL5ESViwUsfAhau0jXSG1v14Hf3i3e80M9USYVpLQX4gA%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=5LqBe1YG2fRSheiHWxa%2BbUUlYGQM3hJ3IthAlg7u1euXIQd%2FaAyarnl5sotlqrgEj4OMU9dBeRP%2F95teLGvQCLftBEqW2cAdvkwU92bZHv9Fnn9yKQ%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=ZgyheQQ6mMnwQq8F6qEE7AC0npc0NEF%2BcH825bNtzuN2ZwnG7QUtbrSyPtbOXYH3UC2rzpWR4X2SBz0ygMmkkcMd0vPkBU2bHdqF%2BPDlpgRS4sAFsw%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=sL4rl0RoP2c6X8i0xHvOQodgQXPwBbP5oNy%2Bbwmr3%2ByybU4WafNlN0fYTP1q17EVVVnJKMSNxf8HmgUFsDOeII4kayfCl0dKHTq7w5FOYBbvd54VbQ%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=O7kOy66CHkmwofg77uqtR26w%2BQriXsIV26JGH5fELhPqpAfIJc701A6P5VISspB6QCArihd0JiVSDeuo8lM5Sw3WVKiq6WTMJ6B5FH2WwjQM8GIbzQ%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=9DJc9u1NBIbJL6oPFUnPG95jWb95wiNUQf%2F1M9DqvaEr%2BMaJ10UlCeZZQSgkLMTamfYp%2BfQf%2BOvwY42DLwihNM%2FwLxhhByCm%2FMH8WelUzQPSV1PXUg%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=MHv6xU%2B2T8XfnQyCtty3mAxshFIM6NHYqGkEhYI9f%2Bwq002xl%2BLO4xIelRsFOB%2Fszd7Iql7j6O9RFZVd0T9Sp9c53LlhmcrXfqinWWz0lqdLMLEe%2BA%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=g916HPeqtqvcOc1Um37y6b35cKdUQu8Pd%2BMMwGHFGrGCu7y9GqMSiVMWf%2FSWpMqwe2tFpU6FpsslvJTg0rAz6MnXb%2B8oX8XWLlOq3uj3%2FhtXyX1%2Fbg%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=zo9JKcgY6JFaGdTT3Ah3AIWB2JitQPpzobvfbXN%2FxFD%2Fhr7oZTnqvS0UroLzlUKbxDLa%2Bq1caaHDBC3QTICRZMRGDr2S6uM6guQGtYCuY7WJoFOUjQ%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=bNchekXVDFEET8W5VSIrMbCZ0NvUWUKPl%2FPw3jhKAK%2F1ItZ%2BnDEE%2F%2BGN6Wbvf4ufIFw2h8AAgfcRpQzW2%2BMr5XbyTBM%2FlhFgOJgcqqSL2lL0QXXA9w%3D%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=epjf2XGbW4qppimAWTCfNelpSdKsz5xayzKan2DXQJnKX3LpE2ZagfKFWNzBdmkRxx%2BEI5eTHgG0eCe9IhRHlG%2F5Wi%2BxEwp0e4jMvlg7hIi9xA0cnA%3D%3D"}]}
Server | 603 | cloudflare
Server-Timing | 2 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=55
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=56
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=100
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=4,cfOrigin;dur=0,cfWorker;dur=55
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=54
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=70
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=59
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=66
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=64
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=119
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=78
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=65
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=87
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=53
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=2,cfOrigin;dur=0,cfWorker;dur=252
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=6,cfOrigin;dur=0,cfWorker;dur=81
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=5,cfOrigin;dur=0,cfWorker;dur=156
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=52
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=50
Server-Timing | 1 | cfCacheStatus;desc="DYNAMIC", cfEdge;dur=3,cfOrigin;dur=0,cfWorker;dur=222
Strict-Transport-Security | 602 | max-age=31536000
Vary | 575 | RSC, Next-Router-State-Tree, Next-Router-Prefetch, Next-Router-Segment-Prefetch, accept-encoding
Vary | 27 | RSC, Next-Router-State-Tree, Next-Router-Prefetch, Next-Router-Segment-Prefetch
X-Content-Type-Options | 602 | nosniff
X-Gitbook-Route-Site | 122 | unsloth.ai / docs /
X-Gitbook-Route-Site | 120 | unsloth.ai / docs / fr /
X-Gitbook-Route-Site | 120 | unsloth.ai / docs / jp /
X-Gitbook-Route-Site | 120 | unsloth.ai / docs / zh /
X-Gitbook-Route-Site | 120 | unsloth.ai / docs / de /
X-Gitbook-Route-Type | 602 | static
X-Matched-Path | 602 | /sites/static/[mode]/[siteURL]/[siteData]/[pagePath]
X-Nextjs-Prerender | 602 | 1
X-Nextjs-Stale-Time | 600 | 300
X-Nextjs-Stale-Time | 2 | 60
X-Vercel-Cache | 601 | HIT
X-Vercel-Cache | 1 | REVALIDATED
X-Vercel-Id | 1 | fra1::iad1::kc57t-1774385279010-c6411b408231
X-Vercel-Id | 1 | fra1::iad1::htc9d-1774385279928-e3c6c9ba4dd5
X-Vercel-Id | 1 | fra1::iad1::gzrfb-1774385278641-afb4802a4acc
X-Vercel-Id | 1 | fra1::iad1::6x6k6-1774385278550-0bbe5b39e131
X-Vercel-Id | 1 | fra1::iad1::svg45-1774385278744-9cb035b6b5dd
X-Vercel-Id | 1 | fra1::iad1::znlgn-1774385279162-69dadbf5174d
X-Vercel-Id | 1 | fra1::iad1::znlgn-1774385279428-760b264e2fc5
X-Vercel-Id | 1 | fra1::iad1::wzjdr-1774385279610-b7449e5fb5c4
X-Vercel-Id | 1 | fra1::iad1::lqx4c-1774385280011-267e4c4debf0
X-Vercel-Id | 1 | fra1::iad1::jflp8-1774385280312-c21104be9a74
X-Vercel-Id | 1 | fra1::iad1::5894q-1774385279812-42451e11d27c
X-Vercel-Id | 1 | fra1::iad1::kg9kw-1774385280212-1cfe25d38a18
X-Vercel-Id | 1 | fra1::iad1::xj7st-1774385280262-e024e6fb1cce
X-Vercel-Id | 1 | fra1::iad1::9z4gf-1774385280592-ea6cdc09f784
X-Vercel-Id | 1 | fra1::iad1::wzjdr-1774385279553-fcceffb5cedb
X-Vercel-Id | 1 | fra1::iad1::cvm75-1774385279741-238c192b1ed5
X-Vercel-Id | 1 | fra1::iad1::gzrfb-1774385279233-d79ed3c34311
X-Vercel-Id | 1 | fra1::iad1::h5xgq-1774385279010-3c7eff65b1e7
X-Vercel-Id | 1 | fra1::iad1::qmqbk-1774385278353-4677594151e4
X-Vercel-Id | 1 | fra1::iad1::6w62x-1774385279337-50a8b94d5556
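The header inventory above shows which security-relevant headers the site sends (Strict-Transport-Security, X-Content-Type-Options, Content-Security-Policy, Referrer-Policy). As a minimal sketch of the kind of audit behind the "Security" warnings in the summary (the checked header set here is an assumption, not the crawler's actual rule list):

```python
# Minimal sketch: flag absent security headers in a response.
# The EXPECTED list is an assumption, not the crawler's exact rules.

EXPECTED = (
    "strict-transport-security",
    "x-content-type-options",
    "content-security-policy",
    "referrer-policy",
)

def missing_security_headers(headers: dict) -> list:
    """Return the expected security headers absent from a response."""
    present = {k.lower() for k in headers}  # header names are case-insensitive
    return [h for h in EXPECTED if h not in present]

# Headers observed above on unsloth.ai responses:
observed = {
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "no-referrer-when-downgrade",
    "Content-Security-Policy": "default-src 'self' *",
}
print(missing_security_headers(observed))  # → []
```

Note that mere presence is not sufficient: a CSP built from wildcards, as served here, would still typically draw a warning.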

HTTP Caching by content type (only from crawlable domains)

Content type | Cache type | URLs 🔽 | AVG lifetime | MIN lifetime | MAX lifetime
HTML | Cache-Control | 575 | 0 s | 0 s | 0 s
Redirect | Cache-Control | 27 | 0 s | 0 s | 0 s
Redirect | No cache headers | 1 | - | - | -

HTTP Caching by domain

Domain | Cache type | URLs 🔽 | AVG lifetime | MIN lifetime | MAX lifetime
unsloth.ai | Cache-Control | 602 | 0 s | 0 s | 0 s
unsloth.ai | No cache headers | 1 | - | - | -

HTTP Caching by domain and content type

Domain | Content type | Cache type | URLs 🔽 | AVG lifetime | MIN lifetime | MAX lifetime
unsloth.ai | HTML | Cache-Control | 575 | 0 s | 0 s | 0 s
unsloth.ai | Redirect | Cache-Control | 27 | 0 s | 0 s | 0 s
unsloth.ai | Redirect | No cache headers | 1 | - | - | -
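Every crawled URL shows a 0 s lifetime because the site serves Cache-Control: public, max-age=0, must-revalidate on all pages. A minimal sketch of deriving that lifetime (assuming max-age, or an explicit no-store, decides it; real caches also consider s-maxage, Expires, and heuristics):

```python
# Minimal sketch: derive a freshness lifetime from Cache-Control.
# Simplified assumption: max-age (or no-store) alone decides the lifetime.

def cache_lifetime(cache_control: str):
    """Return the freshness lifetime in seconds, or None if uncacheable."""
    directives = [d.strip().lower() for d in cache_control.split(",")]
    if "no-store" in directives:
        return None  # the response must not be stored at all
    for d in directives:
        if d.startswith("max-age="):
            return int(d.split("=", 1)[1])
    return 0  # no explicit lifetime given

# The value served on every unsloth.ai page:
print(cache_lifetime("public, max-age=0, must-revalidate"))  # → 0
```

With max-age=0 plus must-revalidate, every request has to be revalidated with the origin, which is consistent with the Cf-Cache-Status: DYNAMIC seen above.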

DNS info

DNS resolving tree
unsloth.ai
  IPv4: 172.67.69.105
  IPv4: 104.26.9.42
  IPv4: 104.26.8.42
  IPv6: 2606:4700:20::681a:82a
  IPv6: 2606:4700:20::ac43:4569
  IPv6: 2606:4700:20::681a:92a
DNS server: 127.0.0.53
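The resolving tree above can be reproduced with the standard library; a minimal sketch (getaddrinfo goes through the system resolver, so it will not target a specific DNS server like 127.0.0.53):

```python
import socket

# Minimal sketch of the A/AAAA lookups behind the DNS resolving tree,
# using the system resolver via the stdlib.

def resolve(domain: str, family) -> list:
    """Return the unique addresses for `domain` in the given family."""
    infos = socket.getaddrinfo(domain, None, family, socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the address is the first element of sockaddr.
    return sorted({sockaddr[0] for *_rest, sockaddr in infos})

# resolve("unsloth.ai", socket.AF_INET)  would yield the IPv4 list above,
# resolve("unsloth.ai", socket.AF_INET6) the IPv6 list.
print(resolve("localhost", socket.AF_INET))
```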

SSL/TLS info

Info | Text
Issuer | C = US, O = Google Trust Services, CN = WE1
Subject | CN = unsloth.ai
Valid from | Mar  3 19:40:17 2026 GMT (VALID already 21 day(s))
Valid to | Jun  1 20:40:09 2026 GMT (VALID still for 69 day(s))
Supported protocols | TLSv1.2, TLSv1.3
RAW certificate output
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            48:fd:d2:4c:d7:3a:02:8a:0e:76:41:c5:17:e8:6c:b6
        Signature Algorithm: ecdsa-with-SHA256
        Issuer: C = US, O = Google Trust Services, CN = WE1
        Validity
            Not Before: Mar  3 19:40:17 2026 GMT
            Not After : Jun  1 20:40:09 2026 GMT
        Subject: CN = unsloth.ai
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:62:92:4b:88:4f:0d:37:78:b1:ae:51:8b:f5:79:
                    99:0d:ed:0f:c0:5a:61:9b:60:98:11:09:4c:f7:aa:
                    37:2a:bc:b1:b5:a0:66:53:cc:86:0e:69:4a:3a:4d:
                    d3:94:19:da:19:d7:69:3e:4d:ab:2c:47:3b:8f:12:
                    32:1c:ee:49:2f
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier: 
                C9:FD:10:D9:CB:6B:8D:DD:73:24:1C:AD:95:0D:33:2E:81:1E:E0:09
            X509v3 Authority Key Identifier: 
                90:77:92:35:67:C4:FF:A8:CC:A9:E6:7B:D9:80:79:7B:CC:93:F9:38
            Authority Information Access: 
                OCSP - URI:http://o.pki.goog/s/we1/SP0
                CA Issuers - URI:http://i.pki.goog/we1.crt
            X509v3 Subject Alternative Name: 
                DNS:unsloth.ai
            X509v3 Certificate Policies: 
                Policy: 2.23.140.1.2.1
            X509v3 CRL Distribution Points: 
                Full Name:
                  URI:http://c.pki.goog/we1/T58q3x0jyXI.crl
            CT Precertificate SCTs: 
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 64:11:C4:6C:A4:12:EC:A7:89:1C:A2:02:2E:00:BC:AB:
                                4F:28:07:D4:1E:35:27:AB:EA:FE:D5:03:C9:7D:CD:F0
                    Timestamp : Mar  3 20:40:18.125 2026 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:20:30:A0:F5:76:95:DF:D1:DA:F0:8D:B4:D8:
                                79:D4:21:1F:2F:C0:81:98:A6:C1:C7:9B:89:AA:10:6D:
                                A2:43:69:22:02:21:00:84:6C:75:0D:0C:D3:7D:4E:2E:
                                39:AE:FC:B1:1A:22:25:55:F3:20:8B:44:A6:54:3D:84:
                                B4:D4:53:1D:CE:7A:2A
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 0E:57:94:BC:F3:AE:A9:3E:33:1B:2C:99:07:B3:F7:90:
                                DF:9B:C2:3D:71:32:25:DD:21:A9:25:AC:61:C5:4E:21
                    Timestamp : Mar  3 20:40:18.139 2026 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:21:00:B8:BA:6C:E4:39:AE:1E:8C:35:30:4B:
                                33:0A:4A:20:DC:32:3B:5D:71:2A:8A:1B:99:A3:2B:59:
                                69:F2:51:36:79:02:20:12:D7:1F:4D:8A:A5:DF:44:58:
                                64:5D:1D:85:0E:3D:18:0C:11:DA:C2:F7:57:87:40:77:
                                09:2E:6F:CF:4E:47:FC
    Signature Algorithm: ecdsa-with-SHA256
    Signature Value:
        30:45:02:20:03:a1:d2:e9:58:bc:6a:2d:4f:d0:67:5d:28:ef:
        19:0c:77:bd:ca:1b:22:2c:81:61:26:65:3c:d0:a0:5a:13:c0:
        02:21:00:b9:50:5c:8f:89:5f:bc:56:49:07:d1:93:fa:d4:4a:
        5a:a3:89:55:28:78:8b:85:dd:5a:ec:49:73:f0:94:89:60
RAW protocols output
=== ssl2 ===
s_client: Unknown option: -ssl2
s_client: Use -help for summary.

=== ssl3 ===
s_client: Unknown option: -ssl3
s_client: Use -help for summary.

=== tls1 ===
40F7DA5C237B0000:error:0A0000BF:SSL routines:tls_setup_handshake:no protocols available:../ssl/statem/statem_lib.c:104:
CONNECTED(00000003)
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 7 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

=== tls1_1 ===
40C7F73ABE720000:error:0A0000BF:SSL routines:tls_setup_handshake:no protocols available:../ssl/statem/statem_lib.c:104:
CONNECTED(00000003)
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 7 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

=== tls1_2 ===
depth=2 C = US, O = Google Trust Services LLC, CN = GTS Root R4
verify return:1
depth=1 C = US, O = Google Trust Services, CN = WE1
verify return:1
depth=0 CN = unsloth.ai
verify return:1
CONNECTED(00000003)
---
Certificate chain
 0 s:CN = unsloth.ai
   i:C = US, O = Google Trust Services, CN = WE1
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA256
   v:NotBefore: Mar  3 19:40:17 2026 GMT; NotAfter: Jun  1 20:40:09 2026 GMT
 1 s:C = US, O = Google Trust Services, CN = WE1
   i:C = US, O = Google Trust Services LLC, CN = GTS Root R4
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA384
   v:NotBefore: Dec 13 09:00:00 2023 GMT; NotAfter: Feb 20 14:00:00 2029 GMT
 2 s:C = US, O = Google Trust Services LLC, CN = GTS Root R4
   i:C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
   a:PKEY: id-ecPublicKey, 384 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 15 03:43:21 2023 GMT; NotAfter: Jan 28 00:00:42 2028 GMT
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDkzCCAzmgAwIBAgIQSP3STNc6AooOdkHFF+hstjAKBggqhkjOPQQDAjA7MQsw
CQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMQwwCgYD
VQQDEwNXRTEwHhcNMjYwMzAzMTk0MDE3WhcNMjYwNjAxMjA0MDA5WjAVMRMwEQYD
VQQDEwp1bnNsb3RoLmFpMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEYpJLiE8N
N3ixrlGL9XmZDe0PwFphm2CYEQlM96o3KryxtaBmU8yGDmlKOk3TlBnaGddpPk2r
LEc7jxIyHO5JL6OCAkMwggI/MA4GA1UdDwEB/wQEAwIHgDATBgNVHSUEDDAKBggr
BgEFBQcDATAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTJ/RDZy2uN3XMkHK2VDTMu
gR7gCTAfBgNVHSMEGDAWgBSQd5I1Z8T/qMyp5nvZgHl7zJP5ODBeBggrBgEFBQcB
AQRSMFAwJwYIKwYBBQUHMAGGG2h0dHA6Ly9vLnBraS5nb29nL3Mvd2UxL1NQMDAl
BggrBgEFBQcwAoYZaHR0cDovL2kucGtpLmdvb2cvd2UxLmNydDAVBgNVHREEDjAM
ggp1bnNsb3RoLmFpMBMGA1UdIAQMMAowCAYGZ4EMAQIBMDYGA1UdHwQvMC0wK6Ap
oCeGJWh0dHA6Ly9jLnBraS5nb29nL3dlMS9UNThxM3gwanlYSS5jcmwwggEEBgor
BgEEAdZ5AgQCBIH1BIHyAPAAdgBkEcRspBLsp4kcogIuALyrTygH1B41J6vq/tUD
yX3N8AAAAZy1bhvNAAAEAwBHMEUCIDCg9XaV39Ha8I202HnUIR8vwIGYpsHHm4mq
EG2iQ2kiAiEAhGx1DQzTfU4uOa78sRoiJVXzIItEplQ9hLTUUx3OeioAdgAOV5S8
866pPjMbLJkHs/eQ35vCPXEyJd0hqSWsYcVOIQAAAZy1bhvbAAAEAwBHMEUCIQC4
umzkOa4ejDUwSzMKSiDcMjtdcSqKG5mjK1lp8lE2eQIgEtcfTYql30RYZF0dhQ49
GAwR2sL3V4dAdwkub89OR/wwCgYIKoZIzj0EAwIDSAAwRQIgA6HS6Vi8ai1P0Gdd
KO8ZDHe9yhsiLIFhJmU80KBaE8ACIQC5UFyPiV+8VkkH0ZP61Epao4lVKHiLhd1a
7Elz8JSJYA==
-----END CERTIFICATE-----
subject=CN = unsloth.ai
issuer=C = US, O = Google Trust Services, CN = WE1
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2961 bytes and written 292 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-ECDSA-CHACHA20-POLY1305
Server public key is 256 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-ECDSA-CHACHA20-POLY1305
    Session-ID: C957309E58A2E9C08A7BE94D5D859F82A2B7055D9A0CA74DF6750CDC9AC85197
    Session-ID-ctx: 
    Master-Key: 952939F8C20A29C0A13B6285A0E158FAAC59416B736CF6F5B940D64FB579F6B00678DC35435D31BE66CBF5E3039AB940
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 64800 (seconds)
    TLS session ticket:
    0000 - 03 72 63 80 2b 91 59 be-82 9c 09 4d d5 bc 4d 54   .rc.+.Y....M..MT
    0010 - 14 1e f8 33 42 de b1 f0-05 8f a5 b2 55 e9 b5 2d   ...3B.......U..-
    0020 - dc 55 5d 0f c1 20 33 06-8c 01 a9 73 86 ad 4c ef   .U].. 3....s..L.
    0030 - 68 1d 68 f2 c3 fc 06 ca-dc 00 1e 01 fe c7 cf 33   h.h............3
    0040 - f8 14 9d b0 32 4b 4f 29-e6 c4 8b 90 de 3b 7e 48   ....2KO).....;~H
    0050 - a4 88 e3 c2 63 a4 64 c0-60 80 39 3c 1d 5b 7a a3   ....c.d.`.9<.[z.
    0060 - 02 a7 41 54 ab 25 86 90-67 ac 68 96 68 b2 d1 ec   ..AT.%..g.h.h...
    0070 - 2a 7c 88 8f b5 e1 a4 5b-ef 44 0e 12 77 b5 36 51   *|.....[.D..w.6Q
    0080 - 44 78 e1 30 5a 1b 98 a5-88 1f 01 37 6a 3c 53 6f   Dx.0Z......7j<So
    0090 - e1 7c ad ed b4 b0 1e 15-10 87 e2 cc ac c2 a7 da   .|..............
    00a0 - 97 6d a4 c2 65 c5 e6 5b-8b 24 77 fa e4 77 84 e2   .m..e..[.$w..w..
    00b0 - f2 8b c6 8e 83 ff e1 41-02 5c 5c d0 1f 60 e3 59   .......A.\\..`.Y

    Start Time: 1774385357
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: yes
---
DONE
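The session-ticket dump above uses openssl's hexdump layout: a hex offset, sixteen hex bytes with a dash after the eighth, then an ASCII column. A minimal sketch of parsing one such line back into raw bytes (the helper name is ours, not part of the crawler or openssl):

```python
# Parse one line of an openssl-style hex dump back into raw bytes.
# The sample line is copied verbatim from the session-ticket dump above.
def parse_hexdump_line(line: str) -> bytes:
    # Drop the offset field ("0000 - ") and keep only the hex field;
    # 16 byte pairs plus separators occupy exactly 47 characters.
    _, _, rest = line.strip().partition(" - ")
    hex_field = rest[:47].replace("-", " ")
    return bytes.fromhex(hex_field)

sample = "0000 - 03 72 63 80 2b 91 59 be-82 9c 09 4d d5 bc 4d 54   .rc.+.Y....M..MT"
print(parse_hexdump_line(sample).hex())  # 037263802b9159be829c094dd5bc4d54
```

This recovers the ticket bytes without the ASCII gutter, which is handy when diffing tickets across handshakes.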

=== tls1_3 ===
depth=2 C = US, O = Google Trust Services LLC, CN = GTS Root R4
verify return:1
depth=1 C = US, O = Google Trust Services, CN = WE1
verify return:1
depth=0 CN = unsloth.ai
verify return:1
CONNECTED(00000003)
---
Certificate chain
 0 s:CN = unsloth.ai
   i:C = US, O = Google Trust Services, CN = WE1
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA256
   v:NotBefore: Mar  3 19:40:17 2026 GMT; NotAfter: Jun  1 20:40:09 2026 GMT
 1 s:C = US, O = Google Trust Services, CN = WE1
   i:C = US, O = Google Trust Services LLC, CN = GTS Root R4
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA384
   v:NotBefore: Dec 13 09:00:00 2023 GMT; NotAfter: Feb 20 14:00:00 2029 GMT
 2 s:C = US, O = Google Trust Services LLC, CN = GTS Root R4
   i:C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
   a:PKEY: id-ecPublicKey, 384 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 15 03:43:21 2023 GMT; NotAfter: Jan 28 00:00:42 2028 GMT
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDkzCCAzmgAwIBAgIQSP3STNc6AooOdkHFF+hstjAKBggqhkjOPQQDAjA7MQsw
CQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMQwwCgYD
VQQDEwNXRTEwHhcNMjYwMzAzMTk0MDE3WhcNMjYwNjAxMjA0MDA5WjAVMRMwEQYD
VQQDEwp1bnNsb3RoLmFpMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEYpJLiE8N
N3ixrlGL9XmZDe0PwFphm2CYEQlM96o3KryxtaBmU8yGDmlKOk3TlBnaGddpPk2r
LEc7jxIyHO5JL6OCAkMwggI/MA4GA1UdDwEB/wQEAwIHgDATBgNVHSUEDDAKBggr
BgEFBQcDATAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTJ/RDZy2uN3XMkHK2VDTMu
gR7gCTAfBgNVHSMEGDAWgBSQd5I1Z8T/qMyp5nvZgHl7zJP5ODBeBggrBgEFBQcB
AQRSMFAwJwYIKwYBBQUHMAGGG2h0dHA6Ly9vLnBraS5nb29nL3Mvd2UxL1NQMDAl
BggrBgEFBQcwAoYZaHR0cDovL2kucGtpLmdvb2cvd2UxLmNydDAVBgNVHREEDjAM
ggp1bnNsb3RoLmFpMBMGA1UdIAQMMAowCAYGZ4EMAQIBMDYGA1UdHwQvMC0wK6Ap
oCeGJWh0dHA6Ly9jLnBraS5nb29nL3dlMS9UNThxM3gwanlYSS5jcmwwggEEBgor
BgEEAdZ5AgQCBIH1BIHyAPAAdgBkEcRspBLsp4kcogIuALyrTygH1B41J6vq/tUD
yX3N8AAAAZy1bhvNAAAEAwBHMEUCIDCg9XaV39Ha8I202HnUIR8vwIGYpsHHm4mq
EG2iQ2kiAiEAhGx1DQzTfU4uOa78sRoiJVXzIItEplQ9hLTUUx3OeioAdgAOV5S8
866pPjMbLJkHs/eQ35vCPXEyJd0hqSWsYcVOIQAAAZy1bhvbAAAEAwBHMEUCIQC4
umzkOa4ejDUwSzMKSiDcMjtdcSqKG5mjK1lp8lE2eQIgEtcfTYql30RYZF0dhQ49
GAwR2sL3V4dAdwkub89OR/wwCgYIKoZIzj0EAwIDSAAwRQIgA6HS6Vi8ai1P0Gdd
KO8ZDHe9yhsiLIFhJmU80KBaE8ACIQC5UFyPiV+8VkkH0ZP61Epao4lVKHiLhd1a
7Elz8JSJYA==
-----END CERTIFICATE-----
subject=CN = unsloth.ai
issuer=C = US, O = Google Trust Services, CN = WE1
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2807 bytes and written 324 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
DONE
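The leaf certificate's validity window shown in the chain above (NotBefore Mar 3 19:40:17 2026, NotAfter Jun 1 20:40:09 2026) can be checked by parsing the dates exactly as openssl prints them; a quick sketch confirms the roughly 90-day lifetime typical of automated issuance:

```python
# Parse the leaf certificate's NotBefore/NotAfter as printed by openssl
# above and compute the validity window. Dates are copied from the report.
from datetime import datetime, timezone

FMT = "%b %d %H:%M:%S %Y %Z"  # e.g. "Mar  3 19:40:17 2026 GMT"
not_before = datetime.strptime("Mar  3 19:40:17 2026 GMT", FMT).replace(tzinfo=timezone.utc)
not_after  = datetime.strptime("Jun  1 20:40:09 2026 GMT", FMT).replace(tzinfo=timezone.utc)
print((not_after - not_before).days)  # 90
```

strptime treats runs of whitespace in the input flexibly, so openssl's space-padded day ("Mar  3") parses without preprocessing.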

Crawler stats

Basic stats
Total execution time:  83 s
Total URLs:            603
Total size:            431 MB
Requests - total time: 173 s
Requests - avg time:   287 ms
Requests - min time:   85 ms
Requests - max time:   3 s
Requests by status:    200: 575, 307: 27, 308: 1
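The reported average request time is consistent with the totals above, assuming it is simply the total request time divided by the total URL count:

```python
# Sanity-check the reported average request time from the report's totals
# (assumption: avg = total request time / total URLs).
total_time_s = 173
total_urls = 603
avg_ms = total_time_s / total_urls * 1000
print(round(avg_ms))  # 287
```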

Analysis stats

Found 21 row(s).
Class::method                                          Exec time 🔽   Exec count
BestPracticeAnalyzer::checkHeadingStructure            4 s            575
BestPracticeAnalyzer::checkNonClickablePhoneNumbers    3.6 s          575
AccessibilityAnalyzer::checkMissingLabels              3.5 s          575
AccessibilityAnalyzer::checkMissingAriaLabels          3.2 s          575
AccessibilityAnalyzer::checkMissingRoles               2.6 s          575
AccessibilityAnalyzer::checkMissingLang                2.2 s          575
BestPracticeAnalyzer::checkMaxDOMDepth                 2 s            575
BestPracticeAnalyzer::checkInlineSvg                   871 ms         575
SslTlsAnalyzer::getTLSandSSLCertificateInfo            656 ms         1
BestPracticeAnalyzer::checkMissingQuotesOnAttributes   406 ms         575
AccessibilityAnalyzer::checkImageAltAttributes         255 ms         575
SecurityAnalyzer::checkHtmlSecurity                    78 ms          575
SeoAndOpenGraphAnalyzer::analyzeHeadings               71 ms          1
SecurityAnalyzer::checkHeaders                         12 ms          575
SeoAndOpenGraphAnalyzer::analyzeSeo                    1 ms           1
SeoAndOpenGraphAnalyzer::analyzeOpenGraph              1 ms           1
BestPracticeAnalyzer::checkTitleUniqueness              0 ms          1
BestPracticeAnalyzer::checkMetaDescriptionUniqueness    0 ms          1
BestPracticeAnalyzer::checkBrotliSupport                0 ms          1
BestPracticeAnalyzer::checkWebpSupport                  0 ms          1
BestPracticeAnalyzer::checkAvifSupport                  0 ms          1

Content processor stats

Found 12 row(s).
Class::method                                             Exec time 🔽   Exec count
HtmlProcessor::findUrls                                   3.5 s          603
NextJsProcessor::applyContentChangesBeforeUrlParsing      3.4 s          575
JavaScriptProcessor::findUrls                             2.2 s          575
CssProcessor::findUrls                                    153 ms         575
AstroProcessor::findUrls                                  45 ms          575
AstroProcessor::applyContentChangesBeforeUrlParsing       0 ms           575
NextJsProcessor::findUrls                                 0 ms           575
JavaScriptProcessor::applyContentChangesBeforeUrlParsing  0 ms           575
HtmlProcessor::applyContentChangesBeforeUrlParsing        0 ms           603
SvelteProcessor::applyContentChangesBeforeUrlParsing      0 ms           575
SvelteProcessor::findUrls                                 0 ms           575
CssProcessor::applyContentChangesBeforeUrlParsing         0 ms           575

Crawler info

Version:     2.1.0.20260317
Executed At: 2026-03-24 20:47:56
Command:     siteone-crawler --url=https://unsloth.ai/docs --markdown-export-dir=/tmp/siteone-reextract-unsloth --markdown-exclude-selector=header,footer,nav,.sidebar,.menu,.breadcrumb,script,style --timeout=30 --workers=3 --disable-javascript --disable-styles --disable-fonts --disable-images --disable-files --no-color --hide-progress-bar --output=text --include-regex=/docs/
Hostname:    ubuntu-8gb-hel1-1
User-Agent:  Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/26.0.0.0 Safari/537.36 siteone-crawler/2.1.0.20260317