Crawler Report for lmstudio.ai

Summary

Website Quality Score: 7.4 (Good)

  • Performance: 10.0
  • SEO: 4.6
  • Security: 8.5
  • Accessibility: 5.0
  • Best Practices: 9.2
  • ⛔ Skipped URLs - 104 skipped URLs found.
  • ⛔ Redirects - 19 redirects found.
  • ⛔ 404 CRITICAL - 18 non-existent pages found. Several look like outdated paths; see the redirect sketch after this list.
  • ⛔ 7 page(s) with multiple <h1> headings.
  • ⚠️ 156 page(s) do not support Brotli compression.
  • ⚠️ No WebP image found on the website.
  • ⚠️ No AVIF image found on the website.
  • ⚠️ 72 page(s) with skipped heading levels.
  • ⚠️ 156 page(s) without aria labels.
  • ⚠️ 156 page(s) without role attributes.
  • ⚠️ Security - 696 page-level warning(s) found (see the Security section).
  • ⏩ Loaded robots.txt for domain 'lmstudio.ai': status code 200, size 64 B and took 112 ms.
  • ⏩ External URLs - 104 external URL(s) found.
  • ✅ SSL/TLS certificate is valid until Jun 8 11:09:59 2026 GMT. Issued by C = US, O = Google Trust Services, CN = WE1. Subject is CN = lmstudio.ai.
  • ✅ Performance OK - all non-media URLs are faster than 3 seconds.
  • ✅ HTTP headers - found 21 unique headers.
  • ✅ All 132 unique title(s) are within the allowed 10% duplication; the most duplicated title reaches 1%.
  • ✅ All 129 description(s) are within the allowed 10% duplication; the most duplicated description reaches 8%.
  • ✅ All pages have quoted attributes.
  • ✅ All pages have inline SVGs smaller than 5120 bytes.
  • ✅ All pages have inline SVGs with less than 5 duplicates.
  • ✅ All pages have valid inline SVGs, or none at all.
  • ✅ All pages have an <h1> heading.
  • ✅ All pages have DOM depth less than 30.
  • ✅ All pages have clickable (interactive) phone numbers.
  • ✅ All pages have valid HTML.
  • ✅ All pages have image alt attributes.
  • ✅ All pages have form labels.
  • ✅ All pages have a lang attribute.
  • ✅ DNS IPv4 OK: domain lmstudio.ai resolved to 172.67.69.92, 104.26.7.153, 104.26.6.153 (DNS server: 127.0.0.53).
  • ✅ DNS IPv6 OK: domain lmstudio.ai resolved to 2606:4700:20::681a:799, 2606:4700:20::ac43:455c, 2606:4700:20::681a:699 (DNS server: 127.0.0.53).
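
Several of the 404s below appear to be outdated documentation paths: the crawl shows /docs/basics/chat returning 404 while /docs/app/basics/chat returns 200, and likewise for /docs/advanced/speculative-decoding. A minimal sketch of a redirect map, assuming the site is a Next.js app (suggested by its X-Powered-By header) and that these pages did in fact move; both the file and the destinations are unverified assumptions:

    // next.config.ts -- hypothetical redirect map; the framework and the
    // destinations are inferred from the crawl, not confirmed routes.
    import type { NextConfig } from 'next';

    const nextConfig: NextConfig = {
      async redirects() {
        return [
          // the table shows /docs/basics/chat as 404 while /docs/app/basics/chat is 200
          { source: '/docs/basics/chat', destination: '/docs/app/basics/chat', permanent: true },
          { source: '/docs/advanced/speculative-decoding', destination: '/docs/app/advanced/speculative-decoding', permanent: true },
        ];
      },
    };

    export default nextConfig;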

Visited URLs

Found 193 row(s).
URL | Status | Type | Time | Size | Cache
/docs | 307 | Redirect | 80 ms | 83 B | None
/docs/app | 200 | HTML | 138 ms | 704 kB | None
/docs/api/rest-api | 307 | Redirect | 1.2 s | 113 B | None
/docs/app/basics | 200 | HTML | 1.2 s | 693 kB | None
/docs/app/advanced/prompt-template | 200 | HTML | 1.4 s | 682 kB | None
/docs/app/mcp | 200 | HTML | 1.1 s | 706 kB | None
/docs/integrations | 200 | HTML | 1.1 s | 644 kB | None
/docs/app/advanced/parallel-requests | 200 | HTML | 114 ms | 673 kB | None
/docs/app/advanced/per-model | 200 | HTML | 166 ms | 690 kB | None
/docs/app/presets/push | 200 | HTML | 523 ms | 674 kB | None
/docs/app/user-interface/modes | 200 | HTML | 86 ms | 672 kB | None
/docs/app/basics/rag | 200 | HTML | 311 ms | 678 kB | None
/docs/app/system-requirements | 200 | HTML | 161 ms | 680 kB | None
/docs/app/presets/publish | 200 | HTML | 105 ms | 676 kB | None
/docs/app/presets/pull | 200 | HTML | 275 ms | 669 kB | None
/docs/app/plugins/mcp/deeplink | 307 | Redirect | 31 ms | 109 B | None
/docs/app/mcp/deeplink | 200 | HTML | 293 ms | 691 kB | None
/docs/typescript | 200 | HTML | 85 ms | 715 kB | None
/docs/python | 200 | HTML | 89 ms | 729 kB | None
/docs/app/advanced/speculative-decoding | 200 | HTML | 399 ms | 703 kB | None
/docs/app/basics/download-model | 200 | HTML | 130 ms | 683 kB | None
/docs/developer | 200 | HTML | 109 ms | 755 kB | None
/docs/cli | 200 | HTML | 138 ms | 742 kB | None
/docs/app/modelyaml | 200 | HTML | 117 ms | 763 kB | None
/docs/lmlink | 200 | HTML | 81 ms | 657 kB | None
/docs/app/offline | 200 | HTML | 337 ms | 681 kB | None
/docs/app/presets | 200 | HTML | 167 ms | 704 kB | None
/docs/app/advanced/import-model | 200 | HTML | 156 ms | 685 kB | None
/docs/app/plugins/mcp | 307 | Redirect | 32 ms | 91 B | None
/docs/app/user-interface/languages | 200 | HTML | 336 ms | 744 kB | None
/docs/app/user-interface/themes | 200 | HTML | 297 ms | 667 kB | None
/docs/app/modelyaml/publish | 200 | HTML | 272 ms | 703 kB | None
/docs/api/openai-api | 307 | Redirect | 54 ms | 123 B | None
/docs/app/presets/import | 200 | HTML | 373 ms | 694 kB | None
/docs/developer/core/headless | 200 | HTML | 328 ms | 724 kB | None
/docs/developer/rest-api | 307 | Redirect | 40 ms | 105 B | None
/docs/app/basics/chat | 200 | HTML | 278 ms | 692 kB | None
/docs/system-requirements | 307 | Redirect | 57 ms | 123 B | None
/docs/basics/download-models | 404 | HTML | 56 ms | 47 kB | 0 s
/docs/advanced/sideload | 307 | Redirect | 29 ms | 123 B | None
/docs/integrations/codex | 200 | HTML | 94 ms | 664 kB | None
/docs/integrations/claude-code | 200 | HTML | 322 ms | 677 kB | None
/docs/basics/chat | 404 | HTML | 72 ms | 47 kB | 0 s
/docs/configuration/load | 404 | HTML | 39 ms | 47 kB | 0 s
/docs/integrations/lmlink | 200 | HTML | 269 ms | 667 kB | None
/docs/configuration/inference | 404 | HTML | 68 ms | 47 kB | 0 s
/docs/advanced/context | 404 | HTML | 42 ms | 47 kB | 0 s
/docs/typescript/llm-prediction/chat-completion | 200 | HTML | 307 ms | 804 kB | None
/docs/typescript/authentication | 200 | HTML | 468 ms | 704 kB | None
/docs/typescript/tokenization | 200 | HTML | 269 ms | 727 kB | None
/docs/typescript/model-info/get-context-length | 200 | HTML | 311 ms | 716 kB | None
/docs/typescript/llm-prediction/working-with-chats | 200 | HTML | 265 ms | 732 kB | None
/docs/typescript/agent/act | 200 | HTML | 410 ms | 804 kB | None
/docs/typescript/plugins/dependencies | 200 | HTML | 386 ms | 686 kB | None
/docs/typescript/llm-prediction/parameters | 200 | HTML | 632 ms | 721 kB | None
/docs/typescript/embedding | 200 | HTML | 437 ms | 691 kB | None
/docs/typescript/manage-models/list-loaded | 200 | HTML | 389 ms | 686 kB | None
/docs/typescript/llm-prediction/cancelling-predictions | 200 | HTML | 386 ms | 717 kB | None
/docs/typescript/llm-prediction/structured-response | 200 | HTML | 435 ms | 759 kB | None
/docs/typescript/llm-prediction/image-input | 200 | HTML | 154 ms | 716 kB | None
/docs/typescript/api-reference/llm-prediction-config-input | 200 | HTML | 395 ms | 713 kB | None
/docs/typescript/manage-models/loading | 200 | HTML | 337 ms | 756 kB | None
/docs/typescript/llm-prediction/speculative-decoding | 200 | HTML | 325 ms | 702 kB | None
/docs/typescript/plugins | 200 | HTML | 184 ms | 703 kB | None
/docs/typescript/agent/tools | 200 | HTML | 315 ms | 736 kB | None
/docs/typescript/manage-models/list-downloaded | 200 | HTML | 239 ms | 706 kB | None
/docs/cli/get | 307 | Redirect | 68 ms | 117 B | None
/docs/typescript/api-reference/llm-load-model-config | 200 | HTML | 292 ms | 703 kB | None
/docs/typescript/model-info/get-model-info | 200 | HTML | 282 ms | 689 kB | None
/docs/typescript/llm-prediction/completion | 200 | HTML | 285 ms | 748 kB | None
/docs/typescript/project-setup | 200 | HTML | 271 ms | 693 kB | None
/docs/python/manage-models/list-loaded | 200 | HTML | 232 ms | 688 kB | None
/docs/python/embedding | 200 | HTML | 234 ms | 679 kB | None
/docs/python/getting-started/authentication | 200 | HTML | 263 ms | 697 kB | None
/docs/python/llm-prediction/chat-completion | 200 | HTML | 137 ms | 866 kB | None
/docs/python/model-info/get-load-config | 200 | HTML | 427 ms | 680 kB | None
/docs/python/getting-started/repl | 200 | HTML | 328 ms | 738 kB | None
/docs/python/getting-started/project-setup | 200 | HTML | 255 ms | 750 kB | None
/docs/python/llm-prediction/completion | 200 | HTML | 113 ms | 813 kB | None
/docs/python/manage-models/list-downloaded | 200 | HTML | 330 ms | 703 kB | None
/docs/python/llm-prediction/image-input | 200 | HTML | 323 ms | 741 kB | None
/docs/python/llm-prediction/parameters | 200 | HTML | 294 ms | 742 kB | None
/docs/python/tokenization | 200 | HTML | 436 ms | 708 kB | None
/docs/python/llm-prediction/speculative-decoding | 200 | HTML | 436 ms | 687 kB | None
/docs/python/agent/act | 200 | HTML | 171 ms | 787 kB | None
/docs/python/manage-models/loading | 200 | HTML | 373 ms | 791 kB | None
/docs/python/llm-prediction/cancelling-predictions | 200 | HTML | 364 ms | 699 kB | None
/docs/python/model-info/get-context-length | 200 | HTML | 239 ms | 703 kB | None
/docs/python/agent/tools | 200 | HTML | 251 ms | 756 kB | None
/docs/python/model-info/get-model-info | 200 | HTML | 289 ms | 694 kB | None
/docs/python/llm-prediction/working-with-chats | 200 | HTML | 254 ms | 707 kB | None
/docs/python/agent | 404 | HTML | 70 ms | 47 kB | 0 s
/docs/python/llm-prediction/structured-response | 200 | HTML | 278 ms | 741 kB | None
/docs/developer/core/authentication | 200 | HTML | 83 ms | 714 kB | None
/docs/developer/openai-compat/models | 200 | HTML | 341 ms | 675 kB | None
/docs/developer/rest/chat | 200 | HTML | 277 ms | 829 kB | None
/docs/developer/rest/load | 200 | HTML | 101 ms | 729 kB | None
/docs/developer/rest/download | 200 | HTML | 76 ms | 699 kB | None
/docs/developer/anthropic-compat | 200 | HTML | 315 ms | 716 kB | None
/docs/developer/rest | 200 | HTML | 80 ms | 711 kB | None
/docs/developer/rest/download-status | 200 | HTML | 326 ms | 699 kB | None
/docs/developer/api-changelog | 200 | HTML | 146 ms | 872 kB | None
/docs/developer/core/ttl-and-auto-evict | 200 | HTML | 296 ms | 734 kB | None
/docs/developer/rest/list | 200 | HTML | 98 ms | 742 kB | None
/docs/developer/rest/streaming-events | 200 | HTML | 390 ms | 1002 kB | None
/docs/developer/rest/endpoints | 200 | HTML | 338 ms | 848 kB | None
/docs/developer/core/server | 200 | HTML | 82 ms | 697 kB | None
/docs/developer/openai-compat/responses | 200 | HTML | 266 ms | 701 kB | None
/docs/developer/openai-compat | 200 | HTML | 91 ms | 714 kB | None
/docs/developer/rest/quickstart | 200 | HTML | 298 ms | 838 kB | None
/docs/developer/core/headless_llmster | 200 | HTML | 118 ms | 735 kB | None
/docs/developer/openai-compat/structured-output | 200 | HTML | 89 ms | 754 kB | None
/docs/developer/rest/stateful-chats | 200 | HTML | 253 ms | 718 kB | None
/docs/developer/openai-compat/tools | 200 | HTML | 173 ms | 1 MB | None
/docs/developer/rest/unload | 200 | HTML | 268 ms | 686 kB | None
/docs/developer/core/lmlink | 200 | HTML | 258 ms | 681 kB | None
/docs/developer/openai-compat/chat-completions | 200 | HTML | 279 ms | 700 kB | None
/docs/developer/openai-compat/embeddings | 200 | HTML | 293 ms | 686 kB | None
/docs/developer/openai-compat/completions | 200 | HTML | 267 ms | 672 kB | None
/docs/developer/anthropic-compat/messages | 200 | HTML | 348 ms | 704 kB | None
/docs/developer/core/mcp | 200 | HTML | 253 ms | 896 kB | None
/docs/cli/contributing | 200 | HTML | 256 ms | 667 kB | None
/docs/cli/local-models/ps | 200 | HTML | 194 ms | 690 kB | None
/docs/cli/serve/server-stop | 200 | HTML | 352 ms | 671 kB | None
/docs/cli/develop-and-publish/push | 200 | HTML | 309 ms | 695 kB | None
/docs/cli/develop-and-publish/dev | 200 | HTML | 276 ms | 686 kB | None
/docs/cli/local-models/import | 200 | HTML | 276 ms | 701 kB | None
/docs/cli/serve/log-stream | 200 | HTML | 110 ms | 702 kB | None
/docs/cli/link/link-status | 200 | HTML | 291 ms | 691 kB | None
/docs/cli/runtime | 404 | HTML | 61 ms | 47 kB | 0 s
/docs/cli/daemon/daemon-status | 200 | HTML | 275 ms | 697 kB | None
/docs/cli/develop-and-publish/login | 200 | HTML | 243 ms | 689 kB | None
/docs/cli/daemon/daemon-up | 200 | HTML | 144 ms | 693 kB | None
/docs/cli/link/link-enable | 200 | HTML | 232 ms | 685 kB | None
/docs/cli/local-models/chat | 200 | HTML | 283 ms | 708 kB | None
/docs/cli/daemon/daemon-update | 200 | HTML | 248 ms | 691 kB | None
/docs/cli/local-models/load | 200 | HTML | 133 ms | 757 kB | None
/docs/cli/link/link-disable | 200 | HTML | 309 ms | 678 kB | None
/docs/cli/link/link-set-preferred-device | 200 | HTML | 252 ms | 684 kB | None
/docs/cli/serve/server-start | 200 | HTML | 249 ms | 693 kB | None
/docs/cli/local-models/get | 200 | HTML | 123 ms | 704 kB | None
/docs/cli/runtime/runtime | 200 | HTML | 258 ms | 690 kB | None
/docs/cli/serve/server-status | 200 | HTML | 231 ms | 707 kB | None
/docs/cli/local-models/ls | 200 | HTML | 125 ms | 711 kB | None
/docs/cli/develop-and-publish/clone | 200 | HTML | 275 ms | 684 kB | None
/docs/cli/daemon/daemon-down | 200 | HTML | 288 ms | 676 kB | None
/docs/lmlink/basics/preferred-device | 200 | HTML | 145 ms | 654 kB | None
/docs/cli/link/link-set-device-name | 200 | HTML | 406 ms | 679 kB | None
/docs/lmlink/basics/add-device | 200 | HTML | 337 ms | 673 kB | None
/docs/lmlink/basics | 200 | HTML | 164 ms | 645 kB | None
/docs/app/advanced/lm-runtimes | 404 | HTML | 78 ms | 47 kB | 0 s
/docs/configuration/per-model | 307 | Redirect | 51 ms | 121 B | None
/docs/lmlink/basics/faq | 200 | HTML | 249 ms | 657 kB | None
/docs/modes | 307 | Redirect | 29 ms | 125 B | None
/docs/app/modelyaml/ | 308 | Redirect | 29 ms | 103 B | None
/docs/app/basics/import-model | 307 | Redirect | 31 ms | 127 B | None
/docs/typescript/llm-prediction/(./structured-responses) | 404 | HTML | 43 ms | 47 kB | 0 s
/docs/cli/ps | 307 | Redirect | 32 ms | 115 B | None
/docs/advanced/speculative-decoding | 404 | HTML | 46 ms | 47 kB | 0 s
/docs/api/ttl-and-auto-evict | 307 | Redirect | 30 ms | 143 B | None
/docs/typescript/plugins/publish-plugins | 200 | HTML | 219 ms | 693 kB | None
/docs/typescript/plugins/tools-provider | 200 | HTML | 218 ms | 690 kB | None
/docs/typescript/plugins/prompt-preprocessor | 200 | HTML | 216 ms | 687 kB | None
/docs/cli/ls | 307 | Redirect | 49 ms | 115 B | None
/docs/typescript/plugins/generator | 200 | HTML | 266 ms | 686 kB | None
/docs/api/sdk/get-context-length | 404 | HTML | 47 ms | 47 kB | 0 s
/docs/typescript/plugins/custom-configuration | 200 | HTML | 270 ms | 701 kB | None
/docs/api/sdk/load-model | 404 | HTML | 39 ms | 47 kB | 0 s
/docs/advanced/per-model | 404 | HTML | 40 ms | 47 kB | 0 s
/docs/python/llm-prediction/(./structured-responses) | 404 | HTML | 41 ms | 47 kB | 0 s
/docs/app/api/ttl-and-auto-evict | 307 | Redirect | 28 ms | 143 B | None
/docs/developer/core/tools | 307 | Redirect | 29 ms | 135 B | None
/docs/developer/openai-api | 307 | Redirect | 29 ms | 123 B | None
/docs/developer/core/server/settings | 200 | HTML | 284 ms | 691 kB | None
/docs/developer/core/server/serve-on-network | 200 | HTML | 247 ms | 675 kB | None
/docs/typescript/plugins/tools-provider/multiple-tools | 200 | HTML | 275 ms | 721 kB | None
/docs/typescript/plugins/tools-provider/status-reports-and-warnings | 200 | HTML | 337 ms | 751 kB | None
/docs/typescript/plugins/tools-provider/handling-aborts | 200 | HTML | 262 ms | 712 kB | None
/docs/typescript/plugins/tools-provider/custom-configuration | 200 | HTML | 233 ms | 733 kB | None
/docs/typescript/plugins/tools-provider/single-tool | 200 | HTML | 399 ms | 731 kB | None
/docs/typescript/plugins/prompt-preprocessor/handling-aborts | 200 | HTML | 338 ms | 675 kB | None
/docs/typescript/plugins/prompt-preprocessor/custom-configuration | 200 | HTML | 295 ms | 717 kB | None
/docs/typescript/plugins/prompt-preprocessor/examples | 200 | HTML | 277 ms | 711 kB | None
/docs/typescript/plugins/prompt-preprocessor/custom-status-report | 200 | HTML | 225 ms | 696 kB | None
/docs/typescript/plugins/generator/tool-calling-generators | 200 | HTML | 271 ms | 695 kB | None
/docs/typescript/plugins/generator/text-only-generators | 200 | HTML | 228 ms | 705 kB | None
/docs/typescript/plugins/custom-configuration/defining-new-fields | 200 | HTML | 268 ms | 766 kB | None
/docs/typescript/plugins/custom-configuration/accessing-config | 200 | HTML | 277 ms | 696 kB | None
/docs/typescript/plugins/plugins/configurations | 404 | HTML | 86 ms | 47 kB | 0 s
/docs/typescript/plugins/agent/tools | 404 | HTML | 49 ms | 47 kB | 0 s
/docs/typescript/plugins/custom-configuration/config-ts | 200 | HTML | 300 ms | 716 kB | None
/docs/typescript/plugins/prompt-preprocessor/configurations | 404 | HTML | 79 ms | 47 kB | 0 s
/docs/typescript/plugins/generator/configurations | 404 | HTML | 45 ms | 47 kB | 0 s
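
After shipping redirects for the paths above, the 404 and 307 rows are easy to re-check. A small sketch, assuming a fetch-capable runtime such as Node 18+ (run with e.g. npx tsx check-links.ts); the two sample paths are taken from the table:

    // check-links.ts -- hypothetical helper to re-verify flagged paths.
    const paths: string[] = [
      '/docs/basics/chat', // 404 in the table above
      '/docs/cli/ps',      // 307 in the table above
    ];

    for (const p of paths) {
      // redirect: 'manual' stops fetch from following redirects,
      // so the raw status code and Location header stay visible
      const res = await fetch(`https://lmstudio.ai${p}`, { redirect: 'manual' });
      console.log(res.status, p, res.headers.get('location') ?? '');
    }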

Best practices

Found 10 row(s).
Analysis name | OK | Notice | Warning | Critical
Invalid inline SVGs | 30 | 0 | 0 | 0
Heading structure | 239 | 0 | 81 | 7
Large inline SVGs (> 5120 B) | 30 | 0 | 0 | 0
Duplicate inline SVGs (> 5 and > 1024 B) | 30 | 0 | 0 | 0
DOM depth (> 30) | 174 | 0 | 0 | 0
Title uniqueness (> 10%) | 132 | 0 | 0 | 0
Description uniqueness (> 10%) | 129 | 0 | 0 | 0
Brotli support | 0 | 0 | 156 | 0
WebP support | 0 | 0 | 1 | 0
AVIF support | 0 | 0 | 1 | 0
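
The Brotli rows are usually fixed at the CDN or origin server rather than in the application (on Cloudflare, compression is a zone-level setting). The WebP/AVIF rows can be handled in the app itself if it is Next.js, as the X-Powered-By header suggests; a sketch assuming raster images are routed through next/image:

    // next.config.ts -- a sketch assuming images are served through next/image.
    import type { NextConfig } from 'next';

    const nextConfig: NextConfig = {
      images: {
        // next/image negotiates these formats via the browser's Accept header
        formats: ['image/avif', 'image/webp'],
      },
    };

    export default nextConfig;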

Large inline SVGs

No problems found.


Duplicate inline SVGs

No problems found.


Invalid inline SVGs

No problems found.


Missing quotes on attributes

No problems found.


DOM depth

No problems found.


Heading structure

Severity | Occurs | Detail | Affected URLs (max 5)
critical | 9 | Multiple <h1> headings found. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 48 | Heading structure is skipping levels: found an <h3> after an <h1>. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 27 | Heading structure is skipping levels: found an <h6> after an <h3>. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 12 | Heading structure is skipping levels: found an <h4> after an <h2>. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 10 | Heading structure is skipping levels: found an <h5> after an <h1>. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 2 | Heading structure is skipping levels: found an <h6> after an <h1>. | URL 1, URL 2
warning | 1 | Heading structure is skipping levels: found an <h5> after an <h3>. | /docs/app/basics
warning | 1 | Heading structure is skipping levels: found an <h6> after an <h4>. | /docs/app/advanced/prompt-template
warning | 1 | Heading structure is skipping levels: found an <h6> after an <h2>. | /docs/app/basics/rag
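
Every row above has the same remedy: step down one heading level at a time. An illustrative TSX sketch assuming the automatic JSX runtime (the component and headings are invented, not the site's actual markup):

    // Illustrative only: keep heading levels sequential so the outline stays intact.
    export function DocPage() {
      return (
        <article>
          <h1>Page title</h1>
          <h2>Section</h2>     {/* not <h3>: an <h3> straight after an <h1> skips a level */}
          <h3>Subsection</h3>  {/* not <h6>: same rule further down */}
        </article>
      );
    }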

Non-clickable phone numbers

No problems found.


Title uniqueness

No problems found.


Description uniqueness

No problems found.

Accessibility

Analysis name | OK | Notice | Warning | Critical
Missing image alt attributes | 60 | 0 | 0 | 0
Missing aria labels | 191 | 0 | 91 | 2
Missing html lang attribute | 1 | 0 | 0 | 0
Missing roles | 0 | 0 | 1 | 0

Valid HTML

No problems found.


Missing image alt attributes

No problems found.


Missing form labels

No problems found.


Missing aria labels

Found 200 row(s).
Severity | Occurs | Detail | Affected URLs (max 5)
critical | 183 | <select ***> | URL 1, URL 2, URL 3, URL 4, URL 5
critical | 1 | <textarea class="border-* placeholder:text-* flex border p-* focus:border-* focus-* focus-* focus-* focus-* focus-* disabled:cursor-* disabled:opacity-* min-* w-* rounded-* bg-* font-* text-* md:text-*" *** > | /docs/app/mcp/deeplink
warning | 936 | <a class="no-* flex items-* justify-* gap-* text-* text-* font-* opacity-* text-* hover:text-* border relative top-* w-* px-* py-* hover:bg-* rounded-* border-* border-* border-* border-* bg-* dark:bg-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 936 | <a class="no-* flex w-* items-* justify-* gap-* rounded-* px-* py-* text-* text-* font-* opacity-* text-* hover:text-* hover:bg-* border border-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 748 | <a class="whitespace-* rounded p-* px-* opacity-* transition-* duration-* ease-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 654 | <a class="" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 624 | <button class="border inline-* items-* justify-* whitespace-* text-* font-* ring-* focus-* focus-* focus-* focus-* disabled:pointer-* disabled:opacity-* transition-* duration-* ease-* border-* bg-* hover:bg-* hover:text-* rounded-* px-* h-* w-* !p-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 624 | <a class="whitespace-* p-* opacity-* transition-* duration-* ease-* flex w-* items-* bg-* justify-* rounded-* py-* gap-* px-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 624 | <a class="whitespace-* p-* opacity-* transition-* duration-* ease-* flex items-* bg-* justify-* rounded-* py-* gap-* px-* w-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 183 | <button class="border-* placeholder:text-* flex items-* justify-* gap-* rounded-* border bg-* px-* py-* text-* focus:outline-* disabled:cursor-* disabled:opacity-* [& *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <button class="border inline-* items-* justify-* whitespace-* rounded-* font-* ring-* focus-* focus-* focus-* focus-* disabled:pointer-* disabled:opacity-* transition-* duration-* ease-* text-* hover:text-* border-* h-* w-* text-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="flex w-* items-* justify-* rounded-* border border-* py-* px-* font-* hover:border-* transition-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <button class="inline-* items-* justify-* whitespace-* rounded-* font-* ring-* focus-* focus-* focus-* focus-* disabled:pointer-* disabled:opacity-* transition-* duration-* ease-* text-* hover:bg-* border pointer-* z-* h-* w-* flex-* bg-* p-* text-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="whitespace-* rounded p-* px-* transition-* duration-* ease-* opacity-* hover:opacity-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="whitespace-* p-* opacity-* duration-* ease-* bg-* flex w-* items-* justify-* rounded-* border border-* py-* px-* font-* hover:border-* transition-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="whitespace-* p-* opacity-* duration-* ease-* flex w-* rounded-* border border-* py-* px-* font-* hover:border-* transition-* flex-* items-* justify-* gap-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="no-* flex items-* justify-* gap-* text-* text-* font-* opacity-* hover:text-* w-* hover:bg-* text-* border border-* relative py-* px-* top-* rounded-* border-* dark:bg-* border-* border-* border-* border-* bg-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="whitespace-* p-* opacity-* duration-* ease-* w-* justify-* rounded-* border border-* py-* px-* font-* hover:border-* transition-* flex flex-* items-* gap-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <button class="border inline-* items-* justify-* whitespace-* text-* font-* ring-* focus-* focus-* focus-* focus-* disabled:pointer-* disabled:opacity-* transition-* duration-* ease-* bg-* text-* hover:bg-* border-* active:bg-* h-* rounded-* px-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="opacity-* text-* min-* p-* no-* hover:opacity-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <button class="inline-* whitespace-* text-* font-* ring-* focus-* focus-* focus-* focus-* disabled:pointer-* disabled:opacity-* transition-* duration-* ease-* hover:bg-* hover:text-* rounded-* px-* h-* w-* min-* items-* justify-* gap-* border border-* opacity-* hover:opacity-* cursor-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="whitespace-* rounded p-* px-* opacity-* transition-* duration-* ease-* !bg-* hover:text-* hover:dark:text-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="whitespace-* p-* opacity-* transition-* duration-* ease-* border-* bg-* text-* hover:bg-* hover:text-* mx-* rounded-* px-* py-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="no-* flex w-* items-* justify-* gap-* rounded-* px-* py-* text-* text-* font-* opacity-* hover:text-* hover:bg-* text-* border border-* dark:bg-* bg-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="whitespace-* rounded p-* px-* opacity-* transition-* duration-* ease-* bg-* py-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 156 | <a class="whitespace-* p-* opacity-* duration-* ease-* flex w-* items-* justify-* rounded-* border border-* py-* px-* font-* hover:border-* transition-*" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 114 | <button class="border inline-* items-* justify-* whitespace-* text-* font-* ring-* focus-* focus-* focus-* focus-* disabled:pointer-* disabled:opacity-* transition-* duration-* ease-* text-* hover:text-* border-* border-* border-* border-* rounded-* h-* px-* py-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 84 each | <a class="rounded p-* px-* flex text-* text-* hover:text-* hover:border-* hover:bg-* opacity-* hover:opacity-* transition-* duration-* ease-*" id="…" *** > for 24 sidebar-link ids (one row per id): typescript, typescript/project-setup, typescript/authentication, typescript/tokenization, typescript/embedding, typescript/plugins, typescript/plugins/dependencies, typescript/agent/act, typescript/agent/tools, typescript/llm-prediction/chat-completion, typescript/llm-prediction/completion, typescript/llm-prediction/structured-response, typescript/llm-prediction/working-with-chats, typescript/llm-prediction/image-input, typescript/llm-prediction/parameters, typescript/llm-prediction/speculative-decoding, typescript/llm-prediction/cancelling-predictions, typescript/manage-models/loading, typescript/manage-models/list-loaded, typescript/manage-models/list-downloaded, typescript/model-info/get-model-info, typescript/model-info/get-context-length, typescript/api-reference/llm-load-model-config, typescript/api-reference/llm-prediction-config-input | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 69 | <button class="border inline-* items-* justify-* whitespace-* text-* font-* ring-* focus-* focus-* focus-* focus-* disabled:pointer-* disabled:opacity-* transition-* duration-* ease-* text-* bg-* border-* border-* border-* border-* rounded-* h-* px-* py-* !bg-* dark:!bg-*"> | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 62 each | <a class="rounded p-* px-* flex text-* text-* hover:text-* hover:border-* hover:bg-* opacity-* hover:opacity-* transition-* duration-* ease-*" id="…" *** > for 29 sidebar-link ids (one row per id): developer, developer/api-changelog, developer/anthropic-compat, developer/anthropic-compat/messages, developer/core/authentication, developer/core/headless, developer/core/headless_llmster, developer/core/lmlink, developer/core/mcp, developer/core/ttl-and-auto-evict, developer/openai-compat, developer/openai-compat/chat-completions, developer/openai-compat/completions, developer/openai-compat/embeddings, developer/openai-compat/models, developer/openai-compat/responses, developer/openai-compat/structured-output, developer/openai-compat/tools, developer/rest, developer/rest/chat, developer/rest/download, developer/rest/download-status, developer/rest/endpoints, developer/rest/list, developer/rest/load, developer/rest/quickstart, developer/rest/stateful-chats, developer/rest/streaming-events, developer/rest/unload | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 50 each | <a class="rounded p-* px-* flex text-* text-* hover:text-* hover:border-* hover:bg-* opacity-* hover:opacity-* transition-* duration-* ease-*" id="…" *** > for 19 sidebar-link ids (one row per id): cli, cli/contributing, cli/runtime/runtime, cli/daemon/daemon-up, cli/daemon/daemon-status, cli/develop-and-publish/login, cli/develop-and-publish/push, cli/develop-and-publish/dev, cli/link/link-disable, cli/link/link-set-device-name, cli/link/link-set-preferred-device, cli/local-models/ls, cli/local-models/ps, cli/local-models/get, cli/local-models/import, cli/serve/server-start, cli/serve/server-stop, cli/serve/server-status, cli/serve/log-stream | URL 1, URL 2, URL 3, URL 4, URL 5
Row output truncated at the 100-row limit (200 rows found in total).
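
The two critical rows are unlabeled form controls; the warning rows are mostly icon-only links and buttons whose Tailwind-style classes carry no accessible name. A sketch of the conventional fixes, assuming the automatic JSX runtime; every element name and label here is illustrative:

    // Illustrative fixes for the flagged patterns (names and labels are made up).
    export function Examples() {
      return (
        <>
          {/* critical rows: form controls without a label need an accessible name */}
          <select aria-label="Color theme">
            <option>Light</option>
            <option>Dark</option>
          </select>

          {/* warning rows: icon-only links/buttons have no text to announce */}
          <button aria-label="Copy code">
            <svg aria-hidden="true" viewBox="0 0 16 16" />
          </button>
        </>
      );
    }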

Missing roles

Severity | Occurs | Detail | Affected URLs (max 5)
warning | 156 | <nav class="flex w-* flex-* items-* justify-* px-* fixed z-* text-* bg-* flex-* pt-*" id="fixed-header-***" *** > | URL 1, URL 2, URL 3, URL 4, URL 5
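
Since <nav> already carries an implicit navigation role, this warning is about the explicit attribute the crawler checks for; adding it is harmless, and an aria-label names the landmark when a page has several navs. Illustrative markup only, not the site's component:

    // The flagged fixed-header nav, with the explicit attributes the check wants.
    export function FixedHeader() {
      return (
        <nav role="navigation" aria-label="Primary">
          {/* ...header links... */}
        </nav>
      );
    }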

Missing html lang attribute

No problems found.

Security

Found 10 row(s).
Header | OK | Notice | Warning | Critical | Recommendation
Strict-Transport-Security | 0 | 0 | 174 | 0 | Strict-Transport-Security header is set to max-age=259200, which is less than 31 days. This can be a security risk.
Feature-Policy | 0 | 0 | 174 | 0 | Feature-Policy header is not set. It allows enabling/disabling browser APIs and features for security. Not important if Permissions-Policy is set.
Permissions-Policy | 0 | 0 | 174 | 0 | Permissions-Policy header is not set. It allows enabling/disabling browser APIs and features for security.
X-Powered-By | 0 | 0 | 174 | 0 | X-Powered-By header is set to 'Next.js'. It is better not to reveal used technologies.
Server | 0 | 174 | 0 | 0 | Server header is set to 'cloudflare'. It is better not to reveal used technologies.
X-Frame-Options | 174 | 0 | 0 | 0 |
X-XSS-Protection | 174 | 0 | 0 | 0 |
X-Content-Type-Options | 174 | 0 | 0 | 0 |
Referrer-Policy | 174 | 0 | 0 | 0 |
Content-Security-Policy | 174 | 0 | 0 | 0 |

Security headers

Severity | Occurs | Detail | Affected URLs (max 5)
warning | 174 | Feature-Policy header is not set. It allows enabling/disabling browser APIs and features for security. Not important if Permissions-Policy is set. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 174 | Permissions-Policy header is not set. It allows enabling/disabling browser APIs and features for security. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 174 | Strict-Transport-Security header is set to max-age=259200, which is less than 31 days. This can be a security risk. | URL 1, URL 2, URL 3, URL 4, URL 5
warning | 174 | X-Powered-By header is set to 'Next.js'. It is better not to reveal used technologies. | URL 1, URL 2, URL 3, URL 4, URL 5
notice | 174 | Server header is set to 'cloudflare'. It is better not to reveal used technologies. | URL 1, URL 2, URL 3, URL 4, URL 5
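
All four warnings are addressable at the application layer if the site is Next.js, as X-Powered-By suggests. For scale: the flagged max-age=259200 s is 3 days, the 31-day threshold is 2678400 s, and a full year (31536000 s) is the common recommendation. A sketch with placeholder policy values, not a vetted configuration for this site:

    // next.config.ts -- hypothetical header hardening for the warnings above.
    import type { NextConfig } from 'next';

    const nextConfig: NextConfig = {
      poweredByHeader: false, // drop the X-Powered-By: Next.js header
      async headers() {
        return [
          {
            source: '/:path*',
            headers: [
              // one year instead of the flagged 3-day max-age (259200 s)
              { key: 'Strict-Transport-Security', value: 'max-age=31536000; includeSubDomains' },
              // covers the Permissions-Policy warning (and the legacy Feature-Policy one)
              { key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
            ],
          },
        ];
      },
    };

    export default nextConfig;

The Server: cloudflare notice is added by the CDN and is not controllable from the application.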

TOP non-unique titles

Found 10 row(s).
All titles below share the ' | LM Studio Docs' suffix, omitted here.
Count 🔽 | Title
3 | Chat Completions
3 | Speculative Decoding
3 | Introduction
3 | Authentication
2 | List Loaded Models
2 | Cancelling Predictions
2 | Configuring the Model
2 | Manage Models in Memory
2 | Handling Aborts
2 | Project Setup

TOP non-unique descriptions

Found 10 row(s).
Count 🔽 | Description
13 | (empty)
3 | Using API Tokens in LM Studio
2 | Query which models are currently loaded
2 | APIs for representing a chat conversation with an LLM
2 | APIs to load, access, and unload models from memory
2 | Generate text embeddings from input text
2 | Tokenize text using a model's tokenizer
2 | APIs to list the available models in a given local environment
2 | API for passing images as input to the model
2 | Provide a string input for the model to complete
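
Most duplicates are the same topic documented once per SDK (TypeScript and Python), and the 13 empty descriptions are the larger gap. If the docs run on the Next.js App Router, per-page metadata along these lines disambiguates them; the route and wording below are hypothetical:

    // app/docs/typescript/authentication/page.tsx -- metadata portion only,
    // with an invented route and wording (the page component is omitted).
    import type { Metadata } from 'next';

    export const metadata: Metadata = {
      // disambiguates from the Python page sharing the bare "Authentication" title
      title: 'Authentication (TypeScript SDK) | LM Studio Docs',
      description: 'Use API tokens with the LM Studio TypeScript SDK.',
    };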

SEO metadata

Found 156 row(s).
Common to every row: Indexing is Allowed; each page's H1 matches its title (shown below without the shared ' | LM Studio Docs' suffix); and every row carries the same keywords: local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx.
URL 🔼 | Title / H1 | Description
/docs/app | Welcome to LM Studio Docs! | Learn how to run Llama, DeepSeek, Qwen, Phi, and other LLMs locally with LM Studio.
/docs/app/advanced/import-model | Import Models | Use model files you've downloaded outside of LM Studio
/docs/app/advanced/parallel-requests | Parallel Requests | Enable parallel requests via continuous batching
/docs/app/advanced/per-model | Per-model Defaults | You can set default settings for each model in LM Studio
/docs/app/advanced/prompt-template | Prompt Template | Optionally set or modify the model's prompt template
/docs/app/advanced/speculative-decoding | Speculative Decoding | Speed up generation with a draft model
/docs/app/basics | Get started with LM Studio | Download and run Large Language Models like Qwen, Mistral, Gemma, or gpt-oss in LM Studio.
/docs/app/basics/chat | Manage chats | Manage conversation threads with LLMs
/docs/app/basics/download-model | Download an LLM | Discover and download supported LLMs in LM Studio
/docs/app/basics/rag | Chat with Documents | How to provide local documents to an LLM as additional context
/docs/app/mcp | Use MCP Servers | Connect MCP servers to LM Studio
/docs/app/mcp/deeplink | Add to LM Studio Button | Add MCP servers to LM Studio using a deeplink
/docs/app/modelyaml | Introduction to model.yaml | Describe models with the cross-platform model.yaml specification.
/docs/app/modelyaml/publish | Publish a model.yaml | Upload your model definition to the LM Studio Hub.
/docs/app/offline | Offline Operation | LM Studio can operate entirely offline, just make sure to get some model files first.
/docs/app/presets | Config Presets | Save your system prompts and other parameters as Presets for easy reuse across chats.
/docs/app/presets/import | Importing and Sharing | You can import preset files directly from disk, or pull presets made by others via URL.
/docs/app/presets/publish | Publish Your Presets | Publish your Presets to the LM Studio Hub. Share your Presets with the community or with your colleagues.
/docs/app/presets/pull | Pull Updates | How to pull the latest revisions of your Presets, or presets you have imported from others.
/docs/app/presets/push | Push New Revisions | Publish new revisions of your Presets to the LM Studio Hub.
/docs/app/system-requirements | System Requirements | Supported CPU, GPU types for LM Studio on Mac (M1/M2/M3/M4), Windows (x64/ARM), and Linux (x64/ARM64)
/docs/app/user-interface/languages | LM Studio in your language | LM Studio is available in English, Chinese, Spanish, French, German, Korean, Russian, and 26+ more languages.
/docs/app/user-interface/modes | User or Developer | Show more advanced settings and developer features
/docs/app/user-interface/themes | Color Themes | Customize LM Studio's color theme
/docs/cli | lms — LM Studio's CLI | Get starting with the lms command line utility.
/docs/cli/contributing | Contributing | Learn where to file issues and how to contribute to the lms CLI.
/docs/cli/daemon/daemon-down | lms daemon down | Stop llmster from the CLI.
/docs/cli/daemon/daemon-status | lms daemon status | Check whether llmster is running.
/docs/cli/daemon/daemon-up | lms daemon up | Start llmster from the CLI.
/docs/cli/daemon/daemon-update | lms daemon update | Update llmster to the latest version.
/docs/cli/develop-and-publish/clone | lms clone | Clone an artifact from LM Studio Hub to a local folder (beta).
/docs/cli/develop-and-publish/dev | lms dev (Beta) | Start a plugin dev server or install a local plugin (beta).
/docs/cli/develop-and-publish/login | lms login | Authenticate with LM Studio Hub (beta).
/docs/cli/develop-and-publish/push | lms push (Beta) | Upload the current folder's artifact to LM Studio Hub (beta).
/docs/cli/link/link-disable | lms link disable | Disable LM Link on this device from the CLI.
/docs/cli/link/link-enable | lms link enable | Enable LM Link on this device from the CLI.
/docs/cli/link/link-set-device-name | lms link set-device-name | Rename this device on LM Link from the CLI.
/docs/cli/link/link-set-preferred-device | lms link set-preferred-device | Set the preferred device for model resolution on LM Link.
/docs/cli/link/link-status | lms link status | Check LM Link connection status and see connected peers.
/docs/cli/local-models/chat | lms chat | Start a chat session with a local model from the command line.
/docs/cli/local-models/get | lms get | Search and download models from the command line.
/docs/cli/local-models/import | lms import | Import a local model file into your LM Studio models directory.
/docs/cli/local-models/load | lms load | Load or unload models, set context length, GPU offload, TTL, or estimate memory usage without loading.
/docs/cli/local-models/ls | lms ls | List all downloaded models in your LM Studio installation.
/docs/cli/local-models/ps | lms ps | Show information about currently loaded models from the command line.
/docs/cli/runtime/runtime | lms runtime | Manage LM Studio inference runtimes from the CLI.
/docs/cli/serve/log-stream | lms log stream | Stream logs from LM Studio. Useful for debugging prompts sent to the model.
/docs/cli/serve/server-start | lms server start | Start the LM Studio local server with customizable port and logging options.
/docs/cli/serve/server-status | lms server status | Check the status of your running LM Studio server instance.
/docs/cli/serve/server-stop | lms server stop | Stop the running LM Studio server instance.
/docs/developer | LM Studio Developer Docs | Build with LM Studio's local APIs and SDKs — TypeScript, Python, REST, and OpenAI and Anthropic-compatible endpoints.
/docs/developer/anthropic-compat | Anthropic Compatibility Endpoints | Send Messages requests using the Anthropic-compatible API.
/docs/developer/anthropic-compat/messages | Messages | Send a Messages request and get the assistant's response.
/docs/developer/api-changelog | API Changelog | Updates and changes to the LM Studio API.
/docs/developer/core/authentication | Authentication | Using API Tokens in LM Studio
/docs/developer/core/headless | Run LM Studio as a service (headless) | GUI-less operation of LM Studio: run in the background, start on machine login, and load models on demand
/docs/developer/core/headless_llmster | Setup llmster as a Startup Task on Linux | Configure llmster to run on startup using systemctl on Linux
/docs/developer/core/lmlink | Using LM Link | Use a remote device's model via the REST API with LM Link
/docs/developer/core/mcp | Using MCP via API | Learn how to use Model Context Protocol (MCP) servers with LM Studio API.
/docs/developer/core/server | LM Studio as a Local LLM API Server | Run an LLM API server on localhost with LM Studio
/docs/developer/core/server/serve-on-network | Serve on Local Network | Allow other devices in your network use this LM Studio API server
/docs/developer/core/server/settings | Server Settings | Configure server settings for LM Studio API Server
/docs/developer/core/ttl-and-auto-evict | Idle TTL and Auto-Evict | Optionally auto-unload idle models after a certain amount of time (TTL)
/docs/developer/openai-compat | OpenAI Compatibility Endpoints | Send requests to Responses, Chat Completions (text and images), Completions, and Embeddings endpoints.
/docs/developer/openai-compat/chat-completions | Chat Completions | Send a chat history and get the assistant's response.
/docs/developer/openai-compat/completions | Completions (Legacy) | Text completion for base models (legacy OpenAI endpoint).
/docs/developer/openai-compat/embeddings | Embeddings | Generate embedding vectors from input text.
/docs/developer/openai-compat/models | List Models | List available models via the OpenAI-compatible endpoint.
/docs/developer/openai-compat/responses | Responses | Create responses with support for streaming, reasoning, prior response state, and optional Remote MCP tools.
/docs/developer/openai-compat/structured-output | Structured Output | Enforce LLM response formats using JSON schemas.
/docs/developer/openai-compat/tools | Tool Use | Enable LLMs to interact with external functions and APIs.
/docs/developer/rest | LM Studio API | LM Studio's REST API for local inference and model management
/docs/developer/rest/chat | Chat with a model | Send a message to a model and receive a response. Supports MCP integration.
/docs/developer/rest/download | Download a model | Download LLMs and embedding models
/docs/developer/rest/download-status | Get download status | Get the status of model downloads
/docs/developer/rest/endpoints | REST API v0 | The REST API includes enhanced stats such as Token / Second and Time To First Token (TTFT), as well as rich information about models such as loaded vs unloaded, max context, quantization, and more.
/docs/developer/rest/list | List your models | Get a list of available models on your system, including both LLMs and embedding models.
/docs/developer/rest/load | Load a model | Load an LLM or Embedding model into memory with custom configuration for inference
/docs/developer/rest/quickstart | Get up and running with the LM Studio API | Download a model and start a simple Chat session using the REST API
/docs/developer/rest/stateful-chatsAllowedStateful Chats | LM Studio DocsStateful ChatsLearn how to maintain conversation context across multiple requestslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/developer/rest/streaming-eventsAllowedStreaming events | LM Studio DocsStreaming eventsWhen you chat with a model with stream set to true, the response is sent as a stream of events using Server-Sent Events (SSE).local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/developer/rest/unloadAllowedUnload a model | LM Studio DocsUnload a modelUnload a loaded model from memorylocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/integrationsAllowedIntegrations | LM Studio DocsIntegrationsUse LM Studio with external tools and apps.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/integrations/claude-codeAllowedClaude Code | LM Studio DocsClaude CodeUse Claude Code with LM Studiolocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/integrations/codexAllowedCodex | LM Studio DocsCodexUse Codex with LM Studiolocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/integrations/lmlinkAllowedUsing LM Link with Integrations | LM Studio DocsUsing LM Link with IntegrationsUse a remote device's model with coding tools like Claude Code and Codex via LM Linklocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/lmlinkAllowedLM Link | LM Studio DocsLM LinkUse LM Link to access your local models wherever you are, over a secure and end-to-end encrypted connection.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/lmlink/basicsAllowedSetup a link | LM Studio DocsSetup a linkProvision your first linklocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/lmlink/basics/add-deviceAllowedAdd a Device | LM Studio DocsAdd a DeviceConnect a new device to your LM Link.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/lmlink/basics/faqAllowedFrequently Asked Questions | LM Studio DocsFrequently Asked QuestionsAnswers to common questions about LM Link.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/lmlink/basics/preferred-deviceAllowedSet a preferred device | LM Studio DocsSet a preferred deviceChoose a preferred device to load models via LM Linklocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/pythonAllowedlmstudio-python (Python SDK) | LM Studio Docslmstudio-python (Python SDK)Getting started with LM Studio's Python SDKlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/agent/actAllowedThe .act() call | LM Studio DocsThe .act() callHow to use the .act() call to turn LLMs into autonomous agents that can perform tasks on your local machine.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/agent/toolsAllowedTool Definition | LM Studio DocsTool DefinitionDefine tools to be called by the LLM, and pass them to the model in the act() call.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/embeddingAllowedEmbedding | LM Studio DocsEmbeddingGenerate text embeddings from input textlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/getting-started/authenticationAllowedAuthentication | LM Studio DocsAuthenticationUsing API Tokens in LM Studiolocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/getting-started/project-setupAllowedProject Setup | LM Studio DocsProject SetupSet up your lmstudio-python app or script.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/getting-started/replAllowedUsing lmstudio-python in REPL | LM Studio DocsUsing lmstudio-python in REPLYou can use lmstudio-python in REPL (Read-Eval-Print Loop) to interact with LLMs, manage models, and more.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/llm-prediction/cancelling-predictionsAllowedCancelling Predictions | LM Studio DocsCancelling PredictionsStop an ongoing prediction in lmstudio-pythonlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/llm-prediction/chat-completionAllowedChat Completions | LM Studio DocsChat CompletionsAPIs for a multi-turn chat conversations with an LLMlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/llm-prediction/completionAllowedText Completions | LM Studio DocsText CompletionsProvide a string input for the model to completelocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/llm-prediction/image-inputAllowedImage Input | LM Studio DocsImage InputAPI for passing images as input to the modellocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/llm-prediction/parametersAllowedConfiguring the Model | LM Studio DocsConfiguring the ModelAPIs for setting inference-time and load-time parameters for your modellocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/llm-prediction/speculative-decodingAllowedSpeculative Decoding | LM Studio DocsSpeculative DecodingAPI to use a draft model in speculative decoding in lmstudio-pythonlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/llm-prediction/structured-responseAllowedStructured Response | LM Studio DocsStructured ResponseEnforce a structured response from the model using Pydantic models or JSON Schemalocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/llm-prediction/working-with-chatsAllowedWorking with Chats | LM Studio DocsWorking with ChatsAPIs for representing a chat conversation with an LLMlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/manage-models/list-downloadedAllowedList Downloaded Models | LM Studio DocsList Downloaded ModelsAPIs to list the available models in a given local environmentlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/manage-models/list-loadedAllowedList Loaded Models | LM Studio DocsList Loaded ModelsQuery which models are currently loadedlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/manage-models/loadingAllowedManage Models in Memory | LM Studio DocsManage Models in MemoryAPIs to load, access, and unload models from memorylocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/model-info/get-context-lengthAllowedGet Context Length | LM Studio DocsGet Context LengthAPI to get the maximum context length of a model.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/model-info/get-load-configAllowedGet Load Config | LM Studio DocsGet Load ConfigGet the load configuration of the modellocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/model-info/get-model-infoAllowedGet Model Info | LM Studio DocsGet Model InfoGet information about the modellocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/python/tokenizationAllowedTokenization | LM Studio DocsTokenizationTokenize text using a model's tokenizerlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescriptAllowedlmstudio-js (TypeScript SDK) | LM Studio Docslmstudio-js (TypeScript SDK)Getting started with LM Studio's Typescript / JavaScript SDKlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/agent/actAllowedThe .act() call | LM Studio DocsThe .act() callHow to use the .act() call to turn LLMs into autonomous agents that can perform tasks on your local machine.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/agent/toolsAllowedTool Definition | LM Studio DocsTool DefinitionDefine tools with the tool() function and pass them to the model in the act() call.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/api-reference/llm-load-model-configAllowedLLMLoadModelConfig | LM Studio DocsLLMLoadModelConfigAPI Reference for LLMLoadModelConfiglocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/api-reference/llm-prediction-config-inputAllowedLLMPredictionConfigInput | LM Studio DocsLLMPredictionConfigInputlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/authenticationAllowedAuthentication | LM Studio DocsAuthenticationUsing API Tokens in LM Studiolocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/embeddingAllowedEmbedding | LM Studio DocsEmbeddingGenerate text embeddings from input textlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/llm-prediction/cancelling-predictionsAllowedCancelling Predictions | LM Studio DocsCancelling PredictionsStop an ongoing prediction in lmstudio-jslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/llm-prediction/chat-completionAllowedChat Completions | LM Studio DocsChat CompletionsAPIs for a multi-turn chat conversations with an LLMlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/llm-prediction/completionAllowedGenerate Completions | LM Studio DocsGenerate CompletionsProvide a string input for the model to completelocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/llm-prediction/image-inputAllowedImage Input | LM Studio DocsImage InputAPI for passing images as input to the modellocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/llm-prediction/parametersAllowedConfiguring the Model | LM Studio DocsConfiguring the ModelAPIs for setting inference-time and load-time parameters for your modellocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/llm-prediction/speculative-decodingAllowedSpeculative Decoding | LM Studio DocsSpeculative DecodingAPI to use a draft model in speculative decoding in lmstudio-jslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/llm-prediction/structured-responseAllowedStructured Response | LM Studio DocsStructured ResponseEnforce a structured response from the model using Pydantic (Python), Zod (TypeScript), or JSON Schemalocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/llm-prediction/working-with-chatsAllowedWorking with Chats | LM Studio DocsWorking with ChatsAPIs for representing a chat conversation with an LLMlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/manage-models/list-downloadedAllowedList Local Models | LM Studio DocsList Local ModelsAPIs to list the available models in a given local environmentlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/manage-models/list-loadedAllowedList Loaded Models | LM Studio DocsList Loaded ModelsQuery which models are currently loadedlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/manage-models/loadingAllowedManage Models in Memory | LM Studio DocsManage Models in MemoryAPIs to load, access, and unload models from memorylocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/model-info/get-context-lengthAllowedGet Context Length | LM Studio DocsGet Context LengthAPI to get the maximum context length of a model.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/model-info/get-model-infoAllowedGet Model Info | LM Studio DocsGet Model InfoGet information about the modellocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/pluginsAllowedIntroduction to Plugins | LM Studio DocsIntroduction to PluginsA brief introduction to making plugins for LM Studio using TypeScript.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/custom-configurationAllowedIntroduction | LM Studio DocsIntroductionAdd custom configurations to LM Studio plugins using TypeScriptlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/custom-configuration/accessing-configAllowedAccessing Configuration | LM Studio DocsAccessing Configurationlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/custom-configuration/config-tsAllowedconfig.ts File | LM Studio Docsconfig.ts Filelocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/custom-configuration/defining-new-fieldsAllowedDefining New Fields | LM Studio DocsDefining New Fieldslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/dependenciesAllowedUsing npm Dependencies | LM Studio DocsUsing npm DependenciesHow to use npm packages in LM Studio pluginslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/generatorAllowedIntroduction | LM Studio DocsIntroductionWriting generators for LM Studio plugins using TypeScriptlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/generator/text-only-generatorsAllowedText-only Generators | LM Studio DocsText-only Generatorslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/generator/tool-calling-generatorsAllowedTool calling generators | LM Studio DocsTool calling generatorslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/prompt-preprocessorAllowedIntroduction | LM Studio DocsIntroductionWriting prompt preprocessors for LM Studio plugins using TypeScriptlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/prompt-preprocessor/custom-configurationAllowedCustom Configuration | LM Studio DocsCustom Configurationlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/prompt-preprocessor/custom-status-reportAllowedCustom Status Report | LM Studio DocsCustom Status Reportlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/prompt-preprocessor/examplesAllowedExamples | LM Studio DocsExampleslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/prompt-preprocessor/handling-abortsAllowedHandling Aborts | LM Studio DocsHandling Abortslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/publish-pluginsAllowedSharing Plugins | LM Studio DocsSharing PluginsHow to publish your LM Studio plugins so they can be used by otherslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/tools-providerAllowedIntroduction to Tools Provider | LM Studio DocsIntroduction to Tools ProviderWriting tools providers for LM Studio plugins using TypeScriptlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/tools-provider/custom-configurationAllowedCustom Configuration | LM Studio DocsCustom ConfigurationAdd custom configuration options to your tools providerlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/tools-provider/handling-abortsAllowedHandling Aborts | LM Studio DocsHandling AbortsGracefully handle user-aborted tool executions in your tools providerlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/tools-provider/multiple-toolsAllowedMultiple Tools | LM Studio DocsMultiple Toolslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/tools-provider/single-toolAllowedSingle Tool | LM Studio DocsSingle Toollocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/plugins/tools-provider/status-reports-and-warningsAllowedStatus Reports & Warnings | LM Studio DocsStatus Reports & Warningslocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/project-setupAllowedProject Setup | LM Studio DocsProject SetupSet up your lmstudio-js app or script.local ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
/docs/typescript/tokenizationAllowedTokenization | LM Studio DocsTokenizationTokenize text using a model's tokenizerlocal ai,local llm,gpt-oss,on-device ai,run local ai,LM Studio,Llama,Gemma,Qwen,DeepSeek,llama.cpp,mlx
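
The Title, Description, and Keywords columns above are read straight from each page's `<title>` and `<meta>` tags. As an illustrative sketch (not the crawler's actual implementation; the example URL and the `audit_page` helper are assumptions), the same fields can be collected with only the Python standard library:

```python
# Hypothetical sketch of how the Title / Description / Keywords columns
# could be collected. Stdlib only; not the crawler's real code.
from html.parser import HTMLParser
from urllib.request import urlopen


class MetaTagParser(HTMLParser):
    """Collects <title> text and <meta name="..."> content attributes."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and "name" in attrs and "content" in attrs:
            self.meta[attrs["name"].lower()] = attrs["content"]

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


def audit_page(url: str) -> dict:
    # Fetch the page and pull out the fields shown in the table above.
    with urlopen(url) as resp:
        parser = MetaTagParser()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    return {
        "url": url,
        "title": parser.title.strip(),
        "description": parser.meta.get("description", ""),
        "keywords": parser.meta.get("keywords", ""),
    }


if __name__ == "__main__":
    # Example URL chosen for illustration only.
    print(audit_page("https://lmstudio.ai/docs/developer/rest"))
```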
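The "Allowed" value in the second column presumably reflects a robots.txt check (the report confirms robots.txt was fetched for this domain). A minimal sketch of that check, assuming a wildcard user agent:

```python
# Hedged sketch of the robots.txt "Allowed" check, using the stdlib.
# The "*" user agent is an assumption; the crawler's UA is not known.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://lmstudio.ai/robots.txt")
rp.read()

# True means the path would be reported as "Allowed".
print(rp.can_fetch("*", "https://lmstudio.ai/docs/developer/rest"))
```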

OpenGraph metadata

Found 156 row(s).
URL | OG Title | OG Description | OG Image | Twitter Title | Twitter Description | Twitter Image
/docs/appWelcome to LM Studio Docs! | LM Studio DocsLearn how to run Llama, DeepSeek, Qwen, Phi, and other LLMs locally with LM Studio./api/og?title=Welcome%20to%20LM%20Studio%20Docs!&from=docs/app&desc…%20locally%20with%20LM%20Studio.Welcome to LM Studio Docs! | LM Studio DocsLearn how to run Llama, DeepSeek, Qwen, Phi, and other LLMs locally with LM Studio./api/og?title=Welcome%20to%20LM%20Studio%20Docs!&description=Learn%…ith%20LM%20Studio.&from=docs/app
/docs/app/advanced/import-modelImport Models | LM Studio DocsUse model files you've downloaded outside of LM Studio/api/og?title=Import%20Models&from=docs/app/advanced/import-model&d…ded%20outside%20of%20LM%20StudioImport Models | LM Studio DocsUse model files you've downloaded outside of LM Studio/api/og?title=Import%20Models&description=Use%20model%20files%20you…m=docs/app/advanced/import-model
/docs/app/advanced/parallel-requestsParallel Requests | LM Studio DocsEnable parallel requests via continuous batching/api/og?title=Parallel%20Requests&from=docs/app/advanced/parallel-r…ts%20via%20continuous%20batchingParallel Requests | LM Studio DocsEnable parallel requests via continuous batching/api/og?title=Parallel%20Requests&description=Enable%20parallel%20r…s/app/advanced/parallel-requests
/docs/app/advanced/per-modelPer-model Defaults | LM Studio DocsYou can set default settings for each model in LM Studio/api/og?title=Per-model%20Defaults&from=docs/app/advanced/per-model…0each%20model%20in%20LM%20StudioPer-model Defaults | LM Studio DocsYou can set default settings for each model in LM Studio/api/og?title=Per-model%20Defaults&description=You%20can%20set%20de…from=docs/app/advanced/per-model
/docs/app/advanced/prompt-templatePrompt Template | LM Studio DocsOptionally set or modify the model's prompt template/api/og?title=Prompt%20Template&from=docs/app/advanced/prompt-templ…%20model%27s%20prompt%20templatePrompt Template | LM Studio DocsOptionally set or modify the model's prompt template/api/og?title=Prompt%20Template&description=Optionally%20set%20or%2…ocs/app/advanced/prompt-template
/docs/app/advanced/speculative-decodingSpeculative Decoding | LM Studio DocsSpeed up generation with a draft model/api/og?title=Speculative%20Decoding&from=docs/app/advanced/specula…ation%20with%20a%20draft%20modelSpeculative Decoding | LM Studio DocsSpeed up generation with a draft model/api/og?title=Speculative%20Decoding&description=Speed%20up%20gener…pp/advanced/speculative-decoding
/docs/app/basicsGet started with LM Studio | LM Studio DocsDownload and run Large Language Models like Qwen, Mistral, Gemma, or gpt-oss in LM Studio./api/og?title=Get%20started%20with%20LM%20Studio&from=docs/app/basi…or%20gpt-oss%20in%20LM%20Studio.Get started with LM Studio | LM Studio DocsDownload and run Large Language Models like Qwen, Mistral, Gemma, or gpt-oss in LM Studio./api/og?title=Get%20started%20with%20LM%20Studio&description=Downlo…M%20Studio.&from=docs/app/basics
/docs/app/basics/chatManage chats | LM Studio DocsManage conversation threads with LLMs/api/og?title=Manage%20chats&from=docs/app/basics/chat&description=…ersation%20threads%20with%20LLMsManage chats | LM Studio DocsManage conversation threads with LLMs/api/og?title=Manage%20chats&description=Manage%20conversation%20th…20LLMs&from=docs/app/basics/chat
/docs/app/basics/download-modelDownload an LLM | LM Studio DocsDiscover and download supported LLMs in LM Studio/api/og?title=Download%20an%20LLM&from=docs/app/basics/download-mod…ported%20LLMs%20in%20LM%20StudioDownload an LLM | LM Studio DocsDiscover and download supported LLMs in LM Studio/api/og?title=Download%20an%20LLM&description=Discover%20and%20down…m=docs/app/basics/download-model
/docs/app/basics/ragChat with Documents | LM Studio DocsHow to provide local documents to an LLM as additional context/api/og?title=Chat%20with%20Documents&from=docs/app/basics/rag&desc…0LLM%20as%20additional%20contextChat with Documents | LM Studio DocsHow to provide local documents to an LLM as additional context/api/og?title=Chat%20with%20Documents&description=How%20to%20provid…context&from=docs/app/basics/rag
/docs/app/mcpUse MCP Servers | LM Studio DocsConnect MCP servers to LM Studio/api/og?title=Use%20MCP%20Servers&from=docs/app/mcp&description=Con…MCP%20servers%20to%20LM%20StudioUse MCP Servers | LM Studio DocsConnect MCP servers to LM Studio/api/og?title=Use%20MCP%20Servers&description=Connect%20MCP%20serve…%20LM%20Studio&from=docs/app/mcp
/docs/app/mcp/deeplinkAdd to LM Studio Button | LM Studio DocsAdd MCP servers to LM Studio using a deeplink/api/og?title=Add%20to%20LM%20Studio%20Button&from=docs/app/mcp/dee…%20Studio%20using%20a%20deeplinkAdd to LM Studio Button | LM Studio DocsAdd MCP servers to LM Studio using a deeplink/api/og?title=Add%20to%20LM%20Studio%20Button&description=Add%20MCP…plink&from=docs/app/mcp/deeplink
/docs/app/modelyamlIntroduction to model.yaml | LM Studio DocsDescribe models with the cross-platform model.yaml specification.https://files.lmstudio.ai/modelyaml-card.jpgIntroduction to model.yaml | LM Studio DocsDescribe models with the cross-platform model.yaml specification.https://files.lmstudio.ai/modelyaml-card.jpg
/docs/app/modelyaml/publishPublish a model.yaml | LM Studio DocsUpload your model definition to the LM Studio Hub./api/og?title=Publish%20a%20model.yaml&from=docs/app/modelyaml/publ…%20to%20the%20LM%20Studio%20Hub.Publish a model.yaml | LM Studio DocsUpload your model definition to the LM Studio Hub./api/og?title=Publish%20a%20model.yaml&description=Upload%20your%20…&from=docs/app/modelyaml/publish
/docs/app/offlineOffline Operation | LM Studio DocsLM Studio can operate entirely offline, just make sure to get some model files first./api/og?title=Offline%20Operation&from=docs/app/offline&description…%20some%20model%20files%20first.Offline Operation | LM Studio DocsLM Studio can operate entirely offline, just make sure to get some model files first./api/og?title=Offline%20Operation&description=LM%20Studio%20can%20o…s%20first.&from=docs/app/offline
/docs/app/presetsConfig Presets | LM Studio DocsSave your system prompts and other parameters as Presets for easy reuse across chats./api/og?title=Config%20Presets&from=docs/app/presets&description=Sa…20easy%20reuse%20across%20chats.Config Presets | LM Studio DocsSave your system prompts and other parameters as Presets for easy reuse across chats./api/og?title=Config%20Presets&description=Save%20your%20system%20p…s%20chats.&from=docs/app/presets
/docs/app/presets/importImporting and Sharing | LM Studio DocsYou can import preset files directly from disk, or pull presets made by others via URL./api/og?title=Importing%20and%20Sharing&from=docs/app/presets/impor…0made%20by%20others%20via%20URL.Importing and Sharing | LM Studio DocsYou can import preset files directly from disk, or pull presets made by others via URL./api/og?title=Importing%20and%20Sharing&description=You%20can%20imp…RL.&from=docs/app/presets/import
/docs/app/presets/publishPublish Your Presets | LM Studio DocsPublish your Presets to the LM Studio Hub. Share your Presets with the community or with your colleagues./api/og?title=Publish%20Your%20Presets&from=docs/app/presets/publis…20or%20with%20your%20colleagues.Publish Your Presets | LM Studio DocsPublish your Presets to the LM Studio Hub. Share your Presets with the community or with your colleagues./api/og?title=Publish%20Your%20Presets&description=Publish%20your%2…s.&from=docs/app/presets/publish
/docs/app/presets/pullPull Updates | LM Studio DocsHow to pull the latest revisions of your Presets, or presets you have imported from others./api/og?title=Pull%20Updates&from=docs/app/presets/pull&description…have%20imported%20from%20others.Pull Updates | LM Studio DocsHow to pull the latest revisions of your Presets, or presets you have imported from others./api/og?title=Pull%20Updates&description=How%20to%20pull%20the%20la…hers.&from=docs/app/presets/pull
/docs/app/presets/pushPush New Revisions | LM Studio DocsPublish new revisions of your Presets to the LM Studio Hub./api/og?title=Push%20New%20Revisions&from=docs/app/presets/push&des…%20to%20the%20LM%20Studio%20Hub.Push New Revisions | LM Studio DocsPublish new revisions of your Presets to the LM Studio Hub./api/og?title=Push%20New%20Revisions&description=Publish%20new%20re…0Hub.&from=docs/app/presets/push
/docs/app/system-requirementsSystem Requirements | LM Studio DocsSupported CPU, GPU types for LM Studio on Mac (M1/M2/M3/M4), Windows (x64/ARM), and Linux (x64/ARM64)/api/og?title=System%20Requirements&from=docs/app/system-requiremen…RM),%20and%20Linux%20(x64/ARM64)System Requirements | LM Studio DocsSupported CPU, GPU types for LM Studio on Mac (M1/M2/M3/M4), Windows (x64/ARM), and Linux (x64/ARM64)/api/og?title=System%20Requirements&description=Supported%20CPU,%20…rom=docs/app/system-requirements
/docs/app/user-interface/languagesLM Studio in your language | LM Studio DocsLM Studio is available in English, Chinese, Spanish, French, German, Korean, Russian, and 26+ more languages./api/og?title=LM%20Studio%20in%20your%20language&from=docs/app/user…%20and%2026+%20more%20languages.LM Studio in your language | LM Studio DocsLM Studio is available in English, Chinese, Spanish, French, German, Korean, Russian, and 26+ more languages./api/og?title=LM%20Studio%20in%20your%20language&description=LM%20S…ocs/app/user-interface/languages
/docs/app/user-interface/modesUser or Developer | LM Studio DocsShow more advanced settings and developer features/api/og?title=User%20or%20Developer&from=docs/app/user-interface/mo…ngs%20and%20developer%20featuresUser or Developer | LM Studio DocsShow more advanced settings and developer features/api/og?title=User%20or%20Developer&description=Show%20more%20advan…om=docs/app/user-interface/modes
/docs/app/user-interface/themesColor Themes | LM Studio DocsCustomize LM Studio's color theme/api/og?title=Color%20Themes&from=docs/app/user-interface/themes&de…0LM%20Studio%27s%20color%20themeColor Themes | LM Studio DocsCustomize LM Studio's color theme/api/og?title=Color%20Themes&description=Customize%20LM%20Studio%27…m=docs/app/user-interface/themes
/docs/clilms — LM Studio's CLI | LM Studio DocsGet starting with the lms command line utility./api/og?title=lms%20%E2%80%94%20LM%20Studio%27s%20CLI&from=docs/cli…0lms%20command%20line%20utility.lms — LM Studio's CLI | LM Studio DocsGet starting with the lms command line utility./api/og?title=lms%20%E2%80%94%20LM%20Studio%27s%20CLI&description=G…%20line%20utility.&from=docs/cli
/docs/cli/contributingContributing | LM Studio DocsLearn where to file issues and how to contribute to the lms CLI./api/og?title=Contributing&from=docs/cli/contributing&description=L…ntribute%20to%20the%20lms%20CLI.Contributing | LM Studio DocsLearn where to file issues and how to contribute to the lms CLI./api/og?title=Contributing&description=Learn%20where%20to%20file%20…0CLI.&from=docs/cli/contributing
/docs/cli/daemon/daemon-downlms daemon down | LM Studio DocsStop llmster from the CLI./api/og?title=lms%20daemon%20down&from=docs/cli/daemon/daemon-down&…op%20llmster%20from%20the%20CLI.lms daemon down | LM Studio DocsStop llmster from the CLI./api/og?title=lms%20daemon%20down&description=Stop%20llmster%20from…from=docs/cli/daemon/daemon-down
/docs/cli/daemon/daemon-statuslms daemon status | LM Studio DocsCheck whether llmster is running./api/og?title=lms%20daemon%20status&from=docs/cli/daemon/daemon-sta…hether%20llmster%20is%20running.lms daemon status | LM Studio DocsCheck whether llmster is running./api/og?title=lms%20daemon%20status&description=Check%20whether%20l…om=docs/cli/daemon/daemon-status
/docs/cli/daemon/daemon-uplms daemon up | LM Studio DocsStart llmster from the CLI./api/og?title=lms%20daemon%20up&from=docs/cli/daemon/daemon-up&desc…rt%20llmster%20from%20the%20CLI.lms daemon up | LM Studio DocsStart llmster from the CLI./api/og?title=lms%20daemon%20up&description=Start%20llmster%20from%….&from=docs/cli/daemon/daemon-up
/docs/cli/daemon/daemon-updatelms daemon update | LM Studio DocsUpdate llmster to the latest version./api/og?title=lms%20daemon%20update&from=docs/cli/daemon/daemon-upd…r%20to%20the%20latest%20version.lms daemon update | LM Studio DocsUpdate llmster to the latest version./api/og?title=lms%20daemon%20update&description=Update%20llmster%20…om=docs/cli/daemon/daemon-update
/docs/cli/develop-and-publish/clonelms clone | LM Studio DocsClone an artifact from LM Studio Hub to a local folder (beta)./api/og?title=lms%20clone&from=docs/cli/develop-and-publish/clone&d…o%20a%20local%20folder%20(beta).lms clone | LM Studio DocsClone an artifact from LM Studio Hub to a local folder (beta)./api/og?title=lms%20clone&description=Clone%20an%20artifact%20from%…cs/cli/develop-and-publish/clone
/docs/cli/develop-and-publish/devlms dev (Beta) | LM Studio DocsStart a plugin dev server or install a local plugin (beta)./api/og?title=lms%20dev%20(Beta)&from=docs/cli/develop-and-publish/…l%20a%20local%20plugin%20(beta).lms dev (Beta) | LM Studio DocsStart a plugin dev server or install a local plugin (beta)./api/og?title=lms%20dev%20(Beta)&description=Start%20a%20plugin%20d…docs/cli/develop-and-publish/dev
/docs/cli/develop-and-publish/loginlms login | LM Studio DocsAuthenticate with LM Studio Hub (beta)./api/og?title=lms%20login&from=docs/cli/develop-and-publish/login&d…th%20LM%20Studio%20Hub%20(beta).lms login | LM Studio DocsAuthenticate with LM Studio Hub (beta)./api/og?title=lms%20login&description=Authenticate%20with%20LM%20St…cs/cli/develop-and-publish/login
/docs/cli/develop-and-publish/pushlms push (Beta) | LM Studio DocsUpload the current folder's artifact to LM Studio Hub (beta)./api/og?title=lms%20push%20(Beta)&from=docs/cli/develop-and-publish…to%20LM%20Studio%20Hub%20(beta).lms push (Beta) | LM Studio DocsUpload the current folder's artifact to LM Studio Hub (beta)./api/og?title=lms%20push%20(Beta)&description=Upload%20the%20curren…ocs/cli/develop-and-publish/push
/docs/cli/link/link-disablelms link disable | LM Studio DocsDisable LM Link on this device from the CLI./api/og?title=lms%20link%20disable&from=docs/cli/link/link-disable&…his%20device%20from%20the%20CLI.lms link disable | LM Studio DocsDisable LM Link on this device from the CLI./api/og?title=lms%20link%20disable&description=Disable%20LM%20Link%…&from=docs/cli/link/link-disable
/docs/cli/link/link-enablelms link enable | LM Studio DocsEnable LM Link on this device from the CLI./api/og?title=lms%20link%20enable&from=docs/cli/link/link-enable&de…his%20device%20from%20the%20CLI.lms link enable | LM Studio DocsEnable LM Link on this device from the CLI./api/og?title=lms%20link%20enable&description=Enable%20LM%20Link%20….&from=docs/cli/link/link-enable
/docs/cli/link/link-set-device-namelms link set-device-name | LM Studio DocsRename this device on LM Link from the CLI./api/og?title=lms%20link%20set-device-name&from=docs/cli/link/link-…%20LM%20Link%20from%20the%20CLI.lms link set-device-name | LM Studio DocsRename this device on LM Link from the CLI./api/og?title=lms%20link%20set-device-name&description=Rename%20thi…cs/cli/link/link-set-device-name
/docs/cli/link/link-set-preferred-devicelms link set-preferred-device | LM Studio DocsSet the preferred device for model resolution on LM Link./api/og?title=lms%20link%20set-preferred-device&from=docs/cli/link/…l%20resolution%20on%20LM%20Link.lms link set-preferred-device | LM Studio DocsSet the preferred device for model resolution on LM Link./api/og?title=lms%20link%20set-preferred-device&description=Set%20t…i/link/link-set-preferred-device
/docs/cli/link/link-statuslms link status | LM Studio DocsCheck LM Link connection status and see connected peers./api/og?title=lms%20link%20status&from=docs/cli/link/link-status&de…20and%20see%20connected%20peers.lms link status | LM Studio DocsCheck LM Link connection status and see connected peers./api/og?title=lms%20link%20status&description=Check%20LM%20Link%20c….&from=docs/cli/link/link-status
/docs/cli/local-models/chatlms chat | LM Studio DocsStart a chat session with a local model from the command line./api/og?title=lms%20chat&from=docs/cli/local-models/chat&descriptio…l%20from%20the%20command%20line.lms chat | LM Studio DocsStart a chat session with a local model from the command line./api/og?title=lms%20chat&description=Start%20a%20chat%20session%20w…&from=docs/cli/local-models/chat
/docs/cli/local-models/getlms get | LM Studio DocsSearch and download models from the command line./api/og?title=lms%20get&from=docs/cli/local-models/get&description=…s%20from%20the%20command%20line.lms get | LM Studio DocsSearch and download models from the command line./api/og?title=lms%20get&description=Search%20and%20download%20model….&from=docs/cli/local-models/get
/docs/cli/local-models/importlms import | LM Studio DocsImport a local model file into your LM Studio models directory./api/og?title=lms%20import&from=docs/cli/local-models/import&descri…M%20Studio%20models%20directory.lms import | LM Studio DocsImport a local model file into your LM Studio models directory./api/og?title=lms%20import&description=Import%20a%20local%20model%2…rom=docs/cli/local-models/import
/docs/cli/local-models/loadlms load | LM Studio DocsLoad or unload models, set context length, GPU offload, TTL, or estimate memory usage without loading./api/og?title=lms%20load&from=docs/cli/local-models/load&descriptio…ory%20usage%20without%20loading.lms load | LM Studio DocsLoad or unload models, set context length, GPU offload, TTL, or estimate memory usage without loading./api/og?title=lms%20load&description=Load%20or%20unload%20models,%2…&from=docs/cli/local-models/load
/docs/cli/local-models/lslms ls | LM Studio DocsList all downloaded models in your LM Studio installation./api/og?title=lms%20ls&from=docs/cli/local-models/ls&description=Li…ur%20LM%20Studio%20installation.lms ls | LM Studio DocsList all downloaded models in your LM Studio installation./api/og?title=lms%20ls&description=List%20all%20downloaded%20models…n.&from=docs/cli/local-models/ls
/docs/cli/local-models/pslms ps | LM Studio DocsShow information about currently loaded models from the command line./api/og?title=lms%20ps&from=docs/cli/local-models/ps&description=Sh…s%20from%20the%20command%20line.lms ps | LM Studio DocsShow information about currently loaded models from the command line./api/og?title=lms%20ps&description=Show%20information%20about%20cur…e.&from=docs/cli/local-models/ps
/docs/cli/runtime/runtimelms runtime | LM Studio DocsManage LM Studio inference runtimes from the CLI./api/og?title=lms%20runtime&from=docs/cli/runtime/runtime&descripti…e%20runtimes%20from%20the%20CLI.lms runtime | LM Studio DocsManage LM Studio inference runtimes from the CLI./api/og?title=lms%20runtime&description=Manage%20LM%20Studio%20infe…I.&from=docs/cli/runtime/runtime
/docs/cli/serve/log-streamlms log stream | LM Studio DocsStream logs from LM Studio. Useful for debugging prompts sent to the model./api/og?title=lms%20log%20stream&from=docs/cli/serve/log-stream&des…ompts%20sent%20to%20the%20model.lms log stream | LM Studio DocsStream logs from LM Studio. Useful for debugging prompts sent to the model./api/og?title=lms%20log%20stream&description=Stream%20logs%20from%2….&from=docs/cli/serve/log-stream
/docs/cli/serve/server-startlms server start | LM Studio DocsStart the LM Studio local server with customizable port and logging options./api/og?title=lms%20server%20start&from=docs/cli/serve/server-start…0port%20and%20logging%20options.lms server start | LM Studio DocsStart the LM Studio local server with customizable port and logging options./api/og?title=lms%20server%20start&description=Start%20the%20LM%20S…from=docs/cli/serve/server-start
/docs/cli/serve/server-statuslms server status | LM Studio DocsCheck the status of your running LM Studio server instance./api/og?title=lms%20server%20status&from=docs/cli/serve/server-stat…LM%20Studio%20server%20instance.lms server status | LM Studio DocsCheck the status of your running LM Studio server instance./api/og?title=lms%20server%20status&description=Check%20the%20statu…rom=docs/cli/serve/server-status
/docs/cli/serve/server-stoplms server stop | LM Studio DocsStop the running LM Studio server instance./api/og?title=lms%20server%20stop&from=docs/cli/serve/server-stop&d…LM%20Studio%20server%20instance.lms server stop | LM Studio DocsStop the running LM Studio server instance./api/og?title=lms%20server%20stop&description=Stop%20the%20running%…&from=docs/cli/serve/server-stop
/docs/developerLM Studio Developer Docs | LM Studio DocsBuild with LM Studio's local APIs and SDKs — TypeScript, Python, REST, and OpenAI and Anthropic-compatible endpoints./api/og?title=LM%20Studio%20Developer%20Docs&from=docs/developer&de…nthropic-compatible%20endpoints.LM Studio Developer Docs | LM Studio DocsBuild with LM Studio's local APIs and SDKs — TypeScript, Python, REST, and OpenAI and Anthropic-compatible endpoints./api/og?title=LM%20Studio%20Developer%20Docs&description=Build%20wi…20endpoints.&from=docs/developer
/docs/developer/anthropic-compatAnthropic Compatibility Endpoints | LM Studio DocsSend Messages requests using the Anthropic-compatible API./api/og?title=Anthropic%20Compatibility%20Endpoints&from=docs/devel…he%20Anthropic-compatible%20API.Anthropic Compatibility Endpoints | LM Studio DocsSend Messages requests using the Anthropic-compatible API./api/og?title=Anthropic%20Compatibility%20Endpoints&description=Sen…=docs/developer/anthropic-compat
/docs/developer/anthropic-compat/messagesMessages | LM Studio DocsSend a Messages request and get the assistant's response./api/og?title=Messages&from=docs/developer/anthropic-compat/message…0the%20assistant%27s%20response.Messages | LM Studio DocsSend a Messages request and get the assistant's response./api/og?title=Messages&description=Send%20a%20Messages%20request%20…eloper/anthropic-compat/messages
/docs/developer/api-changelogAPI Changelog | LM Studio DocsUpdates and changes to the LM Studio API./api/og?title=API%20Changelog&from=docs/developer/api-changelog&des…%20to%20the%20LM%20Studio%20API.API Changelog | LM Studio DocsUpdates and changes to the LM Studio API./api/og?title=API%20Changelog&description=Updates%20and%20changes%2…rom=docs/developer/api-changelog
/docs/developer/core/authenticationAuthentication | LM Studio DocsUsing API Tokens in LM Studio/api/og?title=Authentication&from=docs/developer/core/authenticatio…0API%20Tokens%20in%20LM%20StudioAuthentication | LM Studio DocsUsing API Tokens in LM Studio/api/og?title=Authentication&description=Using%20API%20Tokens%20in%…cs/developer/core/authentication
/docs/developer/core/headlessRun LM Studio as a service (headless) | LM Studio DocsGUI-less operation of LM Studio: run in the background, start on machine login, and load models on demand/api/og?title=Run%20LM%20Studio%20as%20a%20service%20(headless)&fro…nd%20load%20models%20on%20demandRun LM Studio as a service (headless) | LM Studio DocsGUI-less operation of LM Studio: run in the background, start on machine login, and load models on demand/api/og?title=Run%20LM%20Studio%20as%20a%20service%20(headless)&des…rom=docs/developer/core/headless
/docs/developer/core/headless_llmsterSetup llmster as a Startup Task on Linux | LM Studio DocsConfigure llmster to run on startup using systemctl on Linux/api/og?title=Setup%20llmster%20as%20a%20Startup%20Task%20on%20Linu…20using%20systemctl%20on%20LinuxSetup llmster as a Startup Task on Linux | LM Studio DocsConfigure llmster to run on startup using systemctl on Linux/api/og?title=Setup%20llmster%20as%20a%20Startup%20Task%20on%20Linu…/developer/core/headless_llmster
/docs/developer/core/lmlinkUsing LM Link | LM Studio DocsUse a remote device's model via the REST API with LM Link/api/og?title=Using%20LM%20Link&from=docs/developer/core/lmlink&des…%20REST%20API%20with%20LM%20LinkUsing LM Link | LM Studio DocsUse a remote device's model via the REST API with LM Link/api/og?title=Using%20LM%20Link&description=Use%20a%20remote%20devi…&from=docs/developer/core/lmlink
/docs/developer/core/mcpUsing MCP via API | LM Studio DocsLearn how to use Model Context Protocol (MCP) servers with LM Studio API./api/og?title=Using%20MCP%20via%20API&from=docs/developer/core/mcp&…vers%20with%20LM%20Studio%20API.Using MCP via API | LM Studio DocsLearn how to use Model Context Protocol (MCP) servers with LM Studio API./api/og?title=Using%20MCP%20via%20API&description=Learn%20how%20to%…PI.&from=docs/developer/core/mcp
/docs/developer/core/serverLM Studio as a Local LLM API Server | LM Studio DocsRun an LLM API server on localhost with LM Studio/api/og?title=LM%20Studio%20as%20a%20Local%20LLM%20API%20Server&fro…20localhost%20with%20LM%20StudioLM Studio as a Local LLM API Server | LM Studio DocsRun an LLM API server on localhost with LM Studio/api/og?title=LM%20Studio%20as%20a%20Local%20LLM%20API%20Server&des…&from=docs/developer/core/server
/docs/developer/core/server/serve-on-networkServe on Local Network | LM Studio DocsAllow other devices in your network use this LM Studio API server/api/og?title=Serve%20on%20Local%20Network&from=docs/developer/core…his%20LM%20Studio%20API%20serverServe on Local Network | LM Studio DocsAllow other devices in your network use this LM Studio API server/api/og?title=Serve%20on%20Local%20Network&description=Allow%20othe…per/core/server/serve-on-network
/docs/developer/core/server/settingsServer Settings | LM Studio DocsConfigure server settings for LM Studio API Server/api/og?title=Server%20Settings&from=docs/developer/core/server/set…for%20LM%20Studio%20API%20ServerServer Settings | LM Studio DocsConfigure server settings for LM Studio API Server/api/og?title=Server%20Settings&description=Configure%20server%20se…s/developer/core/server/settings
/docs/developer/core/ttl-and-auto-evictIdle TTL and Auto-Evict | LM Studio DocsOptionally auto-unload idle models after a certain amount of time (TTL)/api/og?title=Idle%20TTL%20and%20Auto-Evict&from=docs/developer/cor…ain%20amount%20of%20time%20(TTL)Idle TTL and Auto-Evict | LM Studio DocsOptionally auto-unload idle models after a certain amount of time (TTL)/api/og?title=Idle%20TTL%20and%20Auto-Evict&description=Optionally%…eveloper/core/ttl-and-auto-evict
/docs/developer/openai-compatOpenAI Compatibility Endpoints | LM Studio DocsSend requests to Responses, Chat Completions (text and images), Completions, and Embeddings endpoints./api/og?title=OpenAI%20Compatibility%20Endpoints&from=docs/develope…%20and%20Embeddings%20endpoints.OpenAI Compatibility Endpoints | LM Studio DocsSend requests to Responses, Chat Completions (text and images), Completions, and Embeddings endpoints./api/og?title=OpenAI%20Compatibility%20Endpoints&description=Send%2…rom=docs/developer/openai-compat
/docs/developer/openai-compat/chat-completionsChat Completions | LM Studio DocsSend a chat history and get the assistant's response./api/og?title=Chat%20Completions&from=docs/developer/openai-compat/…0the%20assistant%27s%20response.Chat Completions | LM Studio DocsSend a chat history and get the assistant's response./api/og?title=Chat%20Completions&description=Send%20a%20chat%20hist…r/openai-compat/chat-completions
/docs/developer/openai-compat/completionsCompletions (Legacy) | LM Studio DocsText completion for base models (legacy OpenAI endpoint)./api/og?title=Completions%20(Legacy)&from=docs/developer/openai-com…%20(legacy%20OpenAI%20endpoint).Completions (Legacy) | LM Studio DocsText completion for base models (legacy OpenAI endpoint)./api/og?title=Completions%20(Legacy)&description=Text%20completion%…eloper/openai-compat/completions
/docs/developer/openai-compat/embeddingsEmbeddings | LM Studio DocsGenerate embedding vectors from input text./api/og?title=Embeddings&from=docs/developer/openai-compat/embeddin…20vectors%20from%20input%20text.Embeddings | LM Studio DocsGenerate embedding vectors from input text./api/og?title=Embeddings&description=Generate%20embedding%20vectors…veloper/openai-compat/embeddings
/docs/developer/openai-compat/modelsList Models | LM Studio DocsList available models via the OpenAI-compatible endpoint./api/og?title=List%20Models&from=docs/developer/openai-compat/model…%20OpenAI-compatible%20endpoint.List Models | LM Studio DocsList available models via the OpenAI-compatible endpoint./api/og?title=List%20Models&description=List%20available%20models%2…s/developer/openai-compat/models
/docs/developer/openai-compat/responsesResponses | LM Studio DocsCreate responses with support for streaming, reasoning, prior response state, and optional Remote MCP tools./api/og?title=Responses&from=docs/developer/openai-compat/responses…optional%20Remote%20MCP%20tools.Responses | LM Studio DocsCreate responses with support for streaming, reasoning, prior response state, and optional Remote MCP tools./api/og?title=Responses&description=Create%20responses%20with%20sup…eveloper/openai-compat/responses
/docs/developer/openai-compat/structured-outputStructured Output | LM Studio DocsEnforce LLM response formats using JSON schemas./api/og?title=Structured%20Output&from=docs/developer/openai-compat…ormats%20using%20JSON%20schemas.Structured Output | LM Studio DocsEnforce LLM response formats using JSON schemas./api/og?title=Structured%20Output&description=Enforce%20LLM%20respo…/openai-compat/structured-output
/docs/developer/openai-compat/toolsTool Use | LM Studio DocsEnable LLMs to interact with external functions and APIs./api/og?title=Tool%20Use&from=docs/developer/openai-compat/tools&de…ternal%20functions%20and%20APIs.Tool Use | LM Studio DocsEnable LLMs to interact with external functions and APIs./api/og?title=Tool%20Use&description=Enable%20LLMs%20to%20interact%…cs/developer/openai-compat/tools
/docs/developer/restLM Studio API | LM Studio DocsLM Studio's REST API for local inference and model management/api/og?title=LM%20Studio%20API&from=docs/developer/rest&descriptio…rence%20and%20model%20managementLM Studio API | LM Studio DocsLM Studio's REST API for local inference and model management/api/og?title=LM%20Studio%20API&description=LM%20Studio%27s%20REST%…agement&from=docs/developer/rest
/docs/developer/rest/chatChat with a model | LM Studio DocsSend a message to a model and receive a response. Supports MCP integration./api/og?title=Chat%20with%20a%20model&from=docs/developer/rest/chat…%20Supports%20MCP%20integration.Chat with a model | LM Studio DocsSend a message to a model and receive a response. Supports MCP integration./api/og?title=Chat%20with%20a%20model&description=Send%20a%20messag…n.&from=docs/developer/rest/chat
/docs/developer/rest/download · Download a model | LM Studio Docs · Download LLMs and embedding models
/docs/developer/rest/download-status · Get download status | LM Studio Docs · Get the status of model downloads
/docs/developer/rest/endpoints · REST API v0 | LM Studio Docs · The REST API includes enhanced stats such as Token / Second and Time To First Token (TTFT), as well as rich information about models such as loaded vs unloaded, max context, quantization, and more.
/docs/developer/rest/list · List your models | LM Studio Docs · Get a list of available models on your system, including both LLMs and embedding models.
/docs/developer/rest/load · Load a model | LM Studio Docs · Load an LLM or Embedding model into memory with custom configuration for inference
/docs/developer/rest/quickstart · Get up and running with the LM Studio API | LM Studio Docs · Download a model and start a simple Chat session using the REST API
/docs/developer/rest/stateful-chats · Stateful Chats | LM Studio Docs · Learn how to maintain conversation context across multiple requests
/docs/developer/rest/streaming-events · Streaming events | LM Studio Docs · When you chat with a model with stream set to true, the response is sent as a stream of events using Server-Sent Events (SSE).
/docs/developer/rest/unload · Unload a model | LM Studio Docs · Unload a loaded model from memory
/docs/integrations · Integrations | LM Studio Docs · Use LM Studio with external tools and apps.
/docs/integrations/claude-code · Claude Code | LM Studio Docs · Use Claude Code with LM Studio
/docs/integrations/codex · Codex | LM Studio Docs · Use Codex with LM Studio
/docs/integrations/lmlink · Using LM Link with Integrations | LM Studio Docs · Use a remote device's model with coding tools like Claude Code and Codex via LM Link
/docs/lmlink · LM Link | LM Studio Docs · Use LM Link to access your local models wherever you are, over a secure and end-to-end encrypted connection.
/docs/lmlink/basics · Setup a link | LM Studio Docs · Provision your first link
/docs/lmlink/basics/add-device · Add a Device | LM Studio Docs · Connect a new device to your LM Link.
/docs/lmlink/basics/faq · Frequently Asked Questions | LM Studio Docs · Answers to common questions about LM Link.
/docs/lmlink/basics/preferred-device · Set a preferred device | LM Studio Docs · Choose a preferred device to load models via LM Link
/docs/python · lmstudio-python (Python SDK) | LM Studio Docs · Getting started with LM Studio's Python SDK
/docs/python/agent/act · The .act() call | LM Studio Docs · How to use the .act() call to turn LLMs into autonomous agents that can perform tasks on your local machine.
/docs/python/agent/tools · Tool Definition | LM Studio Docs · Define tools to be called by the LLM, and pass them to the model in the act() call.
/docs/python/embedding · Embedding | LM Studio Docs · Generate text embeddings from input text
/docs/python/getting-started/authentication · Authentication | LM Studio Docs · Using API Tokens in LM Studio
/docs/python/getting-started/project-setup · Project Setup | LM Studio Docs · Set up your lmstudio-python app or script.
/docs/python/getting-started/repl · Using lmstudio-python in REPL | LM Studio Docs · You can use lmstudio-python in REPL (Read-Eval-Print Loop) to interact with LLMs, manage models, and more.
/docs/python/llm-prediction/cancelling-predictions · Cancelling Predictions | LM Studio Docs · Stop an ongoing prediction in lmstudio-python
/docs/python/llm-prediction/chat-completion · Chat Completions | LM Studio Docs · APIs for a multi-turn chat conversations with an LLM
/docs/python/llm-prediction/completion · Text Completions | LM Studio Docs · Provide a string input for the model to complete
/docs/python/llm-prediction/image-input · Image Input | LM Studio Docs · API for passing images as input to the model
/docs/python/llm-prediction/parameters · Configuring the Model | LM Studio Docs · APIs for setting inference-time and load-time parameters for your model
/docs/python/llm-prediction/speculative-decoding · Speculative Decoding | LM Studio Docs · API to use a draft model in speculative decoding in lmstudio-python
/docs/python/llm-prediction/structured-response · Structured Response | LM Studio Docs · Enforce a structured response from the model using Pydantic models or JSON Schema
/docs/python/llm-prediction/working-with-chats · Working with Chats | LM Studio Docs · APIs for representing a chat conversation with an LLM
/docs/python/manage-models/list-downloaded · List Downloaded Models | LM Studio Docs · APIs to list the available models in a given local environment
/docs/python/manage-models/list-loaded · List Loaded Models | LM Studio Docs · Query which models are currently loaded
/docs/python/manage-models/loading · Manage Models in Memory | LM Studio Docs · APIs to load, access, and unload models from memory
/docs/python/model-info/get-context-length · Get Context Length | LM Studio Docs · API to get the maximum context length of a model.
/docs/python/model-info/get-load-config · Get Load Config | LM Studio Docs · Get the load configuration of the model
/docs/python/model-info/get-model-info · Get Model Info | LM Studio Docs · Get information about the model
/docs/python/tokenization · Tokenization | LM Studio Docs · Tokenize text using a model's tokenizer
/docs/typescript · lmstudio-js (TypeScript SDK) | LM Studio Docs · Getting started with LM Studio's Typescript / JavaScript SDK
/docs/typescript/agent/act · The .act() call | LM Studio Docs · How to use the .act() call to turn LLMs into autonomous agents that can perform tasks on your local machine.
/docs/typescript/agent/tools · Tool Definition | LM Studio Docs · Define tools with the tool() function and pass them to the model in the act() call.
/docs/typescript/api-reference/llm-load-model-config · LLMLoadModelConfig | LM Studio Docs · API Reference for LLMLoadModelConfig
/docs/typescript/api-reference/llm-prediction-config-input · LLMPredictionConfigInput | LM Studio Docs · (no description)
/docs/typescript/authentication · Authentication | LM Studio Docs · Using API Tokens in LM Studio
/docs/typescript/embedding · Embedding | LM Studio Docs · Generate text embeddings from input text
/docs/typescript/llm-prediction/cancelling-predictions · Cancelling Predictions | LM Studio Docs · Stop an ongoing prediction in lmstudio-js
/docs/typescript/llm-prediction/chat-completion · Chat Completions | LM Studio Docs · APIs for a multi-turn chat conversations with an LLM
/docs/typescript/llm-prediction/completion · Generate Completions | LM Studio Docs · Provide a string input for the model to complete
/docs/typescript/llm-prediction/image-input · Image Input | LM Studio Docs · API for passing images as input to the model
/docs/typescript/llm-prediction/parameters · Configuring the Model | LM Studio Docs · APIs for setting inference-time and load-time parameters for your model
/docs/typescript/llm-prediction/speculative-decoding · Speculative Decoding | LM Studio Docs · API to use a draft model in speculative decoding in lmstudio-js
/docs/typescript/llm-prediction/structured-response · Structured Response | LM Studio Docs · Enforce a structured response from the model using Pydantic (Python), Zod (TypeScript), or JSON Schema
/docs/typescript/llm-prediction/working-with-chats · Working with Chats | LM Studio Docs · APIs for representing a chat conversation with an LLM
/docs/typescript/manage-models/list-downloaded · List Local Models | LM Studio Docs · APIs to list the available models in a given local environment
/docs/typescript/manage-models/list-loaded · List Loaded Models | LM Studio Docs · Query which models are currently loaded
/docs/typescript/manage-models/loading · Manage Models in Memory | LM Studio Docs · APIs to load, access, and unload models from memory
/docs/typescript/model-info/get-context-length · Get Context Length | LM Studio Docs · API to get the maximum context length of a model.
/docs/typescript/model-info/get-model-info · Get Model Info | LM Studio Docs · Get information about the model
/docs/typescript/plugins · Introduction to Plugins | LM Studio Docs · A brief introduction to making plugins for LM Studio using TypeScript.
/docs/typescript/plugins/custom-configuration · Introduction | LM Studio Docs · Add custom configurations to LM Studio plugins using TypeScript
/docs/typescript/plugins/custom-configuration/accessing-config · Accessing Configuration | LM Studio Docs · (no description)
/docs/typescript/plugins/custom-configuration/config-ts · config.ts File | LM Studio Docs · (no description)
/docs/typescript/plugins/custom-configuration/defining-new-fields · Defining New Fields | LM Studio Docs · (no description)
/docs/typescript/plugins/dependencies · Using npm Dependencies | LM Studio Docs · How to use npm packages in LM Studio plugins
/docs/typescript/plugins/generator · Introduction | LM Studio Docs · Writing generators for LM Studio plugins using TypeScript
/docs/typescript/plugins/generator/text-only-generators · Text-only Generators | LM Studio Docs · (no description)
/docs/typescript/plugins/generator/tool-calling-generators · Tool calling generators | LM Studio Docs · (no description)
/docs/typescript/plugins/prompt-preprocessor · Introduction | LM Studio Docs · Writing prompt preprocessors for LM Studio plugins using TypeScript
/docs/typescript/plugins/prompt-preprocessor/custom-configuration · Custom Configuration | LM Studio Docs · (no description)
/docs/typescript/plugins/prompt-preprocessor/custom-status-report · Custom Status Report | LM Studio Docs · (no description)
/docs/typescript/plugins/prompt-preprocessor/examples · Examples | LM Studio Docs · (no description)
/docs/typescript/plugins/prompt-preprocessor/handling-aborts · Handling Aborts | LM Studio Docs · (no description)
/docs/typescript/plugins/publish-plugins · Sharing Plugins | LM Studio Docs · How to publish your LM Studio plugins so they can be used by others
/docs/typescript/plugins/tools-provider · Introduction to Tools Provider | LM Studio Docs · Writing tools providers for LM Studio plugins using TypeScript
/docs/typescript/plugins/tools-provider/custom-configuration · Custom Configuration | LM Studio Docs · Add custom configuration options to your tools provider
/docs/typescript/plugins/tools-provider/handling-aborts · Handling Aborts | LM Studio Docs · Gracefully handle user-aborted tool executions in your tools provider
/docs/typescript/plugins/tools-provider/multiple-tools · Multiple Tools | LM Studio Docs · (no description)
/docs/typescript/plugins/tools-provider/single-tool · Single Tool | LM Studio Docs · (no description)
/docs/typescript/plugins/tools-provider/status-reports-and-warnings · Status Reports & Warnings | LM Studio Docs · (no description)
/docs/typescript/project-setup · Project Setup | LM Studio Docs · Set up your lmstudio-js app or script.
/docs/typescript/tokenization · Tokenization | LM Studio Docs · Tokenize text using a model's tokenizer

(Each row above also reported the page's generated social-image URL of the form /api/og?title=…&description=…&from=…, with the title and description repeated verbatim; those truncated URLs are omitted here.)
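One pattern worth noting in the rows above: each page's social-preview image is generated on the fly by the /api/og endpoint, which receives the page's title, description, and path as URL-encoded query parameters. Below is a minimal sketch of how such a URL could be rebuilt from the crawled metadata; the endpoint and parameter names (title, description, from) come from the crawl output, and treating them as a stable, order-insensitive API is an assumption.

```python
# Sketch: rebuild a page's /api/og social-image URL from its crawled metadata.
# The endpoint and its query parameters are taken from the crawl output above;
# that they form a stable public API is an assumption.
from urllib.parse import quote

def og_image_url(path: str, title: str, description: str) -> str:
    # quote() with safe="/()" keeps slashes and parentheses readable and
    # encodes spaces as %20, matching the URLs seen in the crawl.
    q = lambda s: quote(s, safe="/()")
    return (
        "https://lmstudio.ai/api/og"
        f"?title={q(title)}&description={q(description)}&from={q(path.lstrip('/'))}"
    )

print(og_image_url("/docs/developer/rest/unload",
                   "Unload a model",
                   "Unload a loaded model from memory"))
# https://lmstudio.ai/api/og?title=Unload%20a%20model&description=Unload%20a%20loaded%20model%20from%20memory&from=docs/developer/rest/unload
```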

Heading structure

Found 156 row(s).
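In the table below, Count is the total number of headings the crawler found on a page, and Errors appears to tally headings that skip a level (for example an <h3> directly under an <h1>) plus every <h1> on pages with more than one <h1>, matching the multiple-<h1> and skipped-heading-level findings in the summary. A minimal sketch of how the two columns could be reproduced for a single page, assuming that inferred rule and the requests and beautifulsoup4 packages:

```python
# Sketch: reproduce the Count and Errors columns for one page.
# The error rule is inferred from the rows below, not documented by the
# crawler: a heading is an error if it skips a level below its nearest open
# ancestor, and every <h1> is flagged on pages with more than one <h1>.
import requests
from bs4 import BeautifulSoup

def heading_report(url: str) -> tuple[int, int]:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    levels = [int(t.name[1]) for t in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    errors, stack = 0, []  # stack holds the levels of currently open ancestors
    for level in levels:
        while stack and stack[-1] >= level:
            stack.pop()
        if level > (stack[-1] if stack else 0) + 1:
            errors += 1  # skipped a level, e.g. <h1> followed directly by <h3>
        stack.append(level)
    if levels.count(1) > 1:
        errors += levels.count(1)  # multiple-<h1> pages flag every <h1>
    return len(levels), errors

# e.g. the report lists Count=9, Errors=5 for /docs/app/basics/chat
print(heading_report("https://lmstudio.ai/docs/app/basics/chat"))
```

This rule reproduces the rows spot-checked here (such as 20 headings / 19 errors for /docs/developer/api-changelog), but it remains an inference rather than the crawler's documented algorithm.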
Heading structure · Count · Errors 🔽 · URL
  • <h1> API Changelog [#api-changelog]
    • <h3> Anthropic-compatible API [#anthropic-compatible-api]
    • <h3> LM Studio native v1 REST API [#lm-studio-native-v1-rest-api]
    • <h3> OpenAI /v1/responses and variant listing [#openai-v1responses-and-variant-listing]
    • <h3> CLI: model resource estimates, status, and interrupts [#cli-model-resource-estimates-status-and-interrupts]
    • <h3> CLI log streaming: server + model [#cli-log-streaming-server--model]
    • <h3> New model support (API) [#new-model-support-api]
    • <h3> Seed‑OSS tool‑calling and template fixes [#seedoss-toolcalling-and-template-fixes]
    • <h3> Reasoning content and tool‑calling reliability [#reasoning-content-and-toolcalling-reliability]
    • <h3> Bug fixes for streaming and tool calls [#bug-fixes-for-streaming-and-tool-calls]
    • <h3> Streaming options and tool‑calling improvements [#streaming-options-and-toolcalling-improvements]
    • <h3> Tool‑calling reliability and token‑count updates [#toolcalling-reliability-and-tokencount-updates]
    • <h3> Model capabilities in GET /models [#model-capabilities-in-get-models]
    • <h3> Improved Tool Use API Support [#improved-tool-use-api-support]
    • <h3> [API/SDK] Preset Support [#apisdk-preset-support]
    • <h3> Speculative Decoding API [#speculative-decoding-api]
    • <h3> Idle TTL and Auto Evict [#idle-ttl-and-auto-evict]
    • <h3> Separate reasoning_content in Chat Completion responses [#separate-reasoningcontent-in-chat-completion-responses]
    • <h3> Tool and Function Calling API [#tool-and-function-calling-api]
    • <h3> Introducing lms get: download models from the terminal [#introducing-lms-get-download-models-from-the-terminal]
Count: 20 · Errors: 19 · /docs/developer/api-changelog
  • <h1> Streaming events [#streaming-events]
    • <h3> chat.start [#chatstart]
    • <h3> model_load.start [#modelloadstart]
    • <h3> model_load.progress [#modelloadprogress]
    • <h3> model_load.end [#modelloadend]
    • <h3> prompt_processing.start [#promptprocessingstart]
    • <h3> prompt_processing.progress [#promptprocessingprogress]
    • <h3> prompt_processing.end [#promptprocessingend]
    • <h3> reasoning.start [#reasoningstart]
    • <h3> reasoning.delta [#reasoningdelta]
    • <h3> reasoning.end [#reasoningend]
    • <h3> tool_call.start [#toolcallstart]
    • <h3> tool_call.arguments [#toolcallarguments]
    • <h3> tool_call.success [#toolcallsuccess]
    • <h3> tool_call.failure [#toolcallfailure]
    • <h3> message.start [#messagestart]
    • <h3> message.delta [#messagedelta]
    • <h3> message.end [#messageend]
    • <h3> error [#error]
    • <h3> chat.end [#chatend]
Count: 20 · Errors: 19 · /docs/developer/rest/streaming-events
  • <h1> lms chat [#lms-chat]
    • <h3> Flags [#flags]
    • <h3> Start an interactive chat [#start-an-interactive-chat]
    • <h3> Chat with a specific model [#chat-with-a-specific-model]
    • <h3> Send a single prompt and exit [#send-a-single-prompt-and-exit]
    • <h3> Set a system prompt [#set-a-system-prompt]
    • <h3> Keep the model loaded after chatting [#keep-the-model-loaded-after-chatting]
    • <h3> Pipe input from another command [#pipe-input-from-another-command]
Count: 8 · Errors: 7 · /docs/cli/local-models/chat
  • <h1> Manage chats [#manage-chats]
    • <h3> Create a new chat [#create-a-new-chat]
    • <h3> Create a folder [#create-a-folder]
    • <h3> Drag and drop [#drag-and-drop]
    • <h3> Duplicate chats [#duplicate-chats]
    • <h3> Split view in chat [#split-view-in-chat]
    • <h2> FAQ [#faq]
    • <h2> Conversations folder filesystem path [#conversations-folder-filesystem-path]
      • <h3> Community [#community]
Count: 9 · Errors: 5 · /docs/app/basics/chat
  • <h1> Configuring the Model [#configuring-the-model]
  • <h1> Inference Parameters [#inference-parameters]
  • <h1> Load Parameters [#load-parameters]
    • <h3> Set Load Parameters with .model() [#set-load-parameters-with-model]
    • <h3> Set Load Parameters with .load() [#set-load-parameters-with-load]
Count: 5 · Errors: 5 · /docs/typescript/llm-prediction/parameters
  • <h1> Configuring the Model [#configuring-the-model]
  • <h1> Inference Parameters [#inference-parameters]
  • <h1> Load Parameters [#load-parameters]
    • <h3> Set Load Parameters with .model() [#set-load-parameters-with-model]
    • <h3> Set Load Parameters with .load_new_instance() [#set-load-parameters-with-loadnewinstance]
Count: 5 · Errors: 5 · /docs/python/llm-prediction/parameters
  • <h1> lms import [#lms-import]
    • <h3> Flags [#flags]
    • <h3> Import a model file [#import-a-model-file]
    • <h3> Keep the original file [#keep-the-original-file]
    • <h3> Pick the target folder yourself [#pick-the-target-folder-yourself]
    • <h3> Dry run before importing [#dry-run-before-importing]
Count: 6 · Errors: 5 · /docs/cli/local-models/import
  • <h1> Download an LLM [#download-an-llm]
    • <h3> Searching for models [#searching-for-models]
    • <h3> Which download option to choose? [#which-download-option-to-choose]
    • <h3> Changing the models directory [#changing-the-models-directory]
    • <h3> Community [#community]
Count: 5 · Errors: 4 · /docs/app/basics/download-model
  • <h1> Claude Code [#claude-code]
    • <h3> 1) Start LM Studio's local server [#1-start-lm-studios-local-server]
    • <h3> 2) Configure Claude Code [#2-configure-claude-code]
    • <h3> 3) Run Claude Code against a local model [#3-run-claude-code-against-a-local-model]
    • <h3> 4) If Require Authentication is enabled, use your LM Studio API token [#4-if-require-authentication-is-enabled-use-your-lm-studio-api-token]
Count: 5 · Errors: 4 · /docs/integrations/claude-code
  • <h1> Structured Output [#structured-output]
    • <h3> Start LM Studio as a server [#start-lm-studio-as-a-server]
    • <h3> Structured Output [#structured-output]
    • <h3> Structured output engine [#structured-output-engine]
    • <h3> Community [#community]
Count: 5 · Errors: 4 · /docs/developer/openai-compat/structured-output
  • <h1> lms log stream [#lms-log-stream]
    • <h3> Flags [#flags]
    • <h3> Quick start [#quick-start]
    • <h3> Filter model logs [#filter-model-logs]
    • <h3> JSON output and stats [#json-output-and-stats]
Count: 5 · Errors: 4 · /docs/cli/serve/log-stream
  • <h1> lms runtime [#lms-runtime]
    • <h3> Commands [#commands]
    • <h3> List installed runtimes [#list-installed-runtimes]
    • <h3> Download a runtime [#download-a-runtime]
    • <h3> Switch to a runtime [#switch-to-a-runtime]
Count: 5 · Errors: 4 · /docs/cli/runtime/runtime
  • <h1> Prompt Template [#prompt-template]
    • <h3> Overriding the Prompt Template for a Specific Model [#overriding-the-prompt-template-for-a-specific-model]
    • <h3> Customize the Prompt Template [#customize-the-prompt-template]
    • <h3> Prompt template options [#prompt-template-options]
Count: 4 · Errors: 3 · /docs/app/advanced/prompt-template
  • <h1> Use MCP Servers [#use-mcp-servers]
    • <h3> Be cautious [#be-cautious]
  • <h1> Use MCP servers in LM Studio [#use-mcp-servers-in-lm-studio]
    • <h2> Install new servers: mcp.json [#install-new-servers-mcpjson]
      • <h3> Example MCP to try: Hugging Face MCP Server [#example-mcp-to-try-hugging-face-mcp-server]
    • <h2> Gotchas and Troubleshooting [#gotchas-and-troubleshooting]
Count: 6 · Errors: 3 · /docs/app/mcp
  • <h1> Parallel Requests [#parallel-requests]
    • <h3> Parallel Requests via Continuous Batching [#parallel-requests-via-continuous-batching]
    • <h3> Setting Max Concurrent Predictions [#setting-max-concurrent-predictions]
    • <h3> Sending parallel requests to chats in Split View [#sending-parallel-requests-to-chats-in-split-view]
Count: 4 · Errors: 3 · /docs/app/advanced/parallel-requests
  • <h1> Import Models [#import-models]
    • <h3> Use lms import (experimental) [#use-lms-import-experimental]
    • <h3> LM Studio's expected models directory structure [#lm-studios-expected-models-directory-structure]
    • <h3> Community [#community]
Count: 4 · Errors: 3 · /docs/app/advanced/import-model
  • <h1> Authentication [#authentication]
    • <h3> Require Authentication for each request [#require-authentication-for-each-request]
    • <h3> Creating API Tokens [#creating-api-tokens]
    • <h3> Configuring API Token Permissions [#configuring-api-token-permissions]
    • <h2> API Token Usage [#api-token-usage]
      • <h3> Using API Tokens with REST API: [#using-api-tokens-with-rest-api]
      • <h3> Using API Tokens with Python SDK [#using-api-tokens-with-python-sdk]
      • <h3> Using API Tokens with TypeScript SDK [#using-api-tokens-with-typescript-sdk]
Count: 8 · Errors: 3 · /docs/developer/core/authentication
  • <h1> lms push (Beta) [#lms-push-beta]
    • <h3> Publish the current folder [#publish-the-current-folder]
    • <h3> Flags [#flags]
    • <h3> Advanced [#advanced]
Count: 4 · Errors: 3 · /docs/cli/develop-and-publish/push
  • <h1> lms dev (Beta) [#lms-dev-beta]
    • <h3> Run the dev plugin server [#run-the-dev-plugin-server]
    • <h3> Install the plugin instead of running dev [#install-the-plugin-instead-of-running-dev]
    • <h3> Flags [#flags]
Count: 4 · Errors: 3 · /docs/cli/develop-and-publish/dev
  • <h1> lms login [#lms-login]
    • <h3> Sign in with the browser [#sign-in-with-the-browser]
    • <h3> "CI style" login with pre-authenticated keys [#ci-style-login-with-pre-authenticated-keys]
    • <h3> Advanced Flags [#advanced-flags]
Count: 4 · Errors: 3 · /docs/cli/develop-and-publish/login
  • <h1> lms clone [#lms-clone]
    • <h3> Flags [#flags]
    • <h3> Clone the latest revision [#clone-the-latest-revision]
    • <h3> Clone into a specific directory [#clone-into-a-specific-directory]
Count: 4 · Errors: 3 · /docs/cli/develop-and-publish/clone
  • <h1> Sharing Plugins [#sharing-plugins]
    • <h3> Changing Plugin Names [#changing-plugin-names]
    • <h3> Publishing Plugins to an Organization [#publishing-plugins-to-an-organization]
    • <h3> Private Plugins [#private-plugins]
Count: 4 · Errors: 3 · /docs/typescript/plugins/publish-plugins
  • <h1> User or Developer [#user-or-developer]
    • <h3> Enable Developer Mode [#enable-developer-mode]
    • <h3> Which mode should I choose? [#which-mode-should-i-choose]
Count: 3 · Errors: 2 · /docs/app/user-interface/modes
  • <h1> Add to LM Studio Button [#add-to-lm-studio-button]
  • <h1> Generate your own MCP install link [#generate-your-own-mcp-install-link]
    • <h2> Try an example [#try-an-example]
      • <h3> Deeplink format [#deeplink-format]
Count: 4 · Errors: 2 · /docs/app/mcp/deeplink
  • <h1> lmstudio-python (Python SDK) [#lmstudio-python-python-sdk]
    • <h2> Installing the SDK [#installing-the-sdk]
    • <h2> Features [#features]
    • <h2> Quick Example: Chat with a Llama Model [#quick-example-chat-with-a-llama-model]
      • <h3> Getting Local Models [#getting-local-models]
  • <h1> Interactive Convenience, Deterministic Resource Management, or Structured Concurrency? [#interactive-convenience-deterministic-resource-management-or-structured-concurrency]
    • <h2> Timeouts in the synchronous API [#timeouts-in-the-synchronous-api]
    • <h2> Timeouts in the asynchronous API [#timeouts-in-the-asynchronous-api]
Count: 8 · Errors: 2 · /docs/python
  • <h1> Offline Operation [#offline-operation]
    • <h3> Operations that do NOT require connectivity [#operations-that-do-not-require-connectivity]
    • <h3> Operations that require connectivity [#operations-that-require-connectivity]
Count: 3 · Errors: 2 · /docs/app/offline
  • <h1> Publish a model.yaml [#publish-a-modelyaml]
  • <h1> Quickstart [#quickstart]
    • <h2> Change the publisher to your user [#change-the-publisher-to-your-user]
    • <h2> Sign in [#sign-in]
    • <h2> Publish your model [#publish-your-model]
      • <h3> Override metadata at publish time [#override-metadata-at-publish-time]
    • <h2> Downloading a model and using it in LM Studio [#downloading-a-model-and-using-it-in-lm-studio]
Count: 7 · Errors: 2 · /docs/app/modelyaml/publish
  • <h1> Importing and Sharing [#importing-and-sharing]
  • <h1> Import Presets [#import-presets]
    • <h2> Import Presets from File [#import-presets-from-file]
    • <h2> Import Presets from URL [#import-presets-from-url]
      • <h3> Using lms CLI [#using-lms-cli]
      • <h3> Find your config-presets directory [#find-your-config-presets-directory]
      • <h3> Where Hub shared presets are stored [#where-hub-shared-presets-are-stored]
Count: 7 · Errors: 2 · /docs/app/presets/import
  • <h1> Codex [#codex]
    • <h3> 1) Start LM Studio's local server [#1-start-lm-studios-local-server]
    • <h3> 2) Run Codex against a local model [#2-run-codex-against-a-local-model]
Count: 3 · Errors: 2 · /docs/integrations/codex
  • <h1> LM Studio as a Local LLM API Server [#lm-studio-as-a-local-llm-api-server]
    • <h3> Running the server [#running-the-server]
    • <h3> API options [#api-options]
Count: 3 · Errors: 2 · /docs/developer/core/server
  • <h1> Examples [#examples]
    • <h3> Example: Inject Current Time [#example-inject-current-time]
    • <h3> Example: Replace Trigger Words [#example-replace-trigger-words]
Count: 3 · Errors: 2 · /docs/typescript/plugins/prompt-preprocessor/examples
  • <h1> Per-model Defaults [#per-model-defaults]
    • <h3> Setting default parameters for a model [#setting-default-parameters-for-a-model]
    • <h2> Advanced Topics [#advanced-topics]
      • <h3> Changing load settings before loading a model [#changing-load-settings-before-loading-a-model]
      • <h3> Saving your changes as the default settings for a model [#saving-your-changes-as-the-default-settings-for-a-model]
      • <h3> Community [#community]
Count: 6 · Errors: 1 · /docs/app/advanced/per-model
  • <h1> LM Studio in your language [#lm-studio-in-your-language]
    • <h3> Selecting a Language [#selecting-a-language]
Count: 2 · Errors: 1 · /docs/app/user-interface/languages
  • <h1> Color Themes [#color-themes]
    • <h3> Selecting a Theme [#selecting-a-theme]
Count: 2 · Errors: 1 · /docs/app/user-interface/themes
  • <h1> Embedding [#embedding]
    • <h3> Prerequisite: Get an Embedding Model [#prerequisite-get-an-embedding-model]
    • <h2> Create Embeddings [#create-embeddings]
Count: 3 · Errors: 1 · /docs/typescript/embedding
  • <h1> Image Input [#image-input]
    • <h3> Prerequisite: Get a VLM (Vision-Language Model) [#prerequisite-get-a-vlm-vision-language-model]
    • <h2> 1. Instantiate the Model [#1-instantiate-the-model]
    • <h2> 2. Prepare the Image [#2-prepare-the-image]
    • <h2> 3. Pass the Image to the Model in .respond() [#3-pass-the-image-to-the-model-in-respond]
Count: 5 · Errors: 1 · /docs/typescript/llm-prediction/image-input
  • <h1> LLMPredictionConfigInput [#llmpredictionconfiginput]
    • <h3> Fields [#fields]
Count: 2 · Errors: 1 · /docs/typescript/api-reference/llm-prediction-config-input
  • <h1> LLMLoadModelConfig [#llmloadmodelconfig]
    • <h3> Parameters [#parameters]
Count: 2 · Errors: 1 · /docs/typescript/api-reference/llm-load-model-config
  • <h1> Embedding [#embedding]
    • <h3> Prerequisite: Get an Embedding Model [#prerequisite-get-an-embedding-model]
    • <h2> Create Embeddings [#create-embeddings]
Count: 3 · Errors: 1 · /docs/python/embedding
  • <h1> Image Input [#image-input]
    • <h3> Prerequisite: Get a VLM (Vision-Language Model) [#prerequisite-get-a-vlm-vision-language-model]
    • <h2> 1. Instantiate the Model [#1-instantiate-the-model]
    • <h2> 2. Prepare the Image [#2-prepare-the-image]
    • <h2> 3. Pass the Image to the Model in .respond() [#3-pass-the-image-to-the-model-in-respond]
Count: 5 · Errors: 1 · /docs/python/llm-prediction/image-input
  • <h1> Anthropic Compatibility Endpoints [#anthropic-compatibility-endpoints]
    • <h3> Supported endpoints [#supported-endpoints]
    • <h2> Using Claude Code with LM Studio [#using-claude-code-with-lm-studio]
    • <h2> Authentication headers [#authentication-headers]
    • <h2> Set the base URL to point to LM Studio [#set-the-base-url-to-point-to-lm-studio]
      • <h3> cURL example [#curl-example]
      • <h3> Python example [#python-example]
Count: 7 · Errors: 1 · /docs/developer/anthropic-compat
  • <h1> REST API v0 [#rest-api-v0]
    • <h3> Start the REST API server [#start-the-rest-api-server]
    • <h2> Endpoints [#endpoints]
      • <h3> GET /api/v0/models [#get-apiv0models]
      • <h3> GET /api/v0/models/{model} [#get-apiv0modelsmodel]
      • <h3> POST /api/v0/chat/completions [#post-apiv0chatcompletions]
      • <h3> POST /api/v0/completions [#post-apiv0completions]
      • <h3> POST /api/v0/embeddings [#post-apiv0embeddings]
Count: 8 · Errors: 1 · /docs/developer/rest/endpoints
  • <h1> OpenAI Compatibility Endpoints [#openai-compatibility-endpoints]
    • <h3> Supported endpoints [#supported-endpoints]
    • <h2> Set the base url to point to LM Studio [#set-the-base-url-to-point-to-lm-studio]
      • <h3> Python Example [#python-example]
      • <h3> Typescript Example [#typescript-example]
      • <h3> cURL Example [#curl-example]
    • <h2> Using Codex with LM Studio [#using-codex-with-lm-studio]
Count: 7 · Errors: 1 · /docs/developer/openai-compat
  • <h1> Chat Completions [#chat-completions]
    • <h3> Supported payload parameters [#supported-payload-parameters]
Count: 2 · Errors: 1 · /docs/developer/openai-compat/chat-completions
  • <h1> lms link status [#lms-link-status]
    • <h3> Flags [#flags]
    • <h2> Check status [#check-status]
      • <h3> JSON output [#json-output]
      • <h3> Enable or disable LM Link [#enable-or-disable-lm-link]
      • <h3> Learn more [#learn-more]
Count: 6 · Errors: 1 · /docs/cli/link/link-status
  • <h1> lms daemon status [#lms-daemon-status]
    • <h3> Flags [#flags]
    • <h2> Check daemon status [#check-daemon-status]
      • <h3> JSON output [#json-output]
      • <h3> Start or stop the daemon [#start-or-stop-the-daemon]
Count: 5 · Errors: 1 · /docs/cli/daemon/daemon-status
  • <h1> lms daemon up [#lms-daemon-up]
    • <h3> Flags [#flags]
    • <h2> Start the daemon [#start-the-daemon]
      • <h3> JSON output [#json-output]
      • <h3> Check the daemon status [#check-the-daemon-status]
      • <h3> Learn more [#learn-more]
Count: 6 · Errors: 1 · /docs/cli/daemon/daemon-up
  • <h1> lms daemon update [#lms-daemon-update]
    • <h3> Flags [#flags]
    • <h2> Update the daemon [#update-the-daemon]
      • <h3> Update to the beta channel [#update-to-the-beta-channel]
      • <h3> After updating [#after-updating]
Count: 5 · Errors: 1 · /docs/cli/daemon/daemon-update
  • <h1> lms load [#lms-load]
    • <h3> Flags [#flags]
    • <h2> Load a model [#load-a-model]
      • <h3> Set a custom identifier [#set-a-custom-identifier]
      • <h3> Set context length [#set-context-length]
      • <h3> Set GPU offload [#set-gpu-offload]
      • <h3> Set TTL [#set-ttl]
      • <h3> Estimate resources without loading [#estimate-resources-without-loading]
    • <h2> Unload models [#unload-models]
      • <h3> Flags [#flags]
      • <h3> Unload a specific model [#unload-a-specific-model]
      • <h3> Unload all models [#unload-all-models]
      • <h3> Unload from a remote LM Studio instance [#unload-from-a-remote-lm-studio-instance]
    • <h2> Operate on a remote LM Studio instance [#operate-on-a-remote-lm-studio-instance]
Count: 14 · Errors: 1 · /docs/cli/local-models/load
  • <h1> lms server start [#lms-server-start]
    • <h3> Flags [#flags]
    • <h2> Start the server [#start-the-server]
      • <h3> Specify a custom port [#specify-a-custom-port]
      • <h3> Enable CORS support [#enable-cors-support]
      • <h3> Check the server status [#check-the-server-status]
Count: 6 · Errors: 1 · /docs/cli/serve/server-start
  • <h1> lms get [#lms-get]
    • <h3> Flags [#flags]
    • <h2> Download a model [#download-a-model]
      • <h3> Specify quantization [#specify-quantization]
      • <h3> Filter by format [#filter-by-format]
      • <h3> Control search results [#control-search-results]
Count: 6 · Errors: 1 · /docs/cli/local-models/get
  • <h1> lms server status [#lms-server-status]
    • <h3> Flags [#flags]
    • <h2> Check server status [#check-server-status]
      • <h3> Example usage [#example-usage]
      • <h3> JSON output [#json-output]
      • <h3> Control logging output [#control-logging-output]
Count: 6 · Errors: 1 · /docs/cli/serve/server-status
  • <h1> lms ls [#lms-ls]
    • <h3> Flags [#flags]
    • <h2> List all models [#list-all-models]
      • <h3> Filter by model type [#filter-by-model-type]
      • <h3> Additional output formats [#additional-output-formats]
    • <h2> Operate on a remote LM Studio instance [#operate-on-a-remote-lm-studio-instance]
Count: 6 · Errors: 1 · /docs/cli/local-models/ls
  • <h1> lms daemon down [#lms-daemon-down]
    • <h3> Learn more [#learn-more]
Count: 2 · Errors: 1 · /docs/cli/daemon/daemon-down
  • <h1> Introduction [#introduction]
    • <h3> Examples [#examples]
Count: 2 · Errors: 1 · /docs/typescript/plugins/prompt-preprocessor
  • <h1> Server Settings [#server-settings]
    • <h3> Settings information [#settings-information]
Count: 2 · Errors: 1 · /docs/developer/core/server/settings
  • <h1> Welcome to LM Studio Docs! [#welcome-to-lm-studio-docs]
    • <h2> What can I do with LM Studio? [#what-can-i-do-with-lm-studio]
    • <h2> System requirements [#system-requirements]
    • <h2> Run llama.cpp (GGUF) or MLX models [#run-llamacpp-gguf-or-mlx-models]
    • <h2> LM Studio as an MCP client [#lm-studio-as-an-mcp-client]
    • <h2> Run an LLM like gpt-oss, Llama, Qwen, Mistral, or DeepSeek R1 on your computer [#run-an-llm-like-gpt-oss-llama-qwen-mistral-or-deepseek-r1-on-your-computer]
    • <h2> Chat with documents entirely offline on your computer [#chat-with-documents-entirely-offline-on-your-computer]
    • <h2> Run LM Studio without the GUI (llmster) [#run-lm-studio-without-the-gui-llmster]
    • <h2> Use LM Studio's API from your own apps and scripts [#use-lm-studios-api-from-your-own-apps-and-scripts]
    • <h2> Community [#community]
Count: 10 · Errors: 0 · /docs/app
  • <h1> Get started with LM Studio [#get-started-with-lm-studio]
    • <h2> Getting up and running [#getting-up-and-running]
      • <h3> 1. Download an LLM to your computer [#1-download-an-llm-to-your-computer]
      • <h3> 2. Load a model to memory [#2-load-a-model-to-memory]
      • <h3> 3. Chat! [#3-chat]
      • <h3> Community [#community]
Count: 6 · Errors: 0 · /docs/app/basics
  • <h1> Integrations [#integrations]
    • <h2> Available Integrations [#available-integrations]
Count: 2 · Errors: 0 · /docs/integrations
  • <h1> Push New Revisions [#push-new-revisions]
    • <h2> Published Presets [#published-presets]
    • <h2> Step 1: Make Changes and Commit [#step-1-make-changes-and-commit]
    • <h2> Step 2: Click the Push Button [#step-2-click-the-push-button]
Count: 4 · Errors: 0 · /docs/app/presets/push
  • <h1> Chat with Documents [#chat-with-documents]
    • <h2> Terminology [#terminology]
    • <h2> RAG vs. Full document 'in context' [#rag-vs-full-document-in-context]
    • <h2> Tip for successful RAG [#tip-for-successful-rag]
Count: 4 · Errors: 0 · /docs/app/basics/rag
  • <h1> System Requirements [#system-requirements]
    • <h2> macOS [#macos]
    • <h2> Windows [#windows]
    • <h2> Linux [#linux]
Count: 4 · Errors: 0 · /docs/app/system-requirements
  • <h1> Publish Your Presets [#publish-your-presets]
    • <h2> Step 1: Click the Publish Button [#step-1-click-the-publish-button]
    • <h2> Step 2: Set the Preset Details [#step-2-set-the-preset-details]
Count: 3 · Errors: 0 · /docs/app/presets/publish
  • <h1> Pull Updates [#pull-updates]
    • <h2> How to Pull Updates [#how-to-pull-updates]
    • <h2> Your Presets vs Others' [#your-presets-vs-others]
Count: 3 · Errors: 0 · /docs/app/presets/pull
  • <h1> lmstudio-js (TypeScript SDK) [#lmstudio-js-typescript-sdk]
    • <h2> Installing the SDK [#installing-the-sdk]
    • <h2> Features [#features]
    • <h2> Quick Example: Chat with a Llama Model [#quick-example-chat-with-a-llama-model]
      • <h3> Getting Local Models [#getting-local-models]
Count: 5 · Errors: 0 · /docs/typescript
  • <h1> Speculative Decoding [#speculative-decoding]
    • <h2> What is Speculative Decoding [#what-is-speculative-decoding]
    • <h2> How to enable Speculative Decoding [#how-to-enable-speculative-decoding]
      • <h3> Finding compatible draft models [#finding-compatible-draft-models]
    • <h2> Key factors affecting performance [#key-factors-affecting-performance]
      • <h3> An important trade-off [#an-important-trade-off]
      • <h3> Prompt dependent [#prompt-dependent]
Count: 7 · Errors: 0 · /docs/app/advanced/speculative-decoding
  • <h1> LM Studio Developer Docs [#lm-studio-developer-docs]
    • <h2> Get to know the stack [#get-to-know-the-stack]
    • <h2> What you can build [#what-you-can-build]
    • <h2> Install llmster for headless deployments [#install-llmster-for-headless-deployments]
    • <h2> Super quick start [#super-quick-start]
      • <h3> TypeScript (lmstudio-js) [#typescript-lmstudio-js]
      • <h3> Python (lmstudio-python) [#python-lmstudio-python]
      • <h3> HTTP (LM Studio REST API) [#http-lm-studio-rest-api]
    • <h2> Helpful links [#helpful-links]
Count: 9 · Errors: 0 · /docs/developer
  • <h1> lms — LM Studio's CLI [#lms--lm-studios-cli]
    • <h2> Install lms [#install-lms]
    • <h2> Open source [#open-source]
    • <h2> Command quick links [#command-quick-links]
      • <h3> Verify the installation [#verify-the-installation]
    • <h2> Use lms to automate and debug your workflows [#use-lms-to-automate-and-debug-your-workflows]
      • <h3> Start and stop the local server [#start-and-stop-the-local-server]
      • <h3> List the local models on the machine [#list-the-local-models-on-the-machine]
      • <h3> List the currently loaded models [#list-the-currently-loaded-models]
      • <h3> Load a model (with options) [#load-a-model-with-options]
      • <h3> Unload a model [#unload-a-model]
Count: 11 · Errors: 0 · /docs/cli
  • <h1> Introduction to model.yaml [#introduction-to-modelyaml]
    • <h2> Core fields [#core-fields]
      • <h3> model [#model]
      • <h3> base [#base]
      • <h3> metadataOverrides [#metadataoverrides]
      • <h3> config [#config]
      • <h3> customFields [#customfields]
    • <h2> Complete example [#complete-example]
Count: 8 · Errors: 0 · /docs/app/modelyaml
  • <h1> LM Link [#lm-link]
    • <h2> What can I do with LM Link? [#what-can-i-do-with-lm-link]
    • <h2> Use Cases [#use-cases]
    • <h2> Use LM Link with [#use-lm-link-with]
Count: 4 · Errors: 0 · /docs/lmlink
  • <h1> Config Presets [#config-presets]
    • <h2> Saving, resetting, and deselecting Presets [#saving-resetting-and-deselecting-presets]
    • <h2> Importing, Publishing, and Updating Downloaded Presets [#importing-publishing-and-updating-downloaded-presets]
    • <h2> Example: Build your own Prompt Library [#example-build-your-own-prompt-library]
    • <h2> Where Presets are stored [#where-presets-are-stored]
      • <h3> Migration from LM Studio 0.2.* Presets [#migration-from-lm-studio-02-presets]
      • <h3> Community [#community]
Count: 7 · Errors: 0 · /docs/app/presets
  • <h1> Run LM Studio as a service (headless) [#run-lm-studio-as-a-service-headless]
    • <h2> Option 1: llmster (recommended) [#option-1-llmster-recommended]
      • <h3> Install llmster [#install-llmster]
      • <h3> Start llmster [#start-llmster]
    • <h2> Option 2: Desktop app in headless mode [#option-2-desktop-app-in-headless-mode]
      • <h3> Run the LLM service on machine login [#run-the-llm-service-on-machine-login]
      • <h3> Auto Server Start [#auto-server-start]
    • <h2> Just-In-Time (JIT) model loading for REST endpoints [#just-in-time-jit-model-loading-for-rest-endpoints]
      • <h3> Community [#community]
Count: 9 · Errors: 0 · /docs/developer/core/headless
  • <h1> Using LM Link with Integrations [#using-lm-link-with-integrations]
    • <h2> Use your integration as normal [#use-your-integration-as-normal]
      • <h3> Claude Code [#claude-code]
      • <h3> Codex [#codex]
    • <h2> Set a preferred device [#set-a-preferred-device]
Count: 5 · Errors: 0 · /docs/integrations/lmlink
  • <h1> Chat Completions [#chat-completions]
    • <h2> Quick Example: Generate a Chat Response [#quick-example-generate-a-chat-response]
    • <h2> Obtain a Model [#obtain-a-model]
    • <h2> Manage Chat Context [#manage-chat-context]
    • <h2> Generate a response [#generate-a-response]
    • <h2> Customize Inferencing Parameters [#customize-inferencing-parameters]
    • <h2> Print prediction stats [#print-prediction-stats]
    • <h2> Example: Multi-turn Chat [#example-multi-turn-chat]
/docs/typescript/llm-prediction/chat-completion (8 headings)
  • <h1> Authentication [#authentication]
    • <h2> Providing the API Token [#providing-the-api-token]
/docs/typescript/authentication (2 headings)
  • <h1> Tokenization [#tokenization]
    • <h2> Tokenize [#tokenize]
    • <h2> Count tokens [#count-tokens]
      • <h3> Example: Count Context [#example-count-context]
/docs/typescript/tokenization (4 headings)
  • <h1> Get Context Length [#get-context-length]
    • <h2> Use the getContextLength() Function on the Model Object [#use-the-getcontextlength-function-on-the-model-object]
      • <h3> Example: Check if the input will fit in the model's context window [#example-check-if-the-input-will-fit-in-the-models-context-window]
/docs/typescript/model-info/get-context-length (3 headings)
  • <h1> Working with Chats [#working-with-chats]
    • <h2> Option 1: Array of Messages [#option-1-array-of-messages]
    • <h2> Option 2: Input a Single String [#option-2-input-a-single-string]
    • <h2> Option 3: Using the Chat Helper Class [#option-3-using-the-chat-helper-class]
/docs/typescript/llm-prediction/working-with-chats (4 headings)
  • <h1> The .act() call [#the-act-call]
    • <h2> Automatic tool calling [#automatic-tool-calling]
      • <h3> Quick Example [#quick-example]
      • <h3> What does it mean for an LLM to "use a tool"? [#what-does-it-mean-for-an-llm-to-use-a-tool]
      • <h3> Important: Model Selection [#important-model-selection]
      • <h3> Example: Multiple Tools [#example-multiple-tools]
      • <h3> Example: Chat Loop with Create File Tool [#example-chat-loop-with-create-file-tool]
/docs/typescript/agent/act (7 headings)
  • <h1> Using npm Dependencies [#using-npm-dependencies]
    • <h2> Add dependencies to your plugin with npm [#add-dependencies-to-your-plugin-with-npm]
      • <h3> postinstall scripts [#postinstall-scripts]
    • <h2> Using Other Package Managers [#using-other-package-managers]
/docs/typescript/plugins/dependencies (4 headings)
  • <h1> List Loaded Models [#list-loaded-models]
    • <h2> List Models Currently Loaded in Memory [#list-models-currently-loaded-in-memory]
/docs/typescript/manage-models/list-loaded (2 headings)
  • <h1> Cancelling Predictions [#cancelling-predictions]
    • <h2> 1. Call .cancel() on the prediction [#1-call-cancel-on-the-prediction]
    • <h2> 2. Use an AbortController [#2-use-an-abortcontroller]
/docs/typescript/llm-prediction/cancelling-predictions (3 headings)
  • <h1> Structured Response [#structured-response]
    • <h2> Enforce Using a zod Schema [#enforce-using-a-zod-schema]
    • <h2> Enforce Using a JSON Schema [#enforce-using-a-json-schema]
/docs/typescript/llm-prediction/structured-response (3 headings)
  • <h1> Manage Models in Memory [#manage-models-in-memory]
    • <h2> Get the Current Model with .model() [#get-the-current-model-with-model]
    • <h2> Get a Specific Model with .model("model-key") [#get-a-specific-model-with-modelmodel-key]
    • <h2> Load a New Instance of a Model with .load() [#load-a-new-instance-of-a-model-with-load]
      • <h3> Note about Instance Identifiers [#note-about-instance-identifiers]
    • <h2> Unload a Model from Memory with .unload() [#unload-a-model-from-memory-with-unload]
    • <h2> Set Custom Load Config Parameters [#set-custom-load-config-parameters]
    • <h2> Set an Auto Unload Timer (TTL) [#set-an-auto-unload-timer-ttl]
/docs/typescript/manage-models/loading (8 headings)
  • <h1> Speculative Decoding [#speculative-decoding]
/docs/typescript/llm-prediction/speculative-decoding (1 heading)
  • <h1> Introduction to Plugins [#introduction-to-plugins]
    • <h2> Getting Started [#getting-started]
      • <h3> Create a new plugin [#create-a-new-plugin]
      • <h3> Run a plugin in development mode [#run-a-plugin-in-development-mode]
    • <h2> Next Steps [#next-steps]
/docs/typescript/plugins (5 headings)
  • <h1> Tool Definition [#tool-definition]
    • <h2> Anatomy of a Tool [#anatomy-of-a-tool]
    • <h2> Tools with External Effects (like Computer Use or API Calls) [#tools-with-external-effects-like-computer-use-or-api-calls]
    • <h2> Example: createFileTool [#example-createfiletool]
      • <h3> Tool Definition [#tool-definition]
      • <h3> Example code using the createFile tool: [#example-code-using-the-createfile-tool]
/docs/typescript/agent/tools (6 headings)
  • <h1> List Local Models [#list-local-models]
    • <h2> Available Model on the Local Machine [#available-model-on-the-local-machine]
      • <h3> Example output: [#example-output]
/docs/typescript/manage-models/list-downloaded (3 headings)
  • <h1> Get Model Info [#get-model-info]
/docs/typescript/model-info/get-model-info (1 heading)
  • <h1> Generate Completions [#generate-completions]
    • <h2> 1. Instantiate a Model [#1-instantiate-a-model]
    • <h2> 2. Generate a Completion [#2-generate-a-completion]
    • <h2> 3. Print Prediction Stats [#3-print-prediction-stats]
    • <h2> Example: Get an LLM to Simulate a Terminal [#example-get-an-llm-to-simulate-a-terminal]
/docs/typescript/llm-prediction/completion (5 headings)
  • <h1> Project Setup [#project-setup]
    • <h2> Creating a New node Project [#creating-a-new-node-project]
    • <h2> Add lmstudio-js to an Exiting Project [#add-lmstudio-js-to-an-exiting-project]
/docs/typescript/project-setup (3 headings)
  • <h1> List Loaded Models [#list-loaded-models]
    • <h2> List Models Currently Loaded in Memory [#list-models-currently-loaded-in-memory]
/docs/python/manage-models/list-loaded (2 headings)
  • <h1> Authentication [#authentication]
    • <h2> Providing the API Token [#providing-the-api-token]
/docs/python/getting-started/authentication (2 headings)
  • <h1> Chat Completions [#chat-completions]
    • <h2> Quick Example: Generate a Chat Response [#quick-example-generate-a-chat-response]
    • <h2> Streaming a Chat Response [#streaming-a-chat-response]
    • <h2> Cancelling a Chat Response [#cancelling-a-chat-response]
    • <h2> Obtain a Model [#obtain-a-model]
    • <h2> Manage Chat Context [#manage-chat-context]
    • <h2> Generate a response [#generate-a-response]
    • <h2> Customize Inferencing Parameters [#customize-inferencing-parameters]
    • <h2> Print prediction stats [#print-prediction-stats]
    • <h2> Example: Multi-turn Chat [#example-multi-turn-chat]
      • <h3> Progress Callbacks [#progress-callbacks]
/docs/python/llm-prediction/chat-completion (11 headings)
  • <h1> Get Load Config [#get-load-config]
/docs/python/model-info/get-load-config (1 heading)
  • <h1> Using lmstudio-python in REPL [#using-lmstudio-python-in-repl]
/docs/python/getting-started/repl (1 heading)
  • <h1> Project Setup [#project-setup]
    • <h2> Installing lmstudio-python [#installing-lmstudio-python]
    • <h2> Customizing the server API host and TCP port [#customizing-the-server-api-host-and-tcp-port]
      • <h3> Checking a specified API server host is running [#checking-a-specified-api-server-host-is-running]
      • <h3> Determining the default local API server port [#determining-the-default-local-api-server-port]
/docs/python/getting-started/project-setup (5 headings)
  • <h1> Text Completions [#text-completions]
    • <h2> 1. Instantiate a Model [#1-instantiate-a-model]
    • <h2> 2. Generate a Completion [#2-generate-a-completion]
    • <h2> 3. Print Prediction Stats [#3-print-prediction-stats]
    • <h2> Example: Get an LLM to Simulate a Terminal [#example-get-an-llm-to-simulate-a-terminal]
    • <h2> Customize Inferencing Parameters [#customize-inferencing-parameters]
      • <h3> Progress Callbacks [#progress-callbacks]
/docs/python/llm-prediction/completion (7 headings)
  • <h1> List Downloaded Models [#list-downloaded-models]
    • <h2> Available Models on the LM Studio Server [#available-models-on-the-lm-studio-server]
      • <h3> Example output: [#example-output]
/docs/python/manage-models/list-downloaded (3 headings)
  • <h1> Tokenization [#tokenization]
    • <h2> Tokenize [#tokenize]
    • <h2> Count tokens [#count-tokens]
      • <h3> Example: count context [#example-count-context]
/docs/python/tokenization (4 headings)
  • <h1> Speculative Decoding [#speculative-decoding]
/docs/python/llm-prediction/speculative-decoding (1 heading)
  • <h1> The .act() call [#the-act-call]
    • <h2> Automatic tool calling [#automatic-tool-calling]
      • <h3> Quick Example [#quick-example]
      • <h3> What does it mean for an LLM to "use a tool"? [#what-does-it-mean-for-an-llm-to-use-a-tool]
      • <h3> Running multiple tool calls in parallel [#running-multiple-tool-calls-in-parallel]
      • <h3> Important: Model Selection [#important-model-selection]
      • <h3> Example: Multiple Tools [#example-multiple-tools]
      • <h3> Example: Chat Loop with Create File Tool [#example-chat-loop-with-create-file-tool]
      • <h3> Progress Callbacks [#progress-callbacks]
/docs/python/agent/act (9 headings)
  • <h1> Manage Models in Memory [#manage-models-in-memory]
    • <h2> Get the Current Model with .model() [#get-the-current-model-with-model]
    • <h2> Get a Specific Model with .model("model-key") [#get-a-specific-model-with-modelmodel-key]
    • <h2> Load a New Instance of a Model with .load_new_instance() [#load-a-new-instance-of-a-model-with-loadnewinstance]
      • <h3> Note about Instance Identifiers [#note-about-instance-identifiers]
    • <h2> Unload a Model from Memory with .unload() [#unload-a-model-from-memory-with-unload]
    • <h2> Set Custom Load Config Parameters [#set-custom-load-config-parameters]
    • <h2> Set an Auto Unload Timer (TTL) [#set-an-auto-unload-timer-ttl]
/docs/python/manage-models/loading (8 headings)
  • <h1> Cancelling Predictions [#cancelling-predictions]
/docs/python/llm-prediction/cancelling-predictions (1 heading)
  • <h1> Get Context Length [#get-context-length]
    • <h2> Use the get_context_length() function on the model object [#use-the-getcontextlength-function-on-the-model-object]
      • <h3> Example: Check if the input will fit in the model's context window [#example-check-if-the-input-will-fit-in-the-models-context-window]
/docs/python/model-info/get-context-length (3 headings)
  • <h1> Tool Definition [#tool-definition]
    • <h2> Anatomy of a Tool [#anatomy-of-a-tool]
    • <h2> Tools with External Effects (like Computer Use or API Calls) [#tools-with-external-effects-like-computer-use-or-api-calls]
    • <h2> Example: create_file_tool [#example-createfiletool]
      • <h3> Tool Definition [#tool-definition]
      • <h3> Example code using the create_file tool: [#example-code-using-the-createfile-tool]
    • <h2> Handling tool calling errors [#handling-tool-calling-errors]
/docs/python/agent/tools (7 headings)
  • <h1> Get Model Info [#get-model-info]
    • <h2> Example output [#example-output]
/docs/python/model-info/get-model-info (2 headings)
  • <h1> Working with Chats [#working-with-chats]
    • <h2> Option 1: Input a Single String [#option-1-input-a-single-string]
    • <h2> Option 2: Using the Chat Helper Class [#option-2-using-the-chat-helper-class]
    • <h2> Option 3: Providing Chat History Data Directly [#option-3-providing-chat-history-data-directly]
/docs/python/llm-prediction/working-with-chats (4 headings)
  • <h1> Structured Response [#structured-response]
    • <h2> Enforce Using a Class Based Schema Definition [#enforce-using-a-class-based-schema-definition]
    • <h2> Enforce Using a JSON Schema [#enforce-using-a-json-schema]
/docs/python/llm-prediction/structured-response (3 headings)
  • <h1> List Models [#list-models]
/docs/developer/openai-compat/models (1 heading)
  • <h1> Chat with a model [#chat-with-a-model]
/docs/developer/rest/chat (1 heading)
  • <h1> Load a model [#load-a-model]
/docs/developer/rest/load (1 heading)
  • <h1> Download a model [#download-a-model]
/docs/developer/rest/download (1 heading)
  • <h1> LM Studio API [#lm-studio-api]
    • <h2> What's new [#whats-new]
    • <h2> Supported endpoints [#supported-endpoints]
    • <h2> Inference endpoint comparison [#inference-endpoint-comparison]
/docs/developer/rest (4 headings)
  • <h1> Get download status [#get-download-status]
/docs/developer/rest/download-status (1 heading)
  • <h1> Idle TTL and Auto-Evict [#idle-ttl-and-auto-evict]
    • <h2> Background [#background]
    • <h2> Idle TTL [#idle-ttl]
      • <h3> Set App-default Idle TTL [#set-app-default-idle-ttl]
      • <h3> Set per-model TTL-model in API requests [#set-per-model-ttl-model-in-api-requests]
      • <h3> Set TTL for models loaded with lms [#set-ttl-for-models-loaded-with-lms]
      • <h3> Specify TTL when loading models in the server tab [#specify-ttl-when-loading-models-in-the-server-tab]
    • <h2> Configure Auto-Evict for JIT loaded models [#configure-auto-evict-for-jit-loaded-models]
      • <h3> Nomenclature [#nomenclature]
/docs/developer/core/ttl-and-auto-evict (9 headings)
  • <h1> List your models [#list-your-models]
/docs/developer/rest/list (1 heading)
  • <h1> Responses [#responses]
/docs/developer/openai-compat/responses (1 heading)
  • <h1> Get up and running with the LM Studio API [#get-up-and-running-with-the-lm-studio-api]
    • <h2> Start the server [#start-the-server]
    • <h2> API Authentication [#api-authentication]
    • <h2> Chat with a model [#chat-with-a-model]
    • <h2> Use MCP servers via API [#use-mcp-servers-via-api]
    • <h2> Download a model [#download-a-model]
/docs/developer/rest/quickstart (6 headings)
  • <h1> Setup llmster as a Startup Task on Linux [#setup-llmster-as-a-startup-task-on-linux]
    • <h2> Install the Daemon [#install-the-daemon]
    • <h2> Download a Model [#download-a-model]
    • <h2> Manual Test [#manual-test]
    • <h2> Create Systemd Service [#create-systemd-service]
    • <h2> Enable and Start the Service [#enable-and-start-the-service]
    • <h2> Verify [#verify]
    • <h2> Service Management [#service-management]
    • <h2> Community [#community]
/docs/developer/core/headless_llmster (9 headings)
  • <h1> Stateful Chats [#stateful-chats]
    • <h2> How it works [#how-it-works]
    • <h2> Continue a conversation [#continue-a-conversation]
    • <h2> Disable stateful storage [#disable-stateful-storage]
/docs/developer/rest/stateful-chats (4 headings)
  • <h1> Tool Use [#tool-use]
    • <h2> Quick Start [#quick-start]
      • <h3> 1. Start LM Studio as a server [#1-start-lm-studio-as-a-server]
      • <h3> 2. Load a Model [#2-load-a-model]
      • <h3> 3. Copy, Paste, and Run an Example! [#3-copy-paste-and-run-an-example]
    • <h2> Tool Use [#tool-use]
      • <h3> What really is "Tool Use"? [#what-really-is-tool-use]
      • <h3> High-level flow [#high-level-flow]
      • <h3> In-depth flow [#in-depth-flow]
    • <h2> Supported Models [#supported-models]
      • <h3> Native tool use support [#native-tool-use-support]
      • <h3> Default tool use support [#default-tool-use-support]
    • <h2> Example using curl [#example-using-curl]
    • <h2> Examples using python [#examples-using-python]
      • <h3> Single-turn example [#single-turn-example]
      • <h3> Multi-turn example [#multi-turn-example]
      • <h3> Advanced agent example [#advanced-agent-example]
      • <h3> Streaming [#streaming]
    • <h2> Community [#community]
/docs/developer/openai-compat/tools (19 headings)
  • <h1> Unload a model [#unload-a-model]
/docs/developer/rest/unload (1 heading)
  • <h1> Using LM Link [#using-lm-link]
    • <h2> Overview [#overview]
    • <h2> Use the REST API as normal [#use-the-rest-api-as-normal]
/docs/developer/core/lmlink (3 headings)
  • <h1> Embeddings [#embeddings]
/docs/developer/openai-compat/embeddings (1 heading)
  • <h1> Completions (Legacy) [#completions-legacy]
/docs/developer/openai-compat/completions (1 heading)
  • <h1> Messages [#messages]
/docs/developer/anthropic-compat/messages (1 heading)
  • <h1> Using MCP via API [#using-mcp-via-api]
    • <h2> How it works [#how-it-works]
    • <h2> Ephemeral vs mcp.json servers [#ephemeral-vs-mcpjson-servers]
    • <h2> Ephemeral MCP servers [#ephemeral-mcp-servers]
    • <h2> MCP servers from mcp.json [#mcp-servers-from-mcpjson]
    • <h2> Restricting tool access [#restricting-tool-access]
    • <h2> Custom headers for ephemeral servers [#custom-headers-for-ephemeral-servers]
/docs/developer/core/mcp (7 headings)
  • <h1> Contributing [#contributing]
/docs/cli/contributing (1 heading)
  • <h1> lms ps [#lms-ps]
    • <h2> List loaded models [#list-loaded-models]
      • <h3> JSON output [#json-output]
    • <h2> Operate on a remote LM Studio instance [#operate-on-a-remote-lm-studio-instance]
/docs/cli/local-models/ps (4 headings)
  • <h1> lms server stop [#lms-server-stop]
/docs/cli/serve/server-stop (1 heading)
  • <h1> lms link enable [#lms-link-enable]
    • <h2> Enable LM Link [#enable-lm-link]
      • <h3> Check the connection status [#check-the-connection-status]
      • <h3> Disable LM Link [#disable-lm-link]
      • <h3> Learn more [#learn-more]
/docs/cli/link/link-enable (5 headings)
  • <h1> lms link disable [#lms-link-disable]
    • <h2> Disable LM Link [#disable-lm-link]
      • <h3> Learn more [#learn-more]
/docs/cli/link/link-disable (3 headings)
  • <h1> lms link set-preferred-device [#lms-link-set-preferred-device]
    • <h2> Set a preferred device [#set-a-preferred-device]
      • <h3> Learn more [#learn-more]
/docs/cli/link/link-set-preferred-device (3 headings)
  • <h1> Set a preferred device [#set-a-preferred-device]
    • <h2> Choosing a preferred device [#choosing-a-preferred-device]
      • <h3> Machines with GUI [#machines-with-gui]
      • <h3> Machines without GUI [#machines-without-gui]
/docs/lmlink/basics/preferred-device (4 headings)
  • <h1> lms link set-device-name [#lms-link-set-device-name]
    • <h2> Rename this device [#rename-this-device]
      • <h3> Learn more [#learn-more]
/docs/cli/link/link-set-device-name (3 headings)
  • <h1> Add a Device [#add-a-device]
    • <h2> Add a new device [#add-a-new-device]
      • <h3> Machines with GUI [#machines-with-gui]
      • <h3> Machines without GUI [#machines-without-gui]
    • <h2> Load models on remote machines [#load-models-on-remote-machines]
/docs/lmlink/basics/add-device (5 headings)
  • <h1> Setup a link [#setup-a-link]
    • <h2> Requesting Access [#requesting-access]
/docs/lmlink/basics (2 headings)
  • <h1> Frequently Asked Questions [#frequently-asked-questions]
    • <h2> Q&A [#qa]
/docs/lmlink/basics/faq (2 headings)
  • <h1> Introduction to Tools Provider [#introduction-to-tools-provider]
    • <h2> Examples [#examples]
/docs/typescript/plugins/tools-provider (2 headings)
  • <h1> Introduction [#introduction]
    • <h2> Examples [#examples]
/docs/typescript/plugins/generator (2 headings)
  • <h1> Introduction [#introduction]
    • <h2> Types of Configurations [#types-of-configurations]
    • <h2> Examples [#examples]
/docs/typescript/plugins/custom-configuration (3 headings)
  • <h1> Serve on Local Network [#serve-on-local-network]
/docs/developer/core/server/serve-on-network (1 heading)
  • <h1> Multiple Tools [#multiple-tools]
/docs/typescript/plugins/tools-provider/multiple-tools (1 heading)
  • <h1> Status Reports & Warnings [#status-reports--warnings]
    • <h2> Handling Aborts [#handling-aborts]
/docs/typescript/plugins/tools-provider/status-reports-and-warnings (2 headings)
  • <h1> Handling Aborts [#handling-aborts]
/docs/typescript/plugins/tools-provider/handling-aborts (1 heading)
  • <h1> Custom Configuration [#custom-configuration]
/docs/typescript/plugins/tools-provider/custom-configuration (1 heading)
  • <h1> Single Tool [#single-tool]
    • <h2> Tips [#tips]
/docs/typescript/plugins/tools-provider/single-tool (2 headings)
  • <h1> Handling Aborts [#handling-aborts]
/docs/typescript/plugins/prompt-preprocessor/handling-aborts (1 heading)
  • <h1> Custom Configuration [#custom-configuration]
/docs/typescript/plugins/prompt-preprocessor/custom-configuration (1 heading)
  • <h1> Custom Status Report [#custom-status-report]
/docs/typescript/plugins/prompt-preprocessor/custom-status-report (1 heading)
  • <h1> Tool calling generators [#tool-calling-generators]
/docs/typescript/plugins/generator/tool-calling-generators (1 heading)
  • <h1> Text-only Generators [#text-only-generators]
    • <h2> Custom Configurations [#custom-configurations]
    • <h2> Handling Aborts [#handling-aborts]
/docs/typescript/plugins/generator/text-only-generators (3 headings)
  • <h1> Defining New Fields [#defining-new-fields]
/docs/typescript/plugins/custom-configuration/defining-new-fields (1 heading)
  • <h1> Accessing Configuration [#accessing-configuration]
/docs/typescript/plugins/custom-configuration/accessing-config (1 heading)
  • <h1> config.ts File [#configts-file]
/docs/typescript/plugins/custom-configuration/config-ts (1 heading)
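
Each outline above can be reproduced for a single page with a short script. A minimal sketch, assuming Python with the third-party requests and beautifulsoup4 packages (the helper name heading_outline is hypothetical):

import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

def heading_outline(url: str) -> list[str]:
    """Build the same indented <h1>-<h3> outline as the listing above."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    lines = []
    for tag in soup.find_all(["h1", "h2", "h3"]):
        indent = "  " * (int(tag.name[1]) - 1)  # h1 flush left, h2/h3 indented
        lines.append(f"{indent}<{tag.name}> {tag.get_text(strip=True)}")
    return lines

print("\n".join(heading_outline("https://lmstudio.ai/docs/developer")))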

Redirected URLs

Found 19 row(s).
Status | Redirected URL | Target URL | Found at URL
307 | /docs | /docs/app | -
307 | /docs/advanced/sideload | /docs/app/basics/import-model | /docs/app/basics
307 | /docs/api/openai-api | /docs/developer/openai-compat | /docs/app
307 | /docs/api/rest-api | /docs/developer/rest-api | /docs/app
307 | /docs/api/ttl-and-auto-evict | /docs/developer/core/ttl-and-auto-evict | /docs/typescript/manage-models/loading
307 | /docs/app/api/ttl-and-auto-evict | /docs/developer/core/ttl-and-auto-evict | /docs/python/manage-models/loading
307 | /docs/app/basics/import-model | /docs/app/advanced/import-model | /docs/advanced/sideload
308 | /docs/app/modelyaml/ | /docs/app/modelyaml | /docs/app/modelyaml/publish
307 | /docs/app/plugins/mcp | /docs/app/mcp | /docs/app
307 | /docs/app/plugins/mcp/deeplink | /docs/app/mcp/deeplink | /docs/app
307 | /docs/cli/get | /docs/cli/local-models/get | /docs/typescript
307 | /docs/cli/ls | /docs/cli/local-models/ls | /docs/typescript/manage-models/list-downloaded
307 | /docs/cli/ps | /docs/cli/local-models/ps | /docs/typescript/manage-models/list-loaded
307 | /docs/configuration/per-model | /docs/app/advanced/per-model | /docs/app/presets
307 | /docs/developer/core/tools | /docs/developer/openai-compat/tools | /docs/developer/api-changelog
307 | /docs/developer/openai-api | /docs/developer/openai-compat | /docs/developer/core/ttl-and-auto-evict
307 | /docs/developer/rest-api | /docs/developer/rest | /docs/api/rest-api
307 | /docs/modes | /docs/app/user-interface/modes | /docs/app/user-interface/languages
307 | /docs/system-requirements | /docs/app/system-requirements | /docs/app/basics
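
Any row above can be spot-checked by requesting the old URL without following redirects and reading the Location header. A minimal sketch, assuming Python with the third-party requests package:

import requests  # pip install requests

# Do not follow the redirect, so the raw status code and Location are visible.
resp = requests.get("https://lmstudio.ai/docs/cli/ps",
                    allow_redirects=False, timeout=10)
print(resp.status_code, resp.headers.get("Location"))
# Expected per the table above: 307 /docs/cli/local-models/ps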

Skipped URLs Summary

Found 21 row(s).
Reason | Domain | Unique URLs
Not allowed host | github.com | 71
Not allowed host | platform.openai.com | 7
Not allowed host | model.lmstudio.ai | 6
Not allowed host | huggingface.co | 2
Not allowed host | discord.gg | 2
Not allowed host | deno.com | 1
Not allowed host | forms.gle | 1
Not allowed host | vorpus.org | 1
Not allowed host | docs.continue.dev | 1
Not allowed host | en.wikipedia.org | 1
Not allowed host | anyio.readthedocs.io | 1
Not allowed host | jcristharif.com | 1
Not allowed host | platform.claude.com | 1
Not allowed host | developer.mozilla.org | 1
Not allowed host | git-scm.com | 1
Not allowed host | www.linkedin.com | 1
Not allowed host | docs.python.org | 1
Not allowed host | zod.dev | 1
Not allowed host | docs.pydantic.dev | 1
Not allowed host | json-schema.org | 1
Not allowed host | twitter.com | 1
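
"Not allowed host" means the crawler was restricted to lmstudio.ai and skipped links to external hosts. A minimal sketch of such a host filter (hypothetical; the crawler's actual configuration is not part of this report):

from urllib.parse import urlparse

ALLOWED_HOSTS = {"lmstudio.ai"}  # hypothetical allowlist for this crawl

def is_allowed(url: str) -> bool:
    """Fetch only URLs on an allowed host; everything else is skipped."""
    return (urlparse(url).hostname or "") in ALLOWED_HOSTS

print(is_allowed("https://github.com/lmstudio-ai/lms"))  # False -> skipped
print(is_allowed("https://lmstudio.ai/docs/cli"))        # True  -> crawled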

Skipped URLs

Found 104 row(s).
Reason | Skipped URL | Source | Found at URL
Not allowed host | https://anyio.readthedocs.io/en/stable/cancellation.html | <a href> | /docs/python
Not allowed host | https://deno.com/ | <a href> | /docs/typescript/plugins/tools-provider
Not allowed host | https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal | <a href> | /docs/typescript/plugins/tools-provider/status-reports-and-warnings
Not allowed host | https://discord.gg/aPQfnNkxGC | <a href> | /docs/app
Not allowed host | https://discord.gg/lmstudio | <a href> | /docs/app
Not allowed host | https://docs.continue.dev/customize/model-providers/more/lmstudio | <a href> | /docs/developer/core/ttl-and-auto-evict
Not allowed host | https://docs.pydantic.dev/ | <a href> | /docs/python/llm-prediction/structured-response
Not allowed host | https://docs.python.org/3/library/asyncio-task.html | <a href> | /docs/python
Not allowed host | https://en.wikipedia.org/wiki/Jinja_(template_engine) | <a href> | /docs/app/advanced/prompt-template
Not allowed host | https://forms.gle/ZPfGLMvVC6DbSRQm9 | <a href> | /docs/typescript/plugins/dependencies
Not allowed host | https://git-scm.com/download/win | <a href> | /docs/developer/openai-compat/structured-output
Not allowed host | https://github.com/0haris0 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/AbiruzzamanMolla | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Adjacentai | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/AlexisGross | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Autumn-Whisper | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Bl4ck-D0g | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/BrassaiKao | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/DenisZekiria | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Exlo84 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Goekdeniz-Guelmez | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Gopro3010 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/HostFly | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Mekemoka | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/MotyaDev | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/NHLOCAL | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Plexi09 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Sm1g00l | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/SweetDream0256 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/Tonband | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/alaaf11 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/alexandrughinea | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/altiereslima | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/catarino | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/ceshizhuanyong895 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/danieltechdev | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/darwindev | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/digitalsp | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/divergentti | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/dottxt-ai/outlines | <a href> | /docs/developer/openai-compat/structured-output
Not allowed host | https://github.com/dwirx | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/enKl03B | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/evansrrr | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/fralapo | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/gnoparus | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/godkyo98 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/haqbany | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/hmelenok | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/ilikecatgirls | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/kywarai | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/ladislavsulc | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/lmstudio-ai | <a href> | /docs/app
Not allowed host | https://github.com/lmstudio-ai/lms | <a href> | /docs/cli
Not allowed host | https://github.com/lmstudio-ai/lmstudio-bug-tracker | <a href> | /docs/app/system-requirements
Not allowed host | https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues | <a href> | /docs/developer/core/headless
Not allowed host | https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/89 | <a href> | /docs/app/offline
Not allowed host | https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/9 | <a href> | /docs/app/system-requirements
Not allowed host | https://github.com/lmstudio-ai/lmstudio-js | <a href> | /docs/typescript
Not allowed host | https://github.com/lmstudio-ai/lmstudio-python | <a href> | /docs/python
Not allowed host | https://github.com/lmstudio-ai/localization | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/lmstudio-ai/mlx-engine | <a href> | /docs/developer/openai-compat/structured-output
Not allowed host | https://github.com/marcelMaier | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/microsoft/playwright-mcp | <a href> | /docs/developer/core/authentication
Not allowed host | https://github.com/ml-explore/mlx | <a href> | /docs/app
Not allowed host | https://github.com/mlatysh | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/modelyaml/modelyaml | <a href> | /docs/app/modelyaml
Not allowed host | https://github.com/mohammad007kh | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/neotan | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/nikypalma | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/nossbar | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/prasanthc41m | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/progesor | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/reinew | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/seropheem | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/shadow01a | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/shelomitsky | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/suhailtajshaik | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/trinhvanminh | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/williamjeong2 | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/xkonglong | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/xtianpaiva | <a href> | /docs/app/user-interface/languages
Not allowed host | https://github.com/zed-industries/zed/blob/main/crates/lmstudio/src/lmstudio.rs | <a href> | /docs/developer/core/ttl-and-auto-evict
Not allowed host | https://huggingface.co/docs/hub/en/security-tokens | <a href> | /docs/app/mcp
Not allowed host | https://huggingface.co/mlx-community/Qwen2.5-7B-Instruct-4bit/blob/…0fa35d9fed/tokenizer_config.json | <a href> | /docs/developer/openai-compat/tools
Not allowed host | https://jcristharif.com/msgspec/ | <a href> | /docs/python/llm-prediction/structured-response
Not allowed host | https://json-schema.org/overview/what-is-jsonschema | <a href> | /docs/developer/openai-compat/structured-output
Not allowed host | https://model.lmstudio.ai/download/bartowski/Ministral-8B-Instruct-2410-GGUF | <a href> | /docs/developer/openai-compat/tools
Not allowed host | https://model.lmstudio.ai/download/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF | <a href> | /docs/developer/openai-compat/tools
Not allowed host | https://model.lmstudio.ai/download/lmstudio-community/Qwen2.5-7B-Instruct-GGUF | <a href> | /docs/typescript/agent/act
Not allowed host | https://model.lmstudio.ai/download/mlx-community/Meta-Llama-3.1-8B-Instruct-8bit | <a href> | /docs/developer/openai-compat/tools
Not allowed host | https://model.lmstudio.ai/download/mlx-community/Ministral-8B-Instruct-2410-4bit | <a href> | /docs/developer/openai-compat/tools
Not allowed host | https://model.lmstudio.ai/download/mlx-community/Qwen2.5-7B-Instruct-4bit | <a href> | /docs/developer/openai-compat/tools
Not allowed host | https://platform.claude.com/docs/en/api/messages/create | <a href> | /docs/developer/anthropic-compat/messages
Not allowed host | https://platform.openai.com/docs/api-reference/chat | <a href> | /docs/developer/openai-compat/chat-completions
Not allowed host | https://platform.openai.com/docs/api-reference/chat/create | <a href> | /docs/developer/openai-compat/chat-completions
Not allowed host | https://platform.openai.com/docs/api-reference/completions | <a href> | /docs/developer/openai-compat/completions
Not allowed host | https://platform.openai.com/docs/api-reference/embeddings | <a href> | /docs/developer/openai-compat/embeddings
Not allowed host | https://platform.openai.com/docs/api-reference/responses | <a href> | /docs/developer/openai-compat/responses
Not allowed host | https://platform.openai.com/docs/guides/function-calling | <a href> | /docs/developer/openai-compat/tools
Not allowed host | https://platform.openai.com/docs/guides/structured-outputs | <a href> | /docs/developer/openai-compat/structured-output
Not allowed host | https://twitter.com/lmstudio | <a href> | /docs/app
Not allowed host | https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/ | <a href> | /docs/python
Not allowed host | https://www.linkedin.com/company/lmstudio-ai/ | <a href> | /docs/app
Not allowed host | https://zod.dev/ | <a href> | /docs/typescript/llm-prediction/structured-response

External URLs

Found 104 row(s).
External URL | Pages | Found on URL (max 5)
https://anyio.readthedocs.io/en/stable/cancellation.html | 1 | /docs/python
https://deno.com/ | 1 | /docs/typescript/plugins/tools-provider
https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal | 1 | /docs/typescript/plugins/tools-provider/status-reports-and-warnings
https://discord.gg/aPQfnNkxGC | 1 | /docs/app
https://discord.gg/lmstudio | 1 | /docs/app
https://docs.continue.dev/customize/model-providers/more/lmstudio | 1 | /docs/developer/core/ttl-and-auto-evict
https://docs.pydantic.dev/ | 1 | /docs/python/llm-prediction/structured-response
https://docs.python.org/3/library/asyncio-task.html | 1 | /docs/python
https://en.wikipedia.org/wiki/Jinja_(template_engine) | 1 | /docs/app/advanced/prompt-template
https://forms.gle/ZPfGLMvVC6DbSRQm9 | 1 | /docs/typescript/plugins/dependencies
https://git-scm.com/download/win | 1 | /docs/developer/openai-compat/structured-output
https://github.com/0haris0 | 1 | /docs/app/user-interface/languages
https://github.com/AbiruzzamanMolla | 1 | /docs/app/user-interface/languages
https://github.com/Adjacentai | 1 | /docs/app/user-interface/languages
https://github.com/AlexisGross | 1 | /docs/app/user-interface/languages
https://github.com/Autumn-Whisper | 1 | /docs/app/user-interface/languages
https://github.com/Bl4ck-D0g | 1 | /docs/app/user-interface/languages
https://github.com/BrassaiKao | 1 | /docs/app/user-interface/languages
https://github.com/DenisZekiria | 1 | /docs/app/user-interface/languages
https://github.com/Exlo84 | 1 | /docs/app/user-interface/languages
https://github.com/Goekdeniz-Guelmez | 1 | /docs/app/user-interface/languages
https://github.com/Gopro3010 | 1 | /docs/app/user-interface/languages
https://github.com/HostFly | 1 | /docs/app/user-interface/languages
https://github.com/Mekemoka | 1 | /docs/app/user-interface/languages
https://github.com/MotyaDev | 1 | /docs/app/user-interface/languages
https://github.com/NHLOCAL | 1 | /docs/app/user-interface/languages
https://github.com/Plexi09 | 1 | /docs/app/user-interface/languages
https://github.com/Sm1g00l | 1 | /docs/app/user-interface/languages
https://github.com/SweetDream0256 | 1 | /docs/app/user-interface/languages
https://github.com/Tonband | 1 | /docs/app/user-interface/languages
https://github.com/alaaf11 | 1 | /docs/app/user-interface/languages
https://github.com/alexandrughinea | 1 | /docs/app/user-interface/languages
https://github.com/altiereslima | 1 | /docs/app/user-interface/languages
https://github.com/catarino | 1 | /docs/app/user-interface/languages
https://github.com/ceshizhuanyong895 | 1 | /docs/app/user-interface/languages
https://github.com/danieltechdev | 1 | /docs/app/user-interface/languages
https://github.com/darwindev | 1 | /docs/app/user-interface/languages
https://github.com/digitalsp | 1 | /docs/app/user-interface/languages
https://github.com/divergentti | 1 | /docs/app/user-interface/languages
https://github.com/dottxt-ai/outlines | 1 | /docs/developer/openai-compat/structured-output
https://github.com/dwirx | 1 | /docs/app/user-interface/languages
https://github.com/enKl03B | 1 | /docs/app/user-interface/languages
https://github.com/evansrrr | 1 | /docs/app/user-interface/languages
https://github.com/fralapo | 1 | /docs/app/user-interface/languages
https://github.com/gnoparus | 1 | /docs/app/user-interface/languages
https://github.com/godkyo98 | 1 | /docs/app/user-interface/languages
https://github.com/haqbany | 1 | /docs/app/user-interface/languages
https://github.com/hmelenok | 1 | /docs/app/user-interface/languages
https://github.com/ilikecatgirls | 1 | /docs/app/user-interface/languages
https://github.com/kywarai | 1 | /docs/app/user-interface/languages
https://github.com/ladislavsulc | 1 | /docs/app/user-interface/languages
https://github.com/lmstudio-ai | 1 | /docs/app
https://github.com/lmstudio-ai/lms | 1 | /docs/cli
https://github.com/lmstudio-ai/lmstudio-bug-tracker | 1 | /docs/app/system-requirements
https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues | 1 | /docs/developer/core/headless
https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/89 | 1 | /docs/app/offline
https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/9 | 1 | /docs/app/system-requirements
https://github.com/lmstudio-ai/lmstudio-js | 1 | /docs/typescript
https://github.com/lmstudio-ai/lmstudio-python | 1 | /docs/python
https://github.com/lmstudio-ai/localization | 1 | /docs/app/user-interface/languages
https://github.com/lmstudio-ai/mlx-engine | 1 | /docs/developer/openai-compat/structured-output
https://github.com/marcelMaier | 1 | /docs/app/user-interface/languages
https://github.com/microsoft/playwright-mcp | 1 | /docs/developer/core/authentication
https://github.com/ml-explore/mlx | 1 | /docs/app
https://github.com/mlatysh | 1 | /docs/app/user-interface/languages
https://github.com/modelyaml/modelyaml | 1 | /docs/app/modelyaml
https://github.com/mohammad007kh | 1 | /docs/app/user-interface/languages
https://github.com/neotan | 1 | /docs/app/user-interface/languages
https://github.com/nikypalma | 1 | /docs/app/user-interface/languages
https://github.com/nossbar | 1 | /docs/app/user-interface/languages
https://github.com/prasanthc41m | 1 | /docs/app/user-interface/languages
https://github.com/progesor | 1 | /docs/app/user-interface/languages
https://github.com/reinew | 1 | /docs/app/user-interface/languages
https://github.com/seropheem | 1 | /docs/app/user-interface/languages
https://github.com/shadow01a | 1 | /docs/app/user-interface/languages
https://github.com/shelomitsky | 1 | /docs/app/user-interface/languages
https://github.com/suhailtajshaik | 1 | /docs/app/user-interface/languages
https://github.com/trinhvanminh | 1 | /docs/app/user-interface/languages
https://github.com/williamjeong2 | 1 | /docs/app/user-interface/languages
https://github.com/xkonglong | 1 | /docs/app/user-interface/languages
https://github.com/xtianpaiva | 1 | /docs/app/user-interface/languages
https://github.com/zed-industries/zed/blob/main/crates/lmstudio/src/lmstudio.rs | 1 | /docs/developer/core/ttl-and-auto-evict
https://huggingface.co/docs/hub/en/security-tokens | 1 | /docs/app/mcp
https://huggingface.co/mlx-community/Qwen2.5-7B-Instruct-4bit/blob/…0fa35d9fed/tokenizer_config.json | 1 | /docs/developer/openai-compat/tools
https://jcristharif.com/msgspec/ | 1 | /docs/python/llm-prediction/structured-response
https://json-schema.org/overview/what-is-jsonschema | 1 | /docs/developer/openai-compat/structured-output
https://model.lmstudio.ai/download/bartowski/Ministral-8B-Instruct-2410-GGUF | 1 | /docs/developer/openai-compat/tools
https://model.lmstudio.ai/download/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF | 1 | /docs/developer/openai-compat/tools
https://model.lmstudio.ai/download/lmstudio-community/Qwen2.5-7B-Instruct-GGUF | 1 | /docs/typescript/agent/act
https://model.lmstudio.ai/download/mlx-community/Meta-Llama-3.1-8B-Instruct-8bit | 1 | /docs/developer/openai-compat/tools
https://model.lmstudio.ai/download/mlx-community/Ministral-8B-Instruct-2410-4bit | 1 | /docs/developer/openai-compat/tools
https://model.lmstudio.ai/download/mlx-community/Qwen2.5-7B-Instruct-4bit | 1 | /docs/developer/openai-compat/tools
https://platform.claude.com/docs/en/api/messages/create | 1 | /docs/developer/anthropic-compat/messages
https://platform.openai.com/docs/api-reference/chat | 1 | /docs/developer/openai-compat/chat-completions
https://platform.openai.com/docs/api-reference/chat/create | 1 | /docs/developer/openai-compat/chat-completions
https://platform.openai.com/docs/api-reference/completions | 1 | /docs/developer/openai-compat/completions
https://platform.openai.com/docs/api-reference/embeddings | 1 | /docs/developer/openai-compat/embeddings
https://platform.openai.com/docs/api-reference/responses | 1 | /docs/developer/openai-compat/responses
https://platform.openai.com/docs/guides/function-calling | 1 | /docs/developer/openai-compat/tools
https://platform.openai.com/docs/guides/structured-outputs | 1 | /docs/developer/openai-compat/structured-output
https://twitter.com/lmstudio | 1 | /docs/app
https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/ | 1 | /docs/python
https://www.linkedin.com/company/lmstudio-ai/ | 1 | /docs/app
https://zod.dev/ | 1 | /docs/typescript/llm-prediction/structured-response

Content types

Content type | URLs | Total size | Total time | Avg time | Status 20x | Status 30x | Status 40x
HTML | 174 | 110 MB | 45 s | 258 ms | 156 | 0 | 18
Redirect | 19 | 2 kB | 1.9 s | 99 ms | 0 | 19 | 0

Content types (MIME types)

Content type | URLs | Total size | Total time | Avg time | Status 20x | Status 30x | Status 40x
text/html; charset=utf-8 | 174 | 110 MB | 45 s | 258 ms | 156 | 0 | 18
text / html | 19 | 2 kB | 1.9 s | 99 ms | 0 | 19 | 0
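
The Avg time column is total time divided by URL count; tiny differences against the table come from rounding of the displayed totals. A quick check in Python:

# Avg time = total time / URL count, using the rounded totals shown above.
print(round(45 / 174 * 1000))   # HTML: 259 ms (table shows 258 ms)
print(round(1.9 / 19 * 1000))   # Redirect: 100 ms (table shows 99 ms)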

Source domains

Domain | Totals | HTML | Redirect
lmstudio.ai | 193 / 110 MB / 46 s | 174 / 110 MB / 45 s | 19 / 2 kB / 1.9 s

HTTP headers

Found 21 row(s).
Header | Occurs | Unique | Values preview | Min value | Max value
Cache-Control | 174 | 2 | s-maxage=31536000 (156) / private, no-cache, no-store, max-age=0, must-revalidate (18)
Cf-Placement | 193 | 1 | local-FRA
Cf-Ray | 193 | - | [ignored generic values]
Content-Security-Policy | 174 | 1 | default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://plau…e'; frame-src 'self'; child-src 'self';
Content-Type | 193 | 2 | text/html; charset=utf-8 (174) / text/html (19)
Date | 193 | - | [ignored generic values] | 2026-03-24 | 2026-03-24
Location | 19 | 17 | [see values below]
Nel | 193 | 2 | [see values below]
Referrer-Policy | 174 | 1 | strict-origin-when-cross-origin
Report-To | 193 | 20+ | [see values below]
Server | 193 | 1 | cloudflare
Server-Timing | 193 | 20+ | [see values below]
Strict-Transport-Security | 174 | 1 | max-age=259200
Vary | 175 | 2 | rsc, next-router-state-tree, next-router-prefetch, next-router-segment-prefetch (174) / Accept-Encoding (1)
X-Content-Type-Options | 174 | 1 | nosniff
X-Frame-Options | 174 | 1 | DENY
X-Nextjs-Cache | 174 | 1 | HIT
X-Nextjs-Prerender | 174 | 1 | 1,1
X-Nextjs-Stale-Time | 174 | 1 | 300
X-Opennext | 174 | 1 | 1
X-Powered-By | 174 | 1 | Next.js
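
The security-relevant headers above (HSTS, CSP, X-Frame-Options, X-Content-Type-Options, Referrer-Policy) can be re-checked for any page. A minimal sketch, assuming Python with the third-party requests package:

import requests  # pip install requests

resp = requests.get("https://lmstudio.ai/docs/app", timeout=10)
for name in ("Strict-Transport-Security", "Content-Security-Policy",
             "X-Frame-Options", "X-Content-Type-Options", "Referrer-Policy"):
    # Print each security header, or a marker if the page omits it.
    print(f"{name}: {resp.headers.get(name, '<missing>')}")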

HTTP header values

Found 77 row(s).
Header | Occurs | Value
Cache-Control | 156 | s-maxage=31536000
Cache-Control | 18 | private, no-cache, no-store, max-age=0, must-revalidate
Cf-Placement | 193 | local-FRA
Content-Security-Policy | 174 | default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://plausible.io https://static.cloudflareinsights.com; connect-src 'self' https://plausible.io https://cloudflareinsights.com http://127.0.0.1:*; img-src 'self' data: blob: https://*.lmstudio.com https://*.lmstudio.ai https://lmstudio.ai https://lmstudio.com https://huggingface.co https://*.huggingface.co; media-src 'self' https://*.lmstudio.com https://*.lmstudio.ai; style-src 'self' 'unsafe-inline'; font-src 'self'; object-src 'none'; base-uri 'self'; form-action 'self'; frame-ancestors 'none'; frame-src 'self'; child-src 'self';
Content-Type | 174 | text/html; charset=utf-8
Content-Type | 19 | text / html
Location | 2 | /docs/developer/openai-compat
Location | 2 | /docs/developer/core/ttl-and-auto-evict
Location | 1 | /docs/app/user-interface/modes
Location | 1 | /docs/app/mcp
Location | 1 | /docs/cli/local-models/ps
Location | 1 | /docs/app/system-requirements
Location | 1 | /docs/cli/local-models/get
Location | 1 | /docs/app/advanced/import-model
Location | 1 | /docs/cli/local-models/ls
Location | 1 | /docs/developer/rest
Location | 1 | /docs/app/modelyaml
Location | 1 | /docs/app/advanced/per-model
Location | 1 | /docs/developer/rest-api
Location | 1 | /docs/app/mcp/deeplink
Location | 1 | /docs/app/basics/import-model
Location | 1 | /docs/app
Location | 1 | /docs/developer/openai-compat/tools
Nel | 192 | {"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}
Nel | 1 | {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Referrer-Policy | 174 | strict-origin-when-cross-origin
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=VeCHyvSVxgIDwFKyIJsf87zM2L%2FOWkkw8yaY9GUMeIpxl%2FqnLSM6C5TpvS2o7%2FVTQRJoyZBFsUqQA8FEXQZ%2FFzjTnnUQYbDRwPkJ7GRQMAHdG8CfbKM%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=YSbAUbKqDtKrxb6jbxw2j34thDN8fUoPo61kNP0K5byWVKXe3FTPj24VfZEgSK%2BvLXFpObx7g3XG7FM%2FtjGC5b64UHDnzQXVWUQuno%2FjAWFUDHEpiJE%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=CLVFsj%2FfTa09eKGbmR37%2B6%2FHu7lPJZU6pomPTkgyncxjN%2FltDoSZSq8fF3dgsjnVbtiCdvt23hy25qi7ryEgOtzfK%2BS2VB2efXpAYgMFecX5NY5gME0%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=cSU5XS252eo2AxtPXSVFhXdZqD2rpgJUd3%2B6ojwUjV%2FGCgnlkdWAI1E8lkFZWjMITnsNb%2B7McK6A7fyXr26zhGS9p9nPHu9ey%2B1yOGJRTKxEp8GWfpQ%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=9dDmLYe7EceYNpsfUrP08O%2F8bDupLJHcUvx69Nqbf7r%2FNwgL9cKMgBlcWsVwMUC1SGaWm5SeTuHiNy3r9aRpmLF8HowzRDn7zWOFiU4U%2FEkpT4XMRYU%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=RS%2FaH%2FEvawG7OXbGKEapVlqgjvh6RYoE0pgBcYSNq4ZX2Fny%2FnGPRLbCrjooax4MPDqhEuZ3l8Ch4bq%2F7ejQr3bmvtZxL5d1gTd3GM3kMtKWSM8TlB8%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=%2Fh4poeJ8upuGwhUoO%2B51QLjRQ41apLrfzOBk5JdSyTk%2Fj8O8tZgESRjOQGdmn7AvLsh346%2Fc3L1U8BB6V3oJj%2BHRJ%2BA%2FlUQgluinDGFr4G2%2F1nVTG6s%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=Rnu1UydFoKWvuZn8sJ8tr8VNFxLj7qmtO4aN%2BQYK2EGyM8QI86g12A5QFR3Xxn9L0c%2BAsqYhEkpf9f7lDMnDycoEP%2BtHxhegCY5WK%2Fuun0BZzGB%2BghM%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=DN11nWPrkjE3nWiGdcJhqkMHLqqbNcR72ZT103pgx34suU5WSDbaYv%2FAFR1GVgcOYVGt4ioAUAb8MkIiMEnN6kWRTQ7W45ciKNM3S3dHfMoQnLT0tf0%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=76SjBwFhs1di6JIWGtmVlNYLc8qfd0cA83J928gAfHzHZbiFx8QS41qol9%2FEmf%2B%2B%2FCQtosNmsm%2F1oi2BimSgydoaCMH3ludy4i4Oxd3grUcdI5HV4Og%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=lO6g%2BUhRO6JLdEfz1bc60SSHDCjJDf0RoYLbl7UTnN7otAGYUtZ7yfE1vGorpjaBA5jX0QQVPKyn9rssH3K%2F1mOgOOwwu8DzS%2BOlGHAKc%2Fb7eZ3mnZM%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=skOxElQG%2FUeGqpq%2BoyF0NJyLLTa0AHhkiuLYgz5kZj%2FswULDA3pay5xg3nZJyrgtxE6ZVmKrBzo2jlgT5mnMZkhOj2wjTkLskGwP570sWQygrPCunZA%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=STLf5E3wmrATe5uLQ3N62uVWPRU47FBfahR3tShpDzkHNmScL4UQXwpFUiDNJdWn8hWWCN4Ia4IE3YYNSSX2K6gPLdCNToNoEESVUfw88XgBJIAi02g%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=8mEk4ggjbNuknfR7Iemf6PmibYQkK10nEYo5hgSq9hzWusOSvOp0Yg264EjLHsbrnq8K5%2Bh5lo7HLRSOXmhnWgXbi2VYPlkZgAyt7uLc5P1hwN8UmzE%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=cVRdATgmaXAB8IZ7c4Nzn8MY7uJ%2F6%2B80cq%2BYdyI0YXmsa7WFiqSxoLy1wvpnwTPKjrmDLO7OpcVGevpeUJP8ojjwHwmBkZ9NuN2zWUPAhUDlO%2Bh%2BJh0%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=RtB1Mf01er77YxAqxVj0K0vbss%2Fcr8aadX7ndi2G%2FWai64SJ3e3K98emJaqKejk%2Bu5rZSexMEYKmMQJ3Hvh4cBMxTHWj%2BwgsEwjEFXOx%2FtVC5Lf3L1Q%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=CHiBY6au6U2GHGt2AefDZAd6Ear2pKVLVR6WSfceo4oHhVidJccryL70KntXB4YICGpj7ZIrEua%2Fdhg2dc7a7HMsOvBfpDHbTjozZ3vyVl0K61vzzgo%3D"}]}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=4XK855%2FNSG%2FNQGHcIs6SQgFYeH5dK9ADnsoTebh26e%2FHf1mt6kiC9cgPdygzyK5%2FXjAHo9T2QWoWd%2BNB9Q%2F8itr6pDign4fPyCGWex5rTUUAQ%2F6uXNk%3D"}]}
Report-To | 1 | {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=mwLslhq8a%2FyNwxlHd1OuWSEruLqHBOWsVRMYKx3nowudWdaEiYOd5uGxMb9srcX85f4Ota48PLqZwLJcWjo%2FyoQPMxvSr%2FYBPxKlejZkzmzMBSuF%2B8XFIIvzV0mtE8X%2B5QvXpPWA4bio"}],"group":"cf-nel","max_age":604800}
Report-To | 1 | {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=gGy1M%2BA85GvWD%2BinOSJcToGrCO41XHc1o53aDXwuPobj1s6HP0Gqk54H8Lq24sNsfmybSHY4%2FSRvdnDQLYUnfdEGpKGAa5xDlyCXOiiXtI9YO%2BDgLh4%3D"}]}
Server | 193 | cloudflare
Server-Timing | 1 | cfEdge;dur=944,cfOrigin;dur=0,cfWorker;dur=340
Server-Timing | 1 | cfEdge;dur=13,cfOrigin;dur=0,cfWorker;dur=253
Server-Timing | 1 | cfEdge;dur=6,cfOrigin;dur=0,cfWorker;dur=47
Server-Timing | 1 | cfEdge;dur=12,cfOrigin;dur=0,cfWorker;dur=221
Server-Timing | 1 | cfEdge;dur=7,cfOrigin;dur=0,cfWorker;dur=63
Server-Timing | 1 | cfEdge;dur=6,cfOrigin;dur=0,cfWorker;dur=115
Server-Timing | 1 | cfEdge;dur=6,cfOrigin;dur=0,cfWorker;dur=30
Server-Timing | 1 | cfEdge;dur=7,cfOrigin;dur=0,cfWorker;dur=34
Server-Timing | 1 | cfEdge;dur=11,cfOrigin;dur=0,cfWorker;dur=341
Server-Timing | 1 | cfEdge;dur=28,cfOrigin;dur=0,cfWorker;dur=394
Server-Timing | 1 | cfEdge;dur=543,cfOrigin;dur=0,cfWorker;dur=407
Server-Timing | 1 | cfEdge;dur=5,cfOrigin;dur=0,cfWorker;dur=34
Server-Timing | 1 | cfEdge;dur=6,cfOrigin;dur=0,cfWorker;dur=34
Server-Timing | 1 | cfOrigin;dur=0,cfEdge;dur=0, cfL4;desc="?proto=TCP&rtt=20648&min_rtt=20564&rtt_var=4381&sent=10&recv=11&lost=0&retrans=0&sent_bytes=4870&recv_bytes=861&delivery_rate=196034&cwnd=57&unsent_bytes=0&cid=fce0390878684c22&ts=151&x=0"
Server-Timing | 1 | cfEdge;dur=843,cfOrigin;dur=0,cfWorker;dur=297
Server-Timing | 1 | cfEdge;dur=743,cfOrigin;dur=0,cfWorker;dur=386
Server-Timing | 1 | cfEdge;dur=12,cfOrigin;dur=0,cfWorker;dur=83
Server-Timing | 1 | cfEdge;dur=642,cfOrigin;dur=0,cfWorker;dur=366
Server-Timing | 1 | cfEdge;dur=7,cfOrigin;dur=0,cfWorker;dur=1
Server-Timing | 1 | cfEdge;dur=9,cfOrigin;dur=0,cfWorker;dur=242
Strict-Transport-Security | 174 | max-age=259200
Vary | 174 | rsc, next-router-state-tree, next-router-prefetch, next-router-segment-prefetch
Vary | 1 | Accept-Encoding
X-Content-Type-Options | 174 | nosniff
X-Frame-Options | 174 | DENY
X-Nextjs-Cache | 174 | HIT
X-Nextjs-Prerender | 174 | 1,1
X-Nextjs-Stale-Time | 174 | 300
X-Opennext | 174 | 1
X-Powered-By | 174 | Next.js

HTTP Caching by content type (only from crawlable domains)

Content type | Cache type | URLs | AVG lifetime | MIN lifetime | MAX lifetime
HTML | Cache-Control | 174 | 0 s | 0 s | 0 s
Redirect | No cache headers | 19 | - | - | -

HTTP Caching by domain

Domain | Cache type | URLs | AVG lifetime | MIN lifetime | MAX lifetime
lmstudio.ai | Cache-Control | 174 | 0 s | 0 s | 0 s
lmstudio.ai | No cache headers | 19 | - | - | -

HTTP Caching by domain and content type

Domain | Content type | Cache type | URLs | AVG lifetime | MIN lifetime | MAX lifetime
lmstudio.ai | HTML | Cache-Control | 174 | 0 s | 0 s | 0 s
lmstudio.ai | Redirect | No cache headers | 19 | - | - | -
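
The 0 s lifetimes are consistent with the header values above: the HTML pages send only s-maxage=31536000, which applies to shared caches such as CDNs, while max-age, the directive that governs browser caching, is either absent or 0. A minimal sketch of that interpretation (an assumption about how the lifetime column is computed, not the crawler's actual code):

import re

def browser_lifetime(cache_control: str) -> int:
    """Browser cache lifetime in seconds: only max-age counts;
    s-maxage targets shared caches and is ignored here."""
    m = re.search(r"(?:^|[,\s])max-age=(\d+)", cache_control)
    return int(m.group(1)) if m else 0

print(browser_lifetime("s-maxage=31536000"))                                       # 0
print(browser_lifetime("private, no-cache, no-store, max-age=0, must-revalidate")) # 0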

DNS info

DNS resolving tree
lmstudio.ai
  IPv4: 172.67.69.92
  IPv4: 104.26.7.153
  IPv4: 104.26.6.153
  IPv6: 2606:4700:20::681a:799
  IPv6: 2606:4700:20::ac43:455c
  IPv6: 2606:4700:20::681a:699
DNS server: 127.0.0.53
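
The resolution above can be reproduced with Python's standard library (the helper name resolve is hypothetical):

import socket

def resolve(host: str) -> tuple[list[str], list[str]]:
    """Collect the unique IPv4 (A) and IPv6 (AAAA) addresses for a host."""
    v4 = sorted({ai[4][0] for ai in socket.getaddrinfo(host, None, socket.AF_INET)})
    v6 = sorted({ai[4][0] for ai in socket.getaddrinfo(host, None, socket.AF_INET6)})
    return v4, v6

ipv4, ipv6 = resolve("lmstudio.ai")
print("IPv4:", ipv4)
print("IPv6:", ipv6)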

SSL/TLS info

Info | Text
Issuer | C = US, O = Google Trust Services, CN = WE1
Subject | CN = lmstudio.ai
Valid from | Mar 10 10:10:08 2026 GMT (VALID already 14.2 day(s))
Valid to | Jun  8 11:09:59 2026 GMT (VALID still for 75.9 day(s))
Supported protocols | TLSv1.2, TLSv1.3
RAW certificate output:
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            fb:cc:95:30:de:a0:22:f4:0e:c9:11:fd:61:2d:50:8e
        Signature Algorithm: ecdsa-with-SHA256
        Issuer: C = US, O = Google Trust Services, CN = WE1
        Validity
            Not Before: Mar 10 10:10:08 2026 GMT
            Not After : Jun  8 11:09:59 2026 GMT
        Subject: CN = lmstudio.ai
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:21:7c:4c:94:93:5e:a5:50:55:d0:dc:70:56:5e:
                    08:67:f5:c5:b4:03:59:4e:6f:58:31:93:fc:4a:29:
                    ee:a3:1c:a4:c3:60:27:10:6e:cd:da:c1:9a:3a:7e:
                    e1:41:02:94:6d:36:51:7e:7f:76:ac:9b:3f:27:df:
                    0b:29:81:0f:5e
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier: 
                42:20:55:76:EA:01:FE:15:3F:C6:60:4B:9D:A8:E7:FC:13:B3:8E:F3
            X509v3 Authority Key Identifier: 
                90:77:92:35:67:C4:FF:A8:CC:A9:E6:7B:D9:80:79:7B:CC:93:F9:38
            Authority Information Access: 
                OCSP - URI:http://o.pki.goog/s/we1/-8w
                CA Issuers - URI:http://i.pki.goog/we1.crt
            X509v3 Subject Alternative Name: 
                DNS:lmstudio.ai
            X509v3 Certificate Policies: 
                Policy: 2.23.140.1.2.1
            X509v3 CRL Distribution Points: 
                Full Name:
                  URI:http://c.pki.goog/we1/0rbAgG3gMgU.crl
            CT Precertificate SCTs: 
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 49:9C:9B:69:DE:1D:7C:EC:FC:36:DE:CD:87:64:A6:B8:
                                5B:AF:0A:87:80:19:D1:55:52:FB:E9:EB:29:DD:F8:C3
                    Timestamp : Mar 10 11:10:08.751 2026 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:21:00:C0:64:FD:C6:78:4E:37:63:6B:78:E3:
                                5C:5D:B3:FD:A3:EB:7A:A5:F0:A5:C3:BC:8A:BA:46:33:
                                0D:EF:34:35:BF:02:20:47:4C:38:F1:EE:08:FE:0A:B8:
                                FD:D6:57:53:D6:EA:7C:37:60:98:ED:4C:29:FF:16:82:
                                2C:8A:AF:A2:7E:69:63
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 96:97:64:BF:55:58:97:AD:F7:43:87:68:37:08:42:77:
                                E9:F0:3A:D5:F6:A4:F3:36:6E:46:A4:3F:0F:CA:A9:C6
                    Timestamp : Mar 10 11:10:08.787 2026 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:44:02:20:3F:DC:60:74:98:24:FF:2D:F7:7D:21:F4:
                                99:2F:9A:AF:F2:29:C1:45:A9:A3:3A:90:96:F0:2B:5E:
                                F6:D1:AC:59:02:20:76:6D:F8:83:41:BE:8A:9F:02:65:
                                6D:D5:A4:2A:EE:87:A1:F7:66:03:27:72:FD:57:BB:45:
                                58:EE:54:A5:16:AA
    Signature Algorithm: ecdsa-with-SHA256
    Signature Value:
        30:44:02:20:71:e0:5b:56:0f:52:c5:b1:45:78:6d:9b:65:42:
        8f:94:e3:d3:4a:3f:67:45:3a:7f:a6:ee:ff:67:80:c4:61:ba:
        02:20:5c:32:8a:3e:ea:e5:66:48:38:34:a5:9c:c9:e6:d2:fa:
        7a:90:86:c0:fc:0a:d6:9e:83:ac:32:d8:ea:84:74:d4
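
The validity window above can be confirmed over a live TLS connection with Python's standard library; ssl.cert_time_to_seconds parses the notAfter format shown in the certificate:

import socket
import ssl
import time

host = "lmstudio.ai"
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()  # parsed dict; chain already validated

# cert["notAfter"] looks like "Jun  8 11:09:59 2026 GMT"
expires = ssl.cert_time_to_seconds(cert["notAfter"])
print(f"{host} certificate valid for {(expires - time.time()) / 86400:.1f} more day(s)")
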
RAW protocols output
=== ssl2 ===
s_client: Unknown option: -ssl2
s_client: Use -help for summary.

=== ssl3 ===
s_client: Unknown option: -ssl3
s_client: Use -help for summary.

=== tls1 ===
403767E3DB720000:error:0A0000BF:SSL routines:tls_setup_handshake:no protocols available:../ssl/statem/statem_lib.c:104:
CONNECTED(00000003)
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 7 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

=== tls1_1 ===
406726E6BA700000:error:0A0000BF:SSL routines:tls_setup_handshake:no protocols available:../ssl/statem/statem_lib.c:104:
CONNECTED(00000003)
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 7 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

=== tls1_2 ===
depth=2 C = US, O = Google Trust Services LLC, CN = GTS Root R4
verify return:1
depth=1 C = US, O = Google Trust Services, CN = WE1
verify return:1
depth=0 CN = lmstudio.ai
verify return:1
CONNECTED(00000003)
---
Certificate chain
 0 s:CN = lmstudio.ai
   i:C = US, O = Google Trust Services, CN = WE1
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA256
   v:NotBefore: Mar 10 10:10:08 2026 GMT; NotAfter: Jun  8 11:09:59 2026 GMT
 1 s:C = US, O = Google Trust Services, CN = WE1
   i:C = US, O = Google Trust Services LLC, CN = GTS Root R4
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA384
   v:NotBefore: Dec 13 09:00:00 2023 GMT; NotAfter: Feb 20 14:00:00 2029 GMT
 2 s:C = US, O = Google Trust Services LLC, CN = GTS Root R4
   i:C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
   a:PKEY: id-ecPublicKey, 384 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 15 03:43:21 2023 GMT; NotAfter: Jan 28 00:00:42 2028 GMT
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDlDCCAzugAwIBAgIRAPvMlTDeoCL0DskR/WEtUI4wCgYIKoZIzj0EAwIwOzEL
MAkGA1UEBhMCVVMxHjAcBgNVBAoTFUdvb2dsZSBUcnVzdCBTZXJ2aWNlczEMMAoG
A1UEAxMDV0UxMB4XDTI2MDMxMDEwMTAwOFoXDTI2MDYwODExMDk1OVowFjEUMBIG
A1UEAxMLbG1zdHVkaW8uYWkwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQhfEyU
k16lUFXQ3HBWXghn9cW0A1lOb1gxk/xKKe6jHKTDYCcQbs3awZo6fuFBApRtNlF+
f3asmz8n3wspgQ9eo4ICQzCCAj8wDgYDVR0PAQH/BAQDAgeAMBMGA1UdJQQMMAoG
CCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFEIgVXbqAf4VP8ZgS52o
5/wTs47zMB8GA1UdIwQYMBaAFJB3kjVnxP+ozKnme9mAeXvMk/k4MF4GCCsGAQUF
BwEBBFIwUDAnBggrBgEFBQcwAYYbaHR0cDovL28ucGtpLmdvb2cvcy93ZTEvLTh3
MCUGCCsGAQUFBzAChhlodHRwOi8vaS5wa2kuZ29vZy93ZTEuY3J0MBYGA1UdEQQP
MA2CC2xtc3R1ZGlvLmFpMBMGA1UdIAQMMAowCAYGZ4EMAQIBMDYGA1UdHwQvMC0w
K6ApoCeGJWh0dHA6Ly9jLnBraS5nb29nL3dlMS8wcmJBZ0czZ01nVS5jcmwwggED
BgorBgEEAdZ5AgQCBIH0BIHxAO8AdgBJnJtp3h187Pw23s2HZKa4W68Kh4AZ0VVS
++nrKd34wwAAAZzXcKFvAAAEAwBHMEUCIQDAZP3GeE43Y2t441xds/2j63ql8KXD
vIq6RjMN7zQ1vwIgR0w48e4I/gq4/dZXU9bqfDdgmO1MKf8WgiyKr6J+aWMAdQCW
l2S/VViXrfdDh2g3CEJ36fA61fak8zZuRqQ/D8qpxgAAAZzXcKGTAAAEAwBGMEQC
ID/cYHSYJP8t930h9Jkvmq/yKcFFqaM6kJbwK1720axZAiB2bfiDQb6KnwJlbdWk
Ku6HofdmAydy/Ve7RVjuVKUWqjAKBggqhkjOPQQDAgNHADBEAiBx4FtWD1LFsUV4
bZtlQo+U49NKP2dFOn+m7v9ngMRhugIgXDKKPurlZkg4NKWcyebS+nqQhsD8Ctae
g6wy2OqEdNQ=
-----END CERTIFICATE-----
subject=CN = lmstudio.ai
issuer=C = US, O = Google Trust Services, CN = WE1
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2960 bytes and written 293 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-ECDSA-CHACHA20-POLY1305
Server public key is 256 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-ECDSA-CHACHA20-POLY1305
    Session-ID: 27362A934F2E54A3BD13EA9D6A5D260AD891F0316F533AD8BB3E669ED6FE7C42
    Session-ID-ctx: 
    Master-Key: 031B025145A0124950414C83500C575FBBCD51FC2E7772332B702391508ABED2C6384046651073E95268625B1899B40D
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 64800 (seconds)
    TLS session ticket:
    0000 - 9f ed 9e a2 1b fe 30 fd-36 4c 99 ac f4 5a 39 67   ......0.6L...Z9g
    0010 - e6 22 ed f5 e8 bf 28 10-37 98 6b b8 22 05 cb 9f   ."....(.7.k."...
    0020 - bc 78 1f 9a a4 2b 12 8a-07 7f b2 3c 99 03 5b 25   .x...+.....<..[%
    0030 - 07 c5 f2 19 8a 4d e2 0c-a6 19 cc 26 97 12 6a ad   .....M.....&..j.
    0040 - 80 be 28 0d 38 61 8f 56-27 de f1 eb 52 74 ba 8e   ..(.8a.V'...Rt..
    0050 - 84 54 e5 60 3c d2 96 d2-e1 f3 38 a7 54 18 38 d7   .T.`<.....8.T.8.
    0060 - dd 10 17 6f c2 6f c7 1c-10 87 4e ac c8 df 32 4a   ...o.o....N...2J
    0070 - 91 16 c5 9c 3e 10 32 68-ed af f8 fe a7 76 a1 0d   ....>.2h.....v..
    0080 - 86 3a e9 a9 23 9b 76 78-57 c3 04 a9 8f ce be de   .:..#.vxW.......
    0090 - 34 03 e9 06 6b ec 39 a6-60 4c 0e 2e 56 65 ee 02   4...k.9.`L..Ve..
    00a0 - 26 08 f8 ab 41 61 87 29-cc 62 e6 61 37 a2 bb c3   &...Aa.).b.a7...
    00b0 - 6d c1 01 b8 5f 6d 71 c0-97 81 92 9b 26 ce 67 7f   m..._mq.....&.g.

    Start Time: 1774360287
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: yes
---
DONE

=== tls1_3 ===
depth=2 C = US, O = Google Trust Services LLC, CN = GTS Root R4
verify return:1
depth=1 C = US, O = Google Trust Services, CN = WE1
verify return:1
depth=0 CN = lmstudio.ai
verify return:1
CONNECTED(00000003)
---
Certificate chain
 0 s:CN = lmstudio.ai
   i:C = US, O = Google Trust Services, CN = WE1
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA256
   v:NotBefore: Mar 10 10:10:08 2026 GMT; NotAfter: Jun  8 11:09:59 2026 GMT
 1 s:C = US, O = Google Trust Services, CN = WE1
   i:C = US, O = Google Trust Services LLC, CN = GTS Root R4
   a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA384
   v:NotBefore: Dec 13 09:00:00 2023 GMT; NotAfter: Feb 20 14:00:00 2029 GMT
 2 s:C = US, O = Google Trust Services LLC, CN = GTS Root R4
   i:C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
   a:PKEY: id-ecPublicKey, 384 (bit); sigalg: RSA-SHA256
   v:NotBefore: Nov 15 03:43:21 2023 GMT; NotAfter: Jan 28 00:00:42 2028 GMT
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDlDCCAzugAwIBAgIRAPvMlTDeoCL0DskR/WEtUI4wCgYIKoZIzj0EAwIwOzEL
MAkGA1UEBhMCVVMxHjAcBgNVBAoTFUdvb2dsZSBUcnVzdCBTZXJ2aWNlczEMMAoG
A1UEAxMDV0UxMB4XDTI2MDMxMDEwMTAwOFoXDTI2MDYwODExMDk1OVowFjEUMBIG
A1UEAxMLbG1zdHVkaW8uYWkwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQhfEyU
k16lUFXQ3HBWXghn9cW0A1lOb1gxk/xKKe6jHKTDYCcQbs3awZo6fuFBApRtNlF+
f3asmz8n3wspgQ9eo4ICQzCCAj8wDgYDVR0PAQH/BAQDAgeAMBMGA1UdJQQMMAoG
CCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFEIgVXbqAf4VP8ZgS52o
5/wTs47zMB8GA1UdIwQYMBaAFJB3kjVnxP+ozKnme9mAeXvMk/k4MF4GCCsGAQUF
BwEBBFIwUDAnBggrBgEFBQcwAYYbaHR0cDovL28ucGtpLmdvb2cvcy93ZTEvLTh3
MCUGCCsGAQUFBzAChhlodHRwOi8vaS5wa2kuZ29vZy93ZTEuY3J0MBYGA1UdEQQP
MA2CC2xtc3R1ZGlvLmFpMBMGA1UdIAQMMAowCAYGZ4EMAQIBMDYGA1UdHwQvMC0w
K6ApoCeGJWh0dHA6Ly9jLnBraS5nb29nL3dlMS8wcmJBZ0czZ01nVS5jcmwwggED
BgorBgEEAdZ5AgQCBIH0BIHxAO8AdgBJnJtp3h187Pw23s2HZKa4W68Kh4AZ0VVS
++nrKd34wwAAAZzXcKFvAAAEAwBHMEUCIQDAZP3GeE43Y2t441xds/2j63ql8KXD
vIq6RjMN7zQ1vwIgR0w48e4I/gq4/dZXU9bqfDdgmO1MKf8WgiyKr6J+aWMAdQCW
l2S/VViXrfdDh2g3CEJ36fA61fak8zZuRqQ/D8qpxgAAAZzXcKGTAAAEAwBGMEQC
ID/cYHSYJP8t930h9Jkvmq/yKcFFqaM6kJbwK1720axZAiB2bfiDQb6KnwJlbdWk
Ku6HofdmAydy/Ve7RVjuVKUWqjAKBggqhkjOPQQDAgNHADBEAiBx4FtWD1LFsUV4
bZtlQo+U49NKP2dFOn+m7v9ngMRhugIgXDKKPurlZkg4NKWcyebS+nqQhsD8Ctae
g6wy2OqEdNQ=
-----END CERTIFICATE-----
subject=CN = lmstudio.ai
issuer=C = US, O = Google Trust Services, CN = WE1
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2807 bytes and written 325 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
DONE
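
For a reproducible version of the same per-version probe, here is a minimal sketch using Python's standard ssl module (an illustration, not part of the crawler; hostname and port are the only inputs, and results still depend on the local OpenSSL build, exactly as with s_client):

    # Sketch: probe one TLS version per connection against lmstudio.ai.
    # A local OpenSSL with TLS 1.0/1.1 disabled fails those probes
    # client-side, just like the s_client runs above.
    import socket
    import ssl

    HOST, PORT = "lmstudio.ai", 443

    for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                    ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
        try:
            ctx = ssl.create_default_context()
            ctx.minimum_version = version  # pin both bounds to a single version
            ctx.maximum_version = version
            with socket.create_connection((HOST, PORT), timeout=5) as raw:
                with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
                    print(f"{version.name}: negotiated {tls.version()}, "
                          f"cipher {tls.cipher()[0]}")
        except (ValueError, OSError) as exc:  # ssl.SSLError subclasses OSError
            print(f"{version.name}: failed ({exc.__class__.__name__})")

Based on the output above, the expected result is a failure for TLSv1 and TLSv1_1 and successful handshakes reporting TLSv1.2 (ECDHE-ECDSA-CHACHA20-POLY1305) and TLSv1.3 (TLS_AES_256_GCM_SHA384).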

Crawler stats

Basic stats
Total execution time: 23 s
Total URLs: 193
Total size: 110 MB
Requests - total time: 46 s
Requests - avg time: 243 ms
Requests - min time: 28 ms
Requests - max time: 1.4 s
Requests by status:
  200: 156
  307: 18
  308: 1
  404: 18
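
As a quick arithmetic cross-check on these figures (assuming "avg time" is total request time divided by total URLs; small rounding differences are expected):

    # Back-of-the-envelope check of the basic stats above.
    total_urls = 156 + 18 + 1 + 18  # per-status counts sum to 193, matching "Total URLs"
    avg_ms = 46 / 193 * 1000        # ~238 ms, close to the reported 243 ms average
    concurrency = 46 / 23           # ~2.0 requests in flight on average (5 workers configured)
    print(total_urls, round(avg_ms), round(concurrency, 1))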

Analysis stats

Found 21 row(s).
Class::method                                           Exec time 🔽   Exec count
BestPracticeAnalyzer::checkHeadingStructure             898 ms         174
BestPracticeAnalyzer::checkNonClickablePhoneNumbers     842 ms         174
AccessibilityAnalyzer::checkMissingLabels               747 ms         156
AccessibilityAnalyzer::checkMissingAriaLabels           724 ms         156
SslTlsAnalyzer::getTLSandSSLCertificateInfo             668 ms         1
AccessibilityAnalyzer::checkMissingRoles                593 ms         156
AccessibilityAnalyzer::checkMissingLang                 500 ms         156
BestPracticeAnalyzer::checkMaxDOMDepth                  438 ms         174
BestPracticeAnalyzer::checkInlineSvg                    93 ms          174
BestPracticeAnalyzer::checkMissingQuotesOnAttributes    43 ms          174
SeoAndOpenGraphAnalyzer::analyzeHeadings                18 ms          1
AccessibilityAnalyzer::checkImageAltAttributes          18 ms          156
SecurityAnalyzer::checkHtmlSecurity                     14 ms          174
SecurityAnalyzer::checkHeaders                          4 ms           174
SeoAndOpenGraphAnalyzer::analyzeSeo                     0 ms           1
SeoAndOpenGraphAnalyzer::analyzeOpenGraph               0 ms           1
BestPracticeAnalyzer::checkTitleUniqueness              0 ms           1
BestPracticeAnalyzer::checkMetaDescriptionUniqueness    0 ms           1
BestPracticeAnalyzer::checkBrotliSupport                0 ms           1
BestPracticeAnalyzer::checkWebpSupport                  0 ms           1
BestPracticeAnalyzer::checkAvifSupport                  0 ms           1

Content processor stats

Found 12 row(s).
Class::method                                              Exec time 🔽   Exec count
NextJsProcessor::applyContentChangesBeforeUrlParsing       691 ms         174
JavaScriptProcessor::findUrls                              598 ms         174
HtmlProcessor::findUrls                                    305 ms         193
CssProcessor::findUrls                                     26 ms          174
AstroProcessor::findUrls                                   14 ms          174
AstroProcessor::applyContentChangesBeforeUrlParsing        0 ms           174
NextJsProcessor::findUrls                                  0 ms           174
JavaScriptProcessor::applyContentChangesBeforeUrlParsing   0 ms           174
HtmlProcessor::applyContentChangesBeforeUrlParsing         0 ms           193
SvelteProcessor::applyContentChangesBeforeUrlParsing       0 ms           174
SvelteProcessor::findUrls                                  0 ms           174
CssProcessor::applyContentChangesBeforeUrlParsing          0 ms           174

Crawler info

Version: 2.1.0.20260317
Executed at: 2026-03-24 13:51:05
Command:
  siteone-crawler \
    --url=https://lmstudio.ai/docs \
    --markdown-export-dir=/tmp/siteone-lm_studio \
    --markdown-exclude-selector=header,footer,nav,.sidebar,.menu,.breadcrumb,script,style \
    --timeout=30 \
    --workers=5 \
    --disable-javascript \
    --disable-styles \
    --disable-fonts \
    --disable-images \
    --disable-files \
    --no-color \
    --hide-progress-bar \
    --output=text \
    --include-regex=/docs/
Hostname: ubuntu-8gb-hel1-1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/26.0.0.0 Safari/537.36 siteone-crawler/2.1.0.20260317