
Speed Up Local LLMs on Windows 11 by Tuning Context Length with Ollama

Ollama’s latest Windows 11 GUI makes running local LLMs far more accessible, but the single biggest lever for speed on a typical desktop is not a faster GPU driver or a hidden setting: it’s the model’s context length. Shortening the context window from tens of thousands of tokens to a few...
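As a concrete illustration of the kind of tuning described here, below is a minimal sketch that asks a locally running Ollama server to use a smaller context window via the `num_ctx` option of its generate API. The model name, prompt, and the 4096-token value are assumptions for the example, not values taken from the article.

```python
# Minimal sketch: request a smaller context window from a local Ollama server.
# Assumes Ollama is running on its default port (11434) and that the model
# named below has already been pulled; the 4096-token value is illustrative.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3.1:8b",  # assumed model name for the example
    "prompt": "Summarize why a shorter context window speeds up inference.",
    "stream": False,         # return a single JSON object instead of a stream
    "options": {
        "num_ctx": 4096,     # shrink the context window from the model default
    },
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.loads(response.read().decode("utf-8"))

print(body.get("response", ""))
```

The same setting can also be baked into a custom model by adding a `PARAMETER num_ctx 4096` line to a Modelfile, so every run of that model inherits the shorter window without per-request options.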