I unboxed a Windows 11 mini PC, installed Ubuntu Budgie, and in less than an afternoon turned the handsome, pocket-sized Geekom IT15 into a fast, dependable Linux workstation. The change proved more than cosmetic: it materially improved daily responsiveness, fixed some Windows-only connectivity quirks, and exposed the realistic limits of running local AI without a discrete GPU.

'Geekom IT15 on Linux: fast Ubuntu Budgie workstation, AI limits explained'

Background / Overview

The Geekom IT15 is one of the most compact high‑performance mini PCs available: roughly 4.5 x 4.5 x 2 inches, a metal chassis with VESA mounting, and a modern I/O stack built around Intel’s Arrow Lake Series 2 hardware. Retail configurations commonly include the Intel Core Ultra 9‑285H, an Intel Arc 140T integrated GPU, dual‑channel DDR5 memory (typical builds ship with 32 GB), and a 2 TB PCIe Gen4 NVMe SSD — a rare combination of small size and high throughput in a consumer mini PC. Reviews and hands‑on writeups make the same headline claims: excellent desktop responsiveness for everyday work, strong build quality, and sensible expansion for a tiny chassis. (neowin.net, liliputing.com)
On paper, the platform also carries explicit AI marketing: Intel calls the chip’s NPU subsystem “Intel AI Boost” with an NPU peak of 13 TOPS (INT8) and advertises combined CPU+GPU+NPU TOPS figures when describing AI capabilities. Those numbers are real as metrics, but they don’t automatically translate into fast local LLM inference — more on that later. The Intel spec sheet confirms the NPU figure and the chip’s core counts and clocks. (intel.com)
Why this matters: the IT15 sits at the intersection of two currents modern buyers care about — compact form factor and AI‑aware marketing. What it demonstrates in practice is how the operating system and workload mix determine whether that hardware is a true daily driver or a boxed promise.

What the ZDNET tester actually did — and what they found

ZDNET’s hands‑on review describes receiving the IT15 with Windows 11 preinstalled, doing the usual unboxing and setup, then deliberately dual‑booting Ubuntu Budgie instead of wiping the Windows install. The reviewer treated the IT15 as a primary desktop, installed local LLM tooling (Ollama and the Msty wrapper), and used a small model (gemma3:1b) to probe local AI behavior. The headline result: everyday desktop tasks (browsing, office apps, software installs, animations) ran better and felt snappier under Linux, while local LLM inference remained usable but noticeably slower because the machine relies on the CPU and the integrated Intel GPU rather than an Nvidia CUDA GPU.
Those findings are consistent with other early IT15 reviews: the CPU is capable and desktop snappiness is excellent, but AI inference performance without an NVIDIA GPU is a pragmatic limit for anyone expecting near‑instant token generation on mid‑to‑large models. (neowin.net, liliputing.com)

Hardware deep dive: what’s actually inside the IT15

  • CPU: Intel Core Ultra 9‑285H — a 16‑core Arrow Lake Series 2 part with mixed performance and efficiency cores and a max turbo frequency up to ~5.4 GHz depending on configuration. The chip supports Intel AI Boost and an NPU with a 13 TOPS (INT8) peak for on‑chip acceleration. (intel.com)
  • GPU: Intel Arc 140T integrated Xe graphics. This is not a discrete Nvidia GPU — it’s Intel’s modern Xe architecture integrated GPU that supports features like XeSS and limited ray tracing. Integrated Arc graphics help with UI acceleration, light GPU tasks, and some gaming at modest settings, but they are not a substitute for high‑VRAM NVIDIA cards for large model inference. (liliputing.com)
  • Memory & Storage: dual SODIMM DDR5 (typical retail builds: 32 GB at DDR5‑5600; board supports higher capacities), and a single M.2 2280 PCIe Gen4 x4 slot populated with a 2 TB NVMe drive in the reviewed unit. (liliputing.com, neowin.net)
  • Connectivity: USB4 / USB‑C ports, two HDMI outputs, front USB‑A ports, SD card slot, 2.5 GbE Ethernet, and Intel M.2 Wi‑Fi 7 modules in many SKUs. The presence of USB4 gives options for docking and external peripherals, though eGPU support depends on vendor wiring and firmware. (neowin.net, itpro.com)
  • Cooling & chassis: metal inner frame, PC+ABS outer case, small active cooling fan. Reviewers praise build quality but note the fan can become audible under sustained heavy loads. Some early reviews highlight thermal throttling potential when a high‑power H‑series CPU is squeezed into a tiny chassis. (technetbooks.com, neowin.net)

What Linux changed — practical differences you’ll notice

The ZDNET reviewer’s day‑to‑day Linux experience broke down into repeatable wins and a few caveats:
  • Faster feel: reduced background services and a lighter desktop (Ubuntu Budgie in the review) produced faster app launches and smoother UI responsiveness compared with the stock Windows 11 setup the machine shipped with.
  • Fixes for connectivity/display quirks: the reviewer reports tangible improvements in Wi‑Fi reliability and resolution of USB‑C display flicker after switching to Linux — a pattern echoed in community reports where vendor Windows drivers lag or misbehave while the corresponding Linux kernel drivers are mature. Those gains aren’t universal — your mileage will vary by Wi‑Fi card revision, firmware, and monitor — but the IT15 example demonstrates that a platform swap can sometimes rescue a flaky out‑of‑the‑box Windows experience.
  • Audio and peripheral caveats: the reviewer encountered occasional audio issues that required manual tweaks (reloading ALSA modules, toggling audio servers). These are typical of early‑adopter setups and generally solvable with community troubleshooting.
  • Everything “just worked” otherwise: driver support for the core CPU, Arc iGPU, and mainstream peripherals was good enough to treat the IT15 as a daily Linux driver immediately after install. Independent reviews also confirm broad out‑of‑the‑box Linux compatibility for Arrow Lake hardware in modern kernels. (liliputing.com, itpro.com)

Local LLMs on the IT15: what the tests reveal, and why the GPU matters

The IT15’s local LLM test is emblematic of the current local‑LLM landscape: small models run on CPU, but GPUs radically change latency and throughput.
  • ZDNET ran Ollama with gemma3:1b on the IT15 and found the CPU‑only output slower than what you’d expect with an NVIDIA GPU — output was detailed and usable, but not instant. This is exactly what should be expected when inference is CPU bound and the NPU/GPU combination does not have the memory bandwidth or framework support that a CUDA‑accelerated stack provides.
  • Benchmarks and community testing demonstrate the scale of the difference: for many LLMs, a midrange NVIDIA GPU (e.g., RTX 3060/3070 class) can yield multi‑fold improvements in token/s generation versus CPU only — loading times drop from minutes to seconds and generation latency falls from many seconds to sub‑second ranges for small models. Community benchmarks and in‑depth writeups consistently show GPUs are purpose‑built for the parallel matrix math underpinning modern LLMs. (dev.to, arsturn.com)
  • Practical takeaway: the IT15 can run lightweight models and smaller quantized LLMs effectively on CPU (or on the iGPU for some tasks), but it’s not a substitute for a discrete CUDA GPU when you need fast, low‑latency inference with mid‑to‑large LLMs. Use cases like local prototyping, small chatbots, or low‑volume scripting prompts are realistic; conversational production workloads or rapid token generation are not.
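Back-of-envelope arithmetic helps explain those limits: a model’s quantized weights must fit in RAM, which on a 32 GB machine already caps which models are even candidates. A minimal sketch — the ~4.25 bits-per-weight figure (typical of 4-bit quantization schemes) and the fixed runtime overhead are rough assumptions, not measured values:

```python
def model_footprint_gb(params_billions: float, bits_per_weight: float,
                       overhead_gb: float = 1.0) -> float:
    """Approximate RAM to hold quantized weights plus runtime overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 + overhead_gb

# A 1B model at ~4.25 bits/weight fits trivially in 32 GB of system RAM;
# a 70B model at the same quantization does not fit at all.
for params, label in [(1, "1B"), (8, "8B"), (70, "70B")]:
    print(f"{label}: ~{model_footprint_gb(params, 4.25):.1f} GB")
```

By this estimate a 70B 4-bit model needs roughly 38 GB — beyond the reviewed unit’s 32 GB — which is why the realistic local menu on the IT15 is small and mid-size quantized models.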

Decoding the TOPS marketing: what “13 TOPS” actually means

Intel’s “13 TOPS” NPU figure is an honest hardware spec — it reflects theoretical integer operations per second capacity for the dedicated neural accelerator slice on the chip. But it is:
  • A peak metric measured in a laboratory profile, not an end‑to‑end model performance guarantee.
  • Not the same as usable LLM throughput under real frameworks and model runtimes, because model inference performance depends on memory bandwidth, model quantization, framework support (Vulkan, OpenVINO, DirectML, Triton, etc.), and how well software maps computation onto the NPU tile.
Multiple reviewers caution that combining CPU+GPU+NPU theoretical TOPS to advertise “99 TOPS total” is a marketing shorthand that mixes heterogeneous resources; it does not mean a single LLM will run at the combined rate. Treat TOPS as a comparative capability metric, not a promise of model latency or throughput. (intel.com, liliputing.com)
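A rough roofline calculation makes the point concrete: CPU token generation is typically memory-bandwidth bound, because each generated token streams the full weight set from RAM, so the decode ceiling is set by DRAM throughput rather than by any TOPS figure. The sketch below assumes nominal DDR5-5600 dual-channel bandwidth and ballpark quantized model sizes; it ignores compute overhead and cache effects:

```python
def peak_bandwidth_gbs(mts: int, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical DRAM bandwidth in GB/s: MT/s x bus width x channel count."""
    return mts * bus_bytes * channels / 1e3

def tokens_per_sec_ceiling(model_gb: float, bandwidth_gbs: float) -> float:
    """Upper bound on decode speed: one full pass over the weights per token."""
    return bandwidth_gbs / model_gb

bw = peak_bandwidth_gbs(5600, channels=2)  # DDR5-5600, dual channel: 89.6 GB/s
print(f"bandwidth ceiling: {bw:.1f} GB/s")
for gb, label in [(0.53, "1B Q4"), (4.25, "8B Q4"), (37.2, "70B Q4")]:
    print(f"{label}: <= {tokens_per_sec_ceiling(gb, bw):.0f} tok/s")
```

Real-world numbers land well below these ceilings, but the ordering — small models fast, large models crawling on CPU — matches what reviewers observe, and note that no TOPS value appears anywhere in the formula.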

Thermals, noise, and sustained workloads

Packing a high‑power H‑series mobile CPU into a tiny enclosure is a design trade‑off. Independent tests and early user reports highlight two practical considerations:
  • Under sustained, heavy CPU loads (long‑running compiles, multi‑threaded encoding, or prolonged LLM inference), the IT15’s single fan can become audible and the CPU may thermally throttle to keep temperatures in check. This affects consistent peak performance. (technetbooks.com, neowin.net)
  • For occasional bursts and normal desktop usage the machine is fine; for long, sustained high‑TDP workloads a larger chassis or more capable cooling solution will typically sustain higher throughput.
If you’re planning heavy local model inference, consider either a machine with a larger thermal envelope or a proper discrete GPU workstation (or cloud inference).

Who should buy a Geekom IT15 — recommended workloads

  • Ideal for:
      • Productivity desktops with ultra‑small footprints.
      • Developers who want a compact Linux workstation capable of building, testing, and running local dev servers.
      • Creators who need a small, VESA‑mounted machine for edit/playback and general content work (not heavy 3D rendering).
      • Users who want to experiment with local LLMs on small models and value privacy/latency for light tasks.
  • Not ideal for:
      • Users who expect fast local inference on mid‑to‑large LLMs without using cloud services or external GPUs.
      • Heavy gaming at high resolution and frame rates (Arc 140T is capable but limited).
      • Sustained, high‑TDP server workloads that require prolonged full‑power operation.
The price‑to‑performance ratio looks attractive for a mini PC targeted at creators and power users who prioritize space efficiency over raw GPU horsepower. Independent coverage and the ZDNET hands‑on both conclude the IT15 overdelivers for day‑to‑day desktop use while being honest about AI limits. (neowin.net, itpro.com)

Practical recommendations: how to get the most from an IT15 as a Linux workstation

  • Back up Windows (if present) and create recovery media before altering partitions.
  • Use a recent Linux distribution (Ubuntu 24.04 or a current Arch/Fedora build) with a modern kernel; Arrow Lake and Arc drivers benefit from up‑to‑date kernels. (liliputing.com)
  • Install in dual‑boot if you want to compare Windows behavior or preserve vendor firmware tooling.
  • If you plan to run local LLMs:
      • Pick small, quantized models that fit into main memory or your targeted acceleration stack.
      • Use the latest runtimes (OpenVINO, DirectML, Vulkan backends or Ollama’s GPU support) and tune context/window sizes for sensible tradeoffs between memory and latency. Community guides show big performance wins from context length tuning. (dev.to, arsturn.com)
  • Monitor thermals and limit sustained heavy CPU loads; consider elevating the machine for better airflow or using a small, quiet USB fan for long sessions.
  • If local LLM throughput matters, plan for a hybrid approach: local for small tasks, cloud or a discrete GPU server for heavy inference.
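To see why context tuning pays off, note that the KV cache grows linearly with context length, eating into the same RAM budget the weights need. A minimal sizing sketch — the layer and head counts below are illustrative of a mid-size model, not tied to any specific one:

```python
def kv_cache_gb(context_len: int, layers: int, kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """Memory for keys + values across all layers, fp16 (2 bytes/element)."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
    return context_len * per_token / 1e9

# Illustrative: 32 layers, 8 KV heads, head dimension 128.
for ctx in (2048, 8192, 32768):
    print(f"context {ctx:>6}: ~{kv_cache_gb(ctx, 32, 8, 128):.2f} GB")
```

Quadrupling the context quadruples the cache, so trimming an oversized default window is often the cheapest latency and memory win on a RAM-constrained box like this one.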

Risks and limitations — what to watch out for

  • Driver and firmware quirks: while Linux worked well on the ZDNET tester’s unit, Wi‑Fi, USB‑C display, and audio exhibited issues that required manual fixes in some cases. These are solvable but require time and comfort with Linux troubleshooting.
  • Thermal throttling under continuous full‑power loads can reduce long‑term performance; the chassis is small and cooling is constrained. (technetbooks.com)
  • AI expectations vs. reality: marketing TOPS figures can create unrealistic expectations for local LLM performance. The IT15’s 13 TOPS NPU helps some workloads but is not a substitute for the memory bandwidth and CUDA ecosystem of NVIDIA GPUs when running larger language models. Cross‑reference the Intel spec page and independent reviews before assuming a specific LLM will run at a given latency. (intel.com, liliputing.com)
  • eGPU and expansion caveats: while USB4 gives options for docking, eGPU support varies by vendor firmware and may not be a drop‑in solution for high‑performance GPU offload.

Final analysis — an honest verdict

The Geekom IT15 is, in the context of small form‑factor PCs, one of the best practical compromises available today: great desktop responsiveness, excellent connectivity for its size, and a build that makes it easy to convert into a Linux workstation. The ZDNET hands‑on shows that converting it to Ubuntu Budgie yields noticeable, practical improvements in daily use — faster app launches, fewer flaky Windows driver headaches, and a genuinely capable small desktop experience.
At the same time, the platform’s marketing about AI should be read with nuance. Intel’s 13 TOPS NPU is a real capability and the Arc integrated GPU is a meaningful upgrade over older integrated graphics, but neither replace a discrete, CUDA‑enabled NVIDIA GPU when it comes to high‑throughput, low‑latency LLM inference. If your primary requirement is rapid local inference of mid‑sized or large models, buy with that constraint in mind and plan a hybrid approach (cloud for heavy inference, local for development and privacy‑sensitive small tasks). (intel.com, dev.to)
For Windows users curious about Linux: the IT15 is a compelling conversion candidate. The hardware is modern enough that Linux support is good out of the box, the performance gains for general desktop work are noticeable, and the small chassis plus VESA mount make it an excellent option for uncluttered desks or multi‑monitor setups. If you need to run heavy AI inference locally, pair the IT15 with a dedicated GPU host or choose a different chassis designed for discrete graphics. (neowin.net, itpro.com)

The Geekom IT15 proves a broader point about modern PCs: operating system and workload matter as much as raw silicon. In the right hands and with the right expectations, a tiny Intel‑powered mini PC can be a fast, quiet, and efficient Linux workstation — but the current realities of local LLM workloads mean you should plan architecture around the tasks you actually need to run, not just the peak TOPS on a spec sheet.

Source: ZDNET I converted this Windows 11 mini PC into a Linux workstation - and didn't regret it
 
