Intel NUC 9 Extreme “Ghost Canyon” Kit – NUC9i9QNX Review

NUC9i9QNX Review

I’ve previously written about Intel’s (relatively) new NUC 9 range of mini PCs and I am now following up with my experiences after buying one.

I’ll cover performance metrics from both Windows and Ubuntu, discuss the benefits and drawbacks of each OS, and compare gaming, thermals, and power usage. I’ll also take a brief look at the overclocking potential and its implications, and highlight the issues I encountered along the way.

NUC9i9QNX Hardware Overview

The model I purchased and will be reviewing here is the NUC9i9QNX from Intel’s Ghost Canyon lineup, formally known as the Intel NUC 9 Extreme Kit – NUC9i9QNX. It contains a “Compute Element” with an Intel Core i9-9980HK, an eight-core, 16-thread 2.40 GHz processor boosting to 5.00 GHz, with Intel UHD Graphics 630. The full specifications of the NUC9i9QNX include:

NUC9i9QNX specifications

The NUC9i9QNX is sold as a kit, which essentially means barebones: it consists of a case containing a baseboard, a power supply, and the Compute Element pre-installed:

Intel NUC9 Kit

Removing the top and one side panel of the NUC exposes the Compute Element, whose front panel can then be removed to access the M.2 and memory slots:

compute element

I wanted to use the mini PC as a dual-boot device with Ubuntu for development and running multiple VMs as well as Windows for WSL and occasional gaming.

So I purchased a pair of 32GB Team Elite 260-pin SODIMM DDR4-3200 laptop memory modules (64GB in total). I also purchased an XPG SX8200 Pro 2TB NVMe drive and decided to reuse an existing Samsung 970 EVO 1TB NVMe drive for the two M.2 2280 slots in the Compute Element.

There is a third M.2 slot, but accessing it requires removing the Compute Element first:

Intel NUC9 M.2 Slot

I also purchased an EVGA GeForce RTX 2060 KO ULTRA GAMING graphics card and installed it in the blue PCIe slot:

EVGA GeForce RTX 2060 KO-ULTRA GAMING graphics card

as I wanted to see how ray tracing improved the visual experience. I was also limited in my choice of graphics cards by the maximum length supported by the NUC9i9QNX (202 mm, or approximately 8 inches).

In terms of accessibility, installing the M.2 drives and memory was relatively straightforward, albeit a little fiddly given the restricted space. The graphics card, however, is a tight fit. The hardest part is connecting the power cable to the card and routing the cables around the card into the small space available for cable management. I recommend first removing the power and USB cables from the top of the Compute Element, which gives you more room to work and allows the unused PCIe 6+2 pin power cable to be bent backwards and downwards parallel to itself:

cable management

Packaging Contents

Besides the NUC, you also get a large hard-plastic carrying case, complete with a strap and metal latches, which comes enclosed in a “soft-to-touch” wrap-around cardboard box with a magnetic catch. Hidden in a compartment inside the carrying case is a power cable, and the outer box includes a quick start guide, a regulatory information sheet, and a safety information sheet. What you don’t get is the blacklight torch that all the reviewers talked about.


Review Methodology

When reviewing mini PCs, I typically look at their performance under both Windows and Linux (Ubuntu) and compare them against some of the more recently released mini PCs. I am now reviewing using Windows 10 version 2004 and Ubuntu 20.04 LTS, testing with a selection of commonly used Windows benchmarks and/or their Linux equivalents, together with Thomas Kaiser’s “sbc-bench”, a small set of CPU performance tests focusing on server performance, run on Ubuntu. I also use the “Phoronix Test Suite” and now benchmark with the same set of tests on both Windows and Ubuntu for comparison purposes. On Ubuntu, I also compile the v5.4 Linux kernel using the default config as a real-world performance test.
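For reference, the kernel compile test boils down to timing a default-config build across all cores, along these lines (a sketch; I’m assuming “default config” means defconfig, and the package list is just the usual Ubuntu build prerequisites):

# Install build prerequisites, fetch the v5.4 source and time a defconfig build on all cores
sudo apt install -y build-essential flex bison bc libssl-dev libelf-dev
wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.4.tar.xz
tar xf linux-5.4.tar.xz && cd linux-5.4
make defconfig
time make -j"$(nproc)"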

Prior to benchmarking, I perform all necessary updates and/or installations to run the latest versions of both OSes. I also capture some basic details of the mini PCs under review for each OS.

Installation Issues

I first installed Windows on the XPG SX8200 Pro 2TB NVMe M.2 drive. In my particular case, because the Samsung 970 EVO 1TB NVMe M.2 drive already contained an Ubuntu 18.04 installation and, more importantly, an EFI partition, the Windows installer simply reused that EFI partition for the Windows boot files. Installing Windows was straightforward, although I did notice that Cortana didn’t speak, probably because the necessary drivers had not yet been installed. Running Intel’s “Driver & Support Assistant” updated the Ethernet and wireless drivers after all other updates had been applied:

NUC9i9QNX Intel Drivers

Installing the Windows benchmarks went smoothly with the exception of the “Selenium” test from the “Phoronix Test Suite”. When running the test with “Chrome” selected, it errors with “cannot find Chrome binary”:

Phoronix Fail Chrome Binary

As a result, the Octane tests were run manually and edited into the final results.

Prior to installing Ubuntu 20.04, I shrank the Windows partition on the XPG SX8200 Pro 2TB NVMe M.2 drive by half, leaving 1TB for Windows and 1TB for Ubuntu. The Ubuntu installation required booting the “liveUSB” in “safe mode”, otherwise it simply froze with a black screen and a couple of multicolored lines at the top. Unfortunately, once successfully booted, the screen was restricted to an 800×600 resolution with very large text. After starting the installation, moving the window up the screen as far as possible lets you just see the top of the selection boxes at the bottom of the screen. Each screen’s “Continue” box is the one on the bottom right and can just be clicked to proceed.

It is easiest to select “Install third-party software for graphics” to add the additional Nvidia 440 drivers required by the RTX 2060 graphics card and to configure secure boot, then “Enroll MOK” on reboot, noting that you will now get a “Booting in insecure mode” message as GRUB loads. Again, the Samsung 970 EVO 1TB NVMe M.2 drive’s EFI partition was reused, this time for the Ubuntu boot configuration.

The key issue with Ubuntu 20.04, however (and one not found with Ubuntu 18.04), was that booting hung and “dmesg” filled up with errors. I found I needed to add “snd_hda_intel.dmic_detect=0” as a kernel parameter in order to boot, so I added this to “/etc/default/grub”:

Intel NUC9 Ubuntu Grub Configuration
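For anyone hitting the same hang, the change amounts to appending the parameter to the default kernel command line in “/etc/default/grub” and regenerating the GRUB configuration, for example (assuming Ubuntu’s stock “quiet splash” defaults):

# /etc/default/grub - append the workaround to the existing default command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash snd_hda_intel.dmic_detect=0"

# then regenerate the GRUB configuration and reboot
sudo update-grub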

Windows Performance

I installed Windows 10 Pro version 2004, activated it with a genuine license key, and updated it to OS build 19041.450. A quick look at the hardware information shows:

I then ran my (2020) standard set of benchmarking tools to look at performance under Windows:

This highlighted my first issue with the NUC9i9QNX, namely the NVMe M.2 SSD performance, and at this point I need to pause the Windows performance discussion to expand on it.

Because the XPG M.2 drive’s performance was slower in the NUC9i9QNX than when I had first tested it in a different PC immediately after purchase, I re-tested the drive in another PC just to reconfirm it still performed to its specification:

xpg-in-another-pc

I then reinstalled it in the NUC9i9QNX, added an Ubuntu partition to it, and tested the drive using “fio”. I got similarly poor results. I then removed the drive, put it in my NUC7i7DNHE, and tested it again under both Windows and Ubuntu:

xpg ssd comparison in windows and ubuntu
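The exact “fio” job I ran isn’t shown above, but a sequential read test of the kind being compared looks roughly like this (a sketch with illustrative parameters rather than my actual job file; the device name is an assumption):

# Hypothetical large sequential read test against the NVMe drive (read-only, so non-destructive)
sudo fio --name=seqread --filename=/dev/nvme0n1 --readonly \
    --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
# for write tests, point --filename at a file on a mounted filesystem instead of the raw device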

Finally, to establish that it wasn’t just that particular drive at issue in the NUC9i9QNX, I also tested the Samsung M.2 drive under Ubuntu on both the NUC9i9QNX and the NUC7i7DNHE:

samsung SSD comparison in ubuntu

As the read speeds were consistently slower on the NUC9i9QNX, I raised a support ticket with Intel. Their initial response was not promising; after asking for more information and for the output of Intel’s System Support Utility for Windows, they responded with:

I am writing to let you know that according to the information provided, the Configured Clock Speed (3200 MHz) that is used is not supported for this model.

Please check the proper memory specifications here. ( Memory Types DDR4-2666 ).
https://ark.intel.com/content/www/us/en/ark/products/190107/intel-nuc-9-extreme-kit-nuc9i9qnx.html

This actually highlights another issue: when I was first diagnosing the M.2 speed issue, the memory was only running at 2666 MHz by default and there was no XMP setting in the original BIOS (version 34):

memory running at 2666 MHz

I subsequently updated the BIOS to the latest version 54 in the hope it would fix the M.2 issue, which it didn’t. It did, however, automatically set the memory speed to 3200 MHz, although there was still no setting in the BIOS to manually change the speed or enable XMP:

bios missing xmp

Ironically, Intel also provided a link so I could “Find Compatible Components for this NUC”, which actually shows several memory entries with speeds of 3200 MHz as being “Intel validated”:

intel validated memory at 3200 MHz

which seemed to contradict their statement. Furthermore, I had only purchased the memory after consulting this very page prior to ordering the NUC9i9QNX in the first place.

So I replaced the memory with two 8GB SODIMM DDR4 2666 MHz sticks, which unsurprisingly didn’t fix the issue, and as a result Intel treated the ticket as a warranty issue. After I shipped the NUC9i9QNX to them in Malaysia using their pre-paid DHL service, I received a full refund, which I used to buy the replacement NUC used in the testing above. And yes, the NVMe speeds are still the same, indicating there is a more fundamental problem, most likely BIOS related. For those interested, the whole refund process took nearly a month, but Intel were very responsive throughout with daily updates. Now back to the rest of the performance metrics.

For my specific set of Phoronix Test Suite tests the results were:

NUC9i9QNX windows phoronix overview

It is interesting to compare these results against the two Intel Gemini Lake NUCs and the AMD Ryzen 5 based Beelink GT-R, as these are indicative of the performance of recent mini PCs, and it shows how far off they are from the significantly more powerful NUC9i9QNX.

Gemini Lake, AMD Ryzen 5 vs Coffee Lake Benchmarks

NUC9i9QNX Performance in Ubuntu

After shrinking the Windows partition in half and creating a new partition, I installed Ubuntu 20.04 from an ISO as a dual boot. After installation and updates, the key hardware information is as follows:

NUC9i9QNX Ubuntu 20.04


ubuntu 20.04 gpu info
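For those wanting to gather the same details on their own system, they can be pulled from standard command-line tools, roughly as follows (a sketch; output formats vary between releases):

# Basic hardware details under Ubuntu 20.04
sudo lshw -short             # summary of CPU, memory, storage and PCI devices
lscpu                        # processor topology and clock speeds
free -h                      # installed memory
lsblk -o NAME,SIZE,MODEL     # block devices including both NVMe drives
nvidia-smi                   # confirms the RTX 2060 and the Nvidia driver version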

I then ran my Linux benchmarks for which the majority of the results are text-based but the graphical ones included:

Intel NUC9 Geekbench 5 score

NUC9i9QNX ubuntu heaven

And for the same set of Phoronix Test Suite tests the results were:

NUC9i9QNX ubuntu pts overview

The complete results, together with a similar comparison against the other mini PCs, are:

Gemini Lake vs AMD Ryzen 5 vs Core i9 Coffee Lake ubuntu mini pcs

As previously noted, the NVMe M.2 SSD performance is poor, with a similar result to Windows.

Interestingly Intel’s Supported Operating Systems for Intel NUC Products (Article ID 000005628) seems to have been updated to remove Ubuntu support for the NUC9i9QNX:

Intel-NUC9-Supported-OS-AU
Intel AU website
Intel-NUC9-Supported-OS-US
Intel US website

Browsers & Kodi

For real-world testing, I played some videos in Edge, Chrome, and Kodi on Windows, and in Firefox, Chrome, and Kodi on Ubuntu. The following tables summarise the tests and results for web browsing, Kodi in general, and Kodi playing specific videos:

Hardware acceleration for decoding VP9 and 10-bit HEVC (H.265) is not supported under Ubuntu with Nvidia graphics, which explains why software decoding was used and accounts for the occasional skipped frames.
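One way to see this on the Ubuntu side is to list the codec profiles the Nvidia VDPAU driver actually exposes to applications such as Kodi (assuming the vdpauinfo package is installed):

# List the decoder profiles exposed by the VDPAU driver
sudo apt install -y vdpauinfo
vdpauinfo | grep -A 30 "Decoder capabilities"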

I also tried playing an 8K video in Kodi on both OSes. In Windows there were no issues other than some initially skipped frames; however, because Ubuntu had to use software decoding, there were occasional frame skips:

Ubuntu kodi 8K Video

Gaming with Intel NUC9 Kit

I tested three games under Steam, natively in Windows and using Proton 5.0 in Ubuntu (Counter-Strike: Global Offensive, Grand Theft Auto V, and Shadow Of The Tomb Raider), at 1920×1080 resolution using the highest settings:

NUC9i9QNX gaming

For Ubuntu I had to rely on Steam’s in-game FPS counter, as I did not have tools equivalent to MSI Afterburner/RivaTuner available:

windows csgo NUC9i9QNX Kit
Windows
ubuntu gaming csgo NUC9 kit
Ubuntu

However, the results can be visually verified with the in-game benchmark from Shadow Of The Tomb Raider:

windows sottr settings
Windows
ubuntu sottr settings
Ubuntu

Whilst the results were similar, overall Windows delivers better gaming performance.

NUC9i9QNX Power Consumption

Power consumption was measured as follows:

  • Powered off – 1.0 Watts (Windows) and 1.6 Watts (Ubuntu)
  • BIOS*  – 61.7 Watts
  • GRUB menu – 55.3 Watts
  • Idle** – 29.4 Watts (Windows) and 22.8 Watts (Ubuntu)
  • 4K Video playback*** – 45.1 Watts (Windows) and 65.8 Watts (Ubuntu)
  • Gaming benchmark*** – Up to 280-290 Watts (Windows) and up to 250-260 Watts (Ubuntu)
  • CPU stressed – 145.5 Watts then drops to 98.0 Watts (Ubuntu)

* BIOS (see below).
** Idle is when the fans are not running.
*** The power figures fluctuate so the value is the average of the median high and median low power readings. The in-game benchmark from Shadow Of The Tomb Raider was used and the maximum power draw occurred during the market scene near the end of the benchmark. The 4K video power draw in Ubuntu was higher than Windows due to software decoding of the VP9 codec in Ubuntu whereas Windows used hardware acceleration.

Thermals and Noise

Perhaps the most interesting aspect of the NUC9i9QNX is the thermal performance and consequential fan noise.

Whilst the Compute Element includes a small fan and the graphics card includes two fans, the top of the NUC also has a pair of 80mm fans:

NUC9 Fans

all of which will contribute to noise under load.

I decided to measure package temperature and maximum core frequency using Intel’s Extreme Tuning Utility on Windows as it both gave me a visual graph of the metrics and allowed me to log them to a file. For Ubuntu, I created a logging script that captured similar information from key files under “/proc” and “/sys”. The room temperature was 20.3°C on the day I tested on Windows and 18.4°C the following day when I tested on Ubuntu.
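The Ubuntu logging script isn’t reproduced here, but it was essentially a loop sampling the package temperature and the highest core frequency once a second, roughly like the following sketch (the coretemp hwmon lookup and the CSV format are my assumptions):

#!/bin/bash
# Sample package temperature and maximum core frequency once per second as CSV
# (epoch seconds, degrees C, MHz) - stop with Ctrl-C
HWMON_DIR=$(dirname "$(grep -l coretemp /sys/class/hwmon/hwmon*/name | head -1)")
while true; do
    temp=$(( $(cat "$HWMON_DIR/temp1_input") / 1000 ))     # temp1 is the package sensor on coretemp
    maxfreq=$(sort -n /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq | tail -1)
    echo "$(date +%s),$temp,$(( maxfreq / 1000 ))"
    sleep 1
done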

I ran three comparable processes in each OS whilst collecting the logs, and afterwards I aligned the log files from each OS and plotted the results.

Playing a 4K 60 FPS YouTube video in Chrome:

NUC9i9QNX Temperature Chart chrome os comparison

This clearly shows the difference between hardware-accelerated decoding in Windows and software decoding in Ubuntu. Because of the higher processor load, Ubuntu runs hotter and the cores are clocked lower as a consequence. The average package temperature in Ubuntu is 66.1°C but just 50.0°C in Windows, whereas the average maximum core frequency is lower in Ubuntu at 4.3408 GHz compared to 4.4451 GHz in Windows.

Geekbench 5 CPU benchmark:

geekbench 5 cpu temperature windows vs ubuntu

The split between the single-core and multi-core benchmarks is also evident. Ubuntu maintains a higher average maximum core frequency in both benchmarks whilst maintaining a lower average package temperature. For the single-core benchmark, the average package temperature in Ubuntu is 56.6°C and slightly higher in Windows at 58.3°C, with the average maximum core frequency in Ubuntu at 4.9451 GHz but lower in Windows at 4.7951 GHz. For the multi-core benchmark, whilst the average package temperature is the same at 67.1°C in both Ubuntu and Windows, the average maximum core frequency in Ubuntu is higher at 4.4229 GHz compared to 4.2152 GHz in Windows.

Running the in-game benchmark from Shadow Of The Tomb Raider:

NUC9i9QNX thermal gaming sottr os comparison

The average FPS in Windows is 10 frames per second higher than in Ubuntu and, as a result, the average package temperature in Windows is 74.4°C compared with 67.7°C in Ubuntu, which also results in a lower average maximum core frequency in Windows of 4.1933 GHz compared with 4.2677 GHz in Ubuntu. If you take the power usage (see above) into consideration, it can be seen that Windows is working much harder to achieve those extra frames.

This is also when the fans become really noticeable. When the device is idle, the fans are off, leaving just the ambient room noise measuring around 33 dBA. When the fans are just ticking over, they are barely audible at around 36-37 dBA. However, during the benchmark, as the load and therefore the CPU temperature rise, the fans ramp up, and in the market scene of the benchmark, where the average package temperature starts to exceed 80°C, the combined fan noise reaches 55-56 dBA:

market scene

Additionally, I ran two CPU intensive tests to see the effectiveness of thermal cooling. The first was Cinebench R20 in Windows:

NUC9i9QNX Cinebench Benchmark

where I ran both the multi-core and single-core tests in one run. Of interest is the start of the single-core test, where the maximum core frequency jumps from 3.1459 GHz to 4.7429 GHz and the package temperature suddenly peaks at 100°C, which immediately forces the maximum core frequency down to 4.2132 GHz before it stabilizes at around 4.7455 GHz with an average package temperature of 79.1°C in the central section of the overall run. Note that 100°C is TJunction, the maximum temperature allowed at the processor die, and arguably not something you should be reaching with stock settings.

The second test was a stress test in Ubuntu:

ubuntu stress NUC9i9QNX

where you can see the temperature rising during “Tau”, the “Turbo Boost Power Time Window” for which PL2 can be sustained (28 seconds). During this period the package temperature rose to 91°C, which is just about the maximum you would want to see.
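The stress test itself is a straightforward all-core CPU load; whether the original run used “stress” or “stress-ng” isn’t shown above, but a representative invocation would be:

# Load all 16 threads for 5 minutes and report a brief summary at the end
sudo apt install -y stress-ng
stress-ng --cpu "$(nproc)" --timeout 300s --metrics-brief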

Despite the high temperature, the fans are effective and the CPU cools down very quickly once no longer under load.

Overclocking

The NUC9i9QNX’s i9-9980HK is an unlocked processor, meaning it supports overclocking. However, I’ve already shown that with the stock configuration the temperatures run relatively high and close to the thermal limits. Rather than increase the clock speeds and generate even more heat, I decided to try undervolting the CPU to allow it to boost higher, or for longer, within the same power restrictions.

First, I experimented with dropping the “Core Voltage Offset” whilst running Cinebench R20 multi-core and settled on -0.150 V as this was stable. I then increased the value for “Tau” to a maximum of 128 seconds. Using this simple “overclock”:

cinebench r20 overclock settings

I was able to significantly improve the score from 3375:

to 4119:

overclocked NUC9i9QNX cinebench r20 multi

although this did come with a 15°C temperature increase:

cinebench r20 NUC9i9QNX overclock comparison

and also note how the elapsed time of the test is now shorter.

I also experimented with increasing the value of PL1 from 65 W to 100 W, using Blender as my benchmark. With default settings, the “BMW” benchmark took 4 minutes 27 seconds:

Blender BMW on NUC9i9QNX Kit

However with a -0.150 V undervolt, a Tau of 128 seconds and a PL1 of 100 W:

blender bmw settings

I achieved a significant improvement to 3 minutes 36 seconds:

overclocked NUC9i9QNX-blender bmw benchmark

Again, with the temperature averaging around 88°C for a substantial part of the test, this is more indicative of the limits of overclocking than a practical setting for daily use:

blender overclocked NUC9i9QNX Temperature Chart

Just for reference, running the same Blender benchmark in Ubuntu without any overclocking took only 3 minutes 31 seconds:

NUC9i9QNX ubuntu blender bmw
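For anyone wanting to reproduce these Blender timings, the BMW scene can be rendered headless from the command line, something like the following (a sketch; the scene file name comes from Blender’s demo files and the CPU variant is an assumption on my part):

# Time a headless CPU render of the "BMW" demo scene (frame 1)
time blender -b bmw27_cpu.blend -f 1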

Windows vs Ubuntu in NUC 9 Kit

Whilst a detailed comparison between the two operating systems is beyond the scope of this review, it is worth noting some of the key findings I observed. First looking at the performance tools common between the two systems:

NUC9i9QNX Windows vs Ubuntu-os-performance comparison

Overall, Ubuntu performs better than Windows in the majority of the benchmarks, with the exception of Unigine Heaven. A good example is the Blender benchmark above, where Windows needed overclocking to match the Ubuntu result.

However, as demonstrated by the gaming performance, and by video playback, which in Ubuntu is limited by the Nvidia graphics not fully supporting hardware-accelerated decoding for all codecs, Windows is the OS of choice for these types of activities.

Power consumption when gaming is higher in Windows, although watching videos is best done in Windows, where power consumption is lower and there are no frame skips.

Finally, there are a lot of tools in Windows to support overclocking of both the CPU and GPU, and given the limited functionality provided by the BIOS, overclocking is easier to perform in Windows, at least for a novice.

Networking

Network connectivity throughput was measured on Ubuntu using “iperf”:

NUC9i9QNX network throughput WiFi Ethernet

The Wi-Fi results are excellent thanks to Wi-Fi 6, and the 5.0 GHz speeds were similar to the Ethernet speeds, although it must be noted that the router was only a meter away from the NUC.
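For reference, the iperf runs follow the usual client/server pattern, roughly as below (the server address is a placeholder and the durations are illustrative):

# On a wired machine on the same LAN acting as the server
iperf -s

# On the NUC9i9QNX: transmit for 60 seconds, then repeat with -r to add the reverse direction
iperf -c 192.168.1.10 -t 60 -i 10
iperf -c 192.168.1.10 -t 60 -i 10 -r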

BIOS

The BIOS is quite restricted. There isn’t any control over the memory speed, which defaults to its highest setting. Overclocking is also very limited, with just a few power limit settings. A brief overview is available in the following video:

NUC9i9QNX Cable Management

As alluded to earlier, cable management is reasonably challenging, although not dissimilar to building in mini-ITX cases. However, care needs to be taken to ensure all cables remain connected when adding or removing the GPU or Compute Element. When I first tested the replacement NUC9i9QNX, I didn’t realize that the SDXC card reader was not working. I had noticed that the “Generic MassStorageClass” device occupying “/dev/sda” in Ubuntu was missing, without realizing the implication:

ubuntu-missing-sdcard ubuntu-working-sdcard
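A quick way to spot this kind of problem is to check whether the internal card reader enumerates at all, for example:

# The reader should show up as a USB mass storage device and claim /dev/sda even with no card inserted
lsusb
lsblk
dmesg | grep -i usb-storage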

Stripping down the NUC9i9QNX again I saw that the lower USB cable appeared not to be fully inserted:

NUC9i9QNX sd card cable disconnected

Reseating the cable fixed the issue.

Final Observations

Much has been said about the price of NUC 9 devices. However, recently I’ve seen substantial discounts being offered by local resellers. For example, my first NUC cost AUD 2399 whereas my replacement cost only AUD 2099, and at the time of writing some local resellers are offering it for AUD 1884. Such a fluctuation in prices cannot be a good sign and is obviously unfair to early adopters.

Being cheaper also does not necessarily overcome the NUC’s current shortcomings: the slower-than-expected NVMe speeds, the inability to control the memory speed, the issues with running Ubuntu 20.04, and the inability to overclock through the BIOS all substantially detract from making this device recommendable.

There are also several unknowns about the whole Compute Element concept. Intel has dropped both the Compute Stick and the Compute Card, so it doesn’t have a particularly strong track record for “compute” products. Other issues to consider are whether AIB partners will continue to release the required shorter graphics cards, and whether the current 500 W power supply will be sufficient to support new graphics cards like the RTX 3000 series.

The concept of replacing the Compute Element with an updated version after a period of time may not be realistic if everything else, such as the cooling or the cable connectors, has become outdated by then, and there is always the possibility that AMD will offer a more attractive alternative.

Despite these negatives, the form factor is still likable and the device is well engineered. Overall, this is a powerful mini PC, and the flexibility of adding a discrete graphics card overcomes the primary limitation of the typical mini PC form factor. If the highlighted issues can be fixed, this will be an exceptional mini PC.

Comments
crashoverride
4 years ago

While I did not read the article in its entirety (TLDR), the screen shot of your graphics card may yield a clue as to the nature of the slow NVMe SSD issue. The graphics card is reported as PCIe 3.0 capable (8 GT/s), but the link established is PCIe 1.0 (2.5 GT/s).

You can check the PCIe link speed of your NVMe SSD in Linux using the ‘sudo lspci -vv’ command. Search for “LnkSta:” in the output.

Linuxium
4 years ago

Thanks. However the NVMe SSD Controller shows:

linuxium@NUC9i9QNX:~$ sudo lspci -s 6f:00.0 -vvv | grep LnkSta
LnkSta: Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+

dgp
4 years ago

Numbers are nice and all but I think we all know what people need to know.. does that skull have tasteless RGB LED effects?

Linuxium
4 years ago

No as the skull is just “painted” on the side panels.

linuxium
4 years ago

The carrying case mimics the RGB LED effects when viewed under ultraviolet light … see https://twitter.com/linuxium/status/1310811257281286146

maurer
4 years ago

@cnx-software – what router do you use so you get almost 1gbps on wifi ?

Linuxium
4 years ago

A WiFi 6 one (ASUS RT-AX88U) to make use of the NUC9i9QNX’s integrated (802.11ax) Intel Wi-Fi 6 AX200 (Gig+) adapter.
