iKOOLCORE R2 Max review – 10GbE on an Intel N100 mini PC with OpenWrt (QWRT), Proxmox VE, Ubuntu 24.04 and pfSense 2.7.2

I’ve already checked out the iKOOLCORE R2 Max hardware in the first part of the review with an unboxing and a teardown of the Intel N100 system with two 10GbE ports and two 2.5GbE ports. I’ve now had more time to test it with an OpenWrt fork, Proxmox VE, Ubuntu 24.04, and pfSense, so I’ll report my experience in the second and final part of the review.

As a reminder, since I didn’t have any 10GbE gear so far, iKOOLCORE sent me two R2 Max devices, a fanless model and an actively-cooled model. I was told the fanless one was based on an Intel N100 SoC, and the actively-cooled one was powered by an Intel Core i3-N305 CPU, but I ended up with two Intel N100 devices. The fanless model will be an OpenWrt 23.05 (QWRT) server, and the actively cooled variant will be the device under test/client with the Proxmox VE 8.3 server virtualization management platform running virtual machines with Ubuntu 24.04 and pfSense CE 2.7.2.

The review is quite long, so here are some shortcuts to the main sections:

OpenWrt (QWRT) installation and configuration on iKOOLCORE R2 Max

By default, OpenWrt will not have the drivers for the AQC113C-B1-C 10GbE network card, so iKOOLCORE prepared an “OpenWrt 23.05” image, or rather an image based on a fork called QWRT with all necessary drivers. It’s also configured to have the WAN port on the 2.5GbE port as shown in the photo below, and the other 2.5GbE port and two 10GbE ports on a LAN with a 192.168.1.0 subnet and DHCP server enabled.

iKOOLCORE R2 Max OpenWrt LAN WAN Ports

So I connected the WAN port to a 2.5GbE switch to have Internet connectivity, and the 10GbE LAN port on the right to my laptop to access the web interface for configuration. I also connected an HDMI monitor to check for potential warnings and/or error messages.

I downloaded QWRT-R24.11.18-x86-64-generic-squashfs-combined-efi.img.gz and flashed it to a USB flash drive:
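The commands below sketch that step on a Linux machine. /dev/sdX is a placeholder for the USB flash drive (double-check it with lsblk), and the dd line is echoed rather than executed here since it would irrecoverably overwrite the target:

```shell
# Flashing sketch — /dev/sdX is a placeholder, verify the device with lsblk first!
IMG=QWRT-R24.11.18-x86-64-generic-squashfs-combined-efi.img.gz
DEV=/dev/sdX

# Decompress on the fly and write to the USB drive.
# Echoed for safety; drop the leading 'echo' to actually run it (destructive).
echo "gunzip -c $IMG | sudo dd of=$DEV bs=4M conv=fsync status=progress"
sync
```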


Time to insert the USB flash drive into one of the USB 3.0 ports of the iKOOLCORE R2 Max and apply power. The boot was a bit slow, but after a while, some messages showed up on the monitor.

iKOOLCORE R2 Max OpenWrt Installation USB flash drive

Since my broadband router also uses 192.168.1.1, I temporarily disconnected the WAN port to configure OpenWrt/QWRT through the LuCI web interface at http://192.168.1.1. The default username is “root”, and the default password is “Password”.

iKOOLCORE R2 Max OpenWrt Login

All good, except it’s a little difficult to navigate if you can’t read Chinese…

QWRT LuCI Dashboard Chinese

So let’s change that to English by going to the system settings page, selecting the language and style tab, and finally selecting English in the language dropdown list.

QWRT Change Language to English

Click on Save. It will still be in Chinese, but if you navigate to another page, the interface will switch to English. The overview section confirms we have an R2 Max with an Intel N100 CPU running QWRT R24.11 on top of the latest Linux 6.12.

QWRT English Interface

Let’s change the LAN subnet because it conflicts with my broadband router. To do this, I went to Network->Interfaces

iKOOLCORE R2 Max OpenWrt WAN port

I selected the LAN tab, then scrolled down to set the IPv4 address to 192.168.4.1 instead of 192.168.1.1.

iKOOLCORE R2 Max OpenWrt LAN IPv4 address

After clicking on Save, we need to wait a little bit before reconnecting to the web dashboard using http://192.168.4.1, and here I can see my laptop got an updated IP address in the new subnet. I can also reconnect the WAN cable at this stage.

iKOOLCORE R2 Max QWRT new LAN IP address

A final change is to update the router password to something more secure than “Password”.

QWRT Luci Change Password

That’s great, but all the changes I’ve made are on the USB flash drive. I’d like to install QWRT to the internal SSD. I did that by following the instructions on OpenWrt’s documentation website. First, I accessed the system over SSH with the password I had just set:


From here, we need to install and run lsblk:


Output from the last command:


OpenWrt is installed on /dev/sda, and the SSD is located at /dev/nvme0n1. So let’s dump the content of the former on the latter.
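For reference, here is a hedged sketch of that copy, assuming the same device names as on my unit (verify yours with lsblk); the commands are printed rather than executed since the dd line is destructive:

```shell
# On the QWRT/OpenWrt system (over SSH): install lsblk, identify the drives,
# then clone the USB install to the NVMe SSD. Triple-check the device names!
echo "opkg update && opkg install lsblk"
echo "lsblk -o NAME,SIZE,MODEL"

SRC=/dev/sda        # USB flash drive holding the running OpenWrt/QWRT
DST=/dev/nvme0n1    # internal NVMe SSD
# Destructive: drop the leading 'echo' to run it for real.
echo "dd if=$SRC of=$DST bs=4M conv=fsync"
```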


I can shut down the system, remove the USB drive, and boot the system again. Everything works as before, but booting from the internal SSD.

Proxmox VE installation and configuration

Now that we’re done with the “OpenWrt” server installation and configuration, let’s install Proxmox VE on our actively cooled iKOOLCORE R2 Max acting as the DUT/client.

The latest version is currently Proxmox VE 8.3, so I downloaded the ISO and dumped it to a USB flash drive with dd, since it’s not recognized in Startup Disk Creator on Ubuntu:
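The dd invocation looks like the one used for QWRT; the ISO filename and /dev/sdX below are placeholders (adjust them to your download and target drive), and the command is echoed rather than executed since it overwrites the target:

```shell
# Writing the Proxmox VE ISO to a USB drive with dd.
# The ISO filename is an assumption — use the one you downloaded.
ISO=proxmox-ve_8.3-1.iso
DEV=/dev/sdX
echo "sudo dd if=$ISO of=$DEV bs=4M conv=fsync status=progress"
```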


The installation is the same as when I installed Proxmox VE 8.1 on the iKOOLCORE R2 last year. Most of the time we just need to click on the Next button, but the important part is in the Management Network Configuration where we need to enter a hostname using a fully-qualified domain name (ikoolcore-r2-max-cnx.local), the IP address (192.168.4.253), gateway IP address (192.168.4.1), and DNS server (192.168.4.1).

iKOOLCORE R2 Max Proxmox VE Installation FQDN CIDR Gateway

We can double-check all those parameters and the installation drive (/dev/nvme0n1) before completing the installation.

Proxmox VE Installation Summary

We can now go to https://192.168.4.253:8006 or https://ikoolcore-r2-max-cnx.local:8006 in a web browser to access the Proxmox VE dashboard using the username and password specified during the installation.
Proxmox VE login

iKOOLCORE R2 Max Proxmox VE Dashboard

Everything looks good, and we’ll eventually need to add some guest operating systems, but first let’s test 10GbE performance using OpenWrt in the fanless model, and Proxmox VE in the other model. The main reason is to test the interfaces without adding any virtualization layer.

iKOOLCORE R2 Max 10GbE testing with iperf3

I connected the left 10GbE port of each device with a one-meter Ethernet cable (probably Cat 5), while one of the OpenWrt machine’s 2.5GbE ports was connected to my laptop, and the other to a 2.5GbE switch for Internet connectivity.

OpenWrt Proxmox VE 10GbE test bed

Let’s SSH to the Proxmox VE machine to check the link speed:
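Something along these lines works, assuming the first 10GbE port shows up as enp8s0 as it does later in this review; the ethtool call falls back to printing the command if the interface isn’t present:

```shell
# Check the negotiated link speed on the Proxmox VE host; enp8s0 is the name
# the first 10GbE port got on my unit (check yours with "ip link").
IFACE=enp8s0
ethtool "$IFACE" 2>/dev/null || echo "ethtool $IFACE   # expect: Speed: 10000Mb/s"
```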


All good. Let’s now run iperf3 on the QWRT server, and on the PVE client to test:

  • Upload from the client:

  • Download from the server:
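For reference, the iperf3 invocations for those two tests look like this, with 192.168.4.1 being the QWRT server’s LAN address from earlier (printed rather than executed, since they need the two machines wired up):

```shell
SERVER=192.168.4.1   # QWRT box, running "iperf3 -s" as the server

# Upload: the client (Proxmox VE host) transmits to the server.
echo "iperf3 -c $SERVER"

# Download: -R reverses the direction, so the server transmits to the client.
echo "iperf3 -c $SERVER -R"
```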


Upload is fine at 9.41 Gbps, but the download could be better at 8.53 Gbps.

Let’s try full-duplex (bidirectional transfer) to see what happens:
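iperf3’s --bidir flag (available since iperf 3.7) runs both directions at once; echoed here as with the earlier commands:

```shell
SERVER=192.168.4.1   # QWRT box running "iperf3 -s"
# Bidirectional (full-duplex) test: Tx and Rx streams run simultaneously.
echo "iperf3 -c $SERVER --bidir"
```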


It’s not too bad, but not optimal. Since I don’t have a Cat6 cable, I read that I may need a shorter Cat 5/5E cable, so I installed a 20cm cable, and repeated the test:


It did not help. iKOOLCORE also provides a script to monitor CPU usage and temperature, as well as the network card temperature, which I ran in Proxmox VE (Debian-based). The network card did not get too hot during a one-minute iperf3 test; we may try a longer run later.

aqc113c-b1-c script temperature monitoring

I also noticed a lot of variability in each test, with sometimes faster speeds.


After discussing with the company, they told me the bidirectional speed would not reach close to 10Gbps on the N100 model, but it would work with the Core i3-N305. In any case, 10Gbps Rx or Tx could be reached with Windows 11 on Proxmox VE on an R2 Max N100 and a Synology DS1821+ 10GbE NAS on the other side. So I also decided to buy 1-meter Cat6 Ethernet cables to make sure the cable was not the culprit.

CAT6 LAN Cables

Let’s try the test again with one of the Cat6 cables:

  • Upload:

  • Download

  • Full duplex


It did not help at all. But then, I noticed 100% CPU usage on a single core during the iperf3 bidirectional test.

iperf3 single core CPU usage

So I looked for a way to run iperf3 on all four cores and eventually found out that iperf 3.16 now supports multi-threading, and documented my experience. The QWRT image already had iperf 3.17.1, but Proxmox VE was using iperf 3.12, so I built iperf 3.18 from source to run the test. I will not reproduce this here, so let’s go directly to the results.
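For the record, building iperf3 from source on a Debian-based system follows the usual autotools flow; the tarball name below is an assumption (grab the release from ESnet’s iperf GitHub page), and the steps are printed rather than executed:

```shell
# Hedged sketch: build and install iperf 3.18 from source on Debian/Proxmox VE.
VER=3.18
echo "apt install -y build-essential"
echo "tar xf iperf-$VER.tar.gz && cd iperf-$VER"
echo "./configure && make && make install"
echo "ldconfig   # refresh the shared library cache so libiperf is found"
```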

Since we can now try parallel streams using all four cores with “-P 4”, let’s try the download test again:
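With iperf 3.16 or later on both ends, -P spawns one thread per stream, so the multithreaded variants of the earlier tests would be (echoed for reference):

```shell
SERVER=192.168.4.1   # QWRT box running "iperf3 -s"
# Four parallel streams, one thread each (needs iperf >= 3.16 on both sides).
echo "iperf3 -c $SERVER -R -P 4"        # download spread across 4 cores
echo "iperf3 -c $SERVER --bidir -P 2"   # full-duplex with 2 streams per direction
```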


9.41 Gbps is all good. And now, bidirectional test with -P 2 since it’s enough and less verbose:


The output is quite noisy, but we can see the final results with [SUM][TX-C] at 9.39 Gbps and [SUM][RX-C] at 9.41 Gbps. The good news is that the 10GbE interface can be saturated, but we need at least two cores since a single core is a bottleneck.

I also moved the cable to the second 10GbE interface on the fanless router, and it yielded similar results:


So we know the hardware is perfectly capable of handling 10GbE in either or even both directions when multiple cores are used.

Ubuntu 24.04 and pfSense CE 2.7.2 installation in Proxmox VE

We’ll now need to find out what happens when virtualization is taken into account. First, I enabled hardware passthrough in Proxmox VE as I did with the iKOOLCORE R2 last year:
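As a reminder, the usual Proxmox VE passthrough prerequisites on Intel hardware look like this (a sketch based on the PVE documentation, printed rather than executed since the commands modify the host; run as root and reboot afterwards):

```shell
# 1. Enable the IOMMU on the kernel command line, then regenerate GRUB config.
CMDLINE="quiet intel_iommu=on iommu=pt"
echo "GRUB_CMDLINE_LINUX_DEFAULT=\"$CMDLINE\"   # set in /etc/default/grub"
echo "update-grub"

# 2. Load the VFIO modules at boot.
echo "printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules"

# 3. After a reboot, verify the IOMMU is active.
echo "dmesg | grep -e DMAR -e IOMMU"
```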


We need to download the Ubuntu 24.04 and pfSense CE 2.7.2 ISO files and upload them to our Proxmox VE instance. pfSense now requires registration and makes you download a “netgateinstaller” image, but you can get the pfSense-CE-2.7.2-RELEASE-amd64.iso file from a mirror if you prefer.

Proxmox VE 8.3 Upload ISO

For the Ubuntu 24.04 installation, I mostly followed the instructions to install Ubuntu 22.04 on Proxmox with video and USB passthrough so that I could also use the HDMI port, USB mouse, and USB keyboard connected to the device. I won’t go through all the steps again but just provide a summary instead. I created a Virtual Machine with the following parameters (quad-core, 4GB RAM, 64GB HDD).

Ubuntu 24.04 Proxmox VE Installation

After clicking on Start, I went through the Ubuntu 24.04 installation, and removed the ISO file from the VM, to boot the OS from its HDD.

At this point, Ubuntu would only show in the Proxmox VE console, so I still had more work to do to directly use an HDMI monitor and USB peripherals with the iKOOLCORE R2 Max. So I also added two PCIe and two USB devices for GPU and keyboard/mouse passthrough, downloaded and copied the files gen12_gop.rom and gen12_igd.rom (renamed to igd.rom) to /usr/share/kvm, and edited the /etc/pve/qemu-server/100.conf configuration file accordingly. I ended up with the following configuration for Ubuntu 24.04.

Proxmox VE R2 Max Ubuntu 24.04 configuration passthrough

After restarting the machine, I could have Ubuntu 24.04 running with HDMI output, a USB keyboard, and a USB mouse without having to use the Proxmox VE web interface. We’ll do 10GbE testing, but as you can see from the photo below, the first results look good.

iKOOLCORE R2 Max Review Proxmox VE Ubuntu 22.04

Let’s now try pfSense, but I’m not confident here since a reader commented:

I have the N305 model. The 10gbe chip isn’t supported in freebsd, so can’t passthrough to opnsense in proxmox…

Didn’t stop me from getting 6.2Gbit/s through opnsense after a bit of tuning though!

In a normal setup, you’d probably want the two 10GbE ports for the pfSense firewall and use one of the two 2.5GbE ports for the desktop OS like Ubuntu 24.04. But for testing purposes, I wanted one 10GbE port in Ubuntu 24.04 and the other 10GbE port in pfSense, so I added two Linux bridges for pfSense, ending up with the following network interfaces configured in Proxmox VE:

  • vmbr0 – enp8s0 (10GbE via Marvell AQC113C-B1-C) with IP address: 192.168.4.253 for Proxmox VE (already configured)
  • vmbr1 – enp7s0 (10GbE via Marvell AQC113C-B1-C) with IP address: 192.168.4.252 for pfSense WAN
  • vmbr2 – enp2s0 (2.5GbE via Intel i226-V) with IP address: 192.168.4.1 for pfSense LAN
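Behind the GUI, each bridge ends up as a stanza in /etc/network/interfaces on the Proxmox VE host; vmbr1’s would look roughly like this sketch, with the address and port from the list above:

```
auto vmbr1
iface vmbr1 inet static
        address 192.168.4.252/24
        bridge-ports enp7s0
        bridge-stp off
        bridge-fd 0
```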

Proxmox VE Network Interfaces

Remember to click on “Apply Configuration” here, or Proxmox VE will complain the vmbr1 interface does not exist when starting the VM.

Here’s the configuration for pfSense 2.7.2 in Proxmox VE after I added vmbr1 and vmbr2. One important part is selecting the Default (SeaBIOS) for the BIOS, not UEFI, or the boot will fail.

Proxmox VE pfSense 2.7.2 hardware

I installed pfSense in Proxmox using the same instructions I followed for the iKOOLCORE R2 last year. The WAN was set to use DHCP from the OpenWrt server, and LAN to 192.168.6.1. Note that vmbr1 (10GbE) needs to be assigned to vtnet1 and vmbr2 (2.5GbE) to vtnet0.

pfSense 10GbE WAN 2.5GbE LAN configuration

iKOOLCORE R2 Max pfSense 2.7.2 WAN LAN Configuration

So now, I’ll move the Ethernet cable from my laptop to the 2.5GbE LAN port to complete the pfSense configuration through 192.168.6.1. Note that the default “admin” username and “pfsense” password must be used for the initial setup; I was asked to change the admin password at the end of the wizard. Those steps are also covered in the previous pfSense instructions, so I won’t go into details here. At the end, I had pfSense up and running on my machine.

iKOOLCORE R2 Max pfSense 2.7.2 dashboard

I could not access the Internet though, because of a DNS resolution issue. I went to Services->DNS Resolver and enabled “DNS Query Forwarding” to solve the issue.

pfSense DNS Query Forwarding

 

So now that I can browse the web from the 192.168.6.0 subnet, and also access 192.168.4.0 for testing 10GbE, we can carry on with the performance evaluation.

Intel N100 10GbE testing in Proxmox VE with Ubuntu and pfSense

Let’s stop the pfSense VM for now, and start the Ubuntu VM to test the 10GbE interface again with iperf3 (version 3.16):

  • Upload

  • Download


Both upload and download tests can reach 9.41-9.42 Gbps. That’s promising. Let’s now try a full-duplex transfer:


That is 7.48 Gbps and 6.91 Gbps when both Rx and Tx are used simultaneously. I monitored the CPU usage in Ubuntu and Proxmox VE (see htop below), and the CPU does not seem to be the bottleneck here.

Proxmox VE Ubuntu CPU usage htop

So virtualization does introduce a bottleneck for bidirectional transfers, but I’m not quite sure what the source is. It will not matter for a download-only HTTP/FTP server, but a torrent server might be impacted depending on the traffic, and in that case it’s better to run the OS directly on the hardware rather than through Proxmox VE. If I missed an optimization, let me know in the comments section.

Time to turn off the Ubuntu VM and start the pfSense VM. We’ll need the iperf3 package. We can normally install it from System->Package Manager, but this did not work, with the error “Unable to retrieve package information”.

pfSense Package Manager Unable to retrieve package information

Looking for help on the Netgate forums is a pain on my end, since they block IP addresses from Thailand, but I found a solution on Reddit instead. Simply open an SSH terminal, and run the following command:


Now I can install iperf3 in the web interface…

iperf 3.15 pfSense

That would be iperf 3.15… It’s not ideal for our testing, since we’d like at least iperf 3.16 for multi-thread support. So I removed the package, then downloaded and installed the iperf 3.18 package from the command line:


We can check the installation was successful:


We can finally test 10GbE with pfSense.

  • Upload

  • Download

  • Full-duplex


We’re quite far from the 9.42 Gbps target. Upload is 7.26 Gbps, download is 3.18 Gbps, and full-duplex is 3.33/2.36 Gbps. There’s also a lot of variability during the test. Note that this is a dual-core pfSense VM running in Proxmox VE. I know nothing about FreeBSD, and there may be optimizations that improve this. Without virtualization, the results should be quite a bit better, but I’ve already spent so much time on this review that I’ll skip that test…

iKOOLCORE R2 Max 10GbE Intel N100 mini PC review

What I’ll do is fire up the Ubuntu 24.04 VM and run iperf3 tests from the Ubuntu server to the pfSense client (the other way around does not work with the default firewall configuration), in what should be the worst case scenario:

  • pfSense to Ubuntu

  • Ubuntu to pfSense

  • Bidirectional (full-duplex)


Here’s htop during the Ubuntu to pfSense test. Asking an Intel N100 to handle two virtual machines communicating over 10GbE on the same machine is a bit too much :).

Proxmox VE HTOP 10GbE pfSense Ubuntu iperf3

Conclusion

The iKOOLCORE R2 Max can perfectly handle 10GbE networking with its Intel N100 quad-core CPU for unidirectional and bidirectional (full-duplex) transfers, but multiple cores may have to be used, especially for bidirectional transfers. I could achieve 9.41 Gbps in all cases using iperf3 between an R2 Max with OpenWrt (server) and an R2 Max with Proxmox VE (Debian client) as long as multithreading is used.

When we introduced virtual machines in Proxmox VE, the results varied more and depended on the selected OS. For example, I got a respectable 9.42 Gbps DL, 9.42 Gbps UL, and 7.48 Gbps/6.91 Gbps (full-duplex) with an Ubuntu 24.04 VM, but results dropped quite a bit in a pfSense 2.7.2 VM at 7.26 Gbps UL, 3.18 Gbps DL, and 3.33/2.36 Gbps (FD). The worst-case scenario was a full-duplex transfer with each 10GbE port assigned to its own virtual machine, and here iperf3 reported just 2.03 Gbps/1.06 Gbps due to CPU (and likely other) bottlenecks. Better performance should be obtainable with the Core i3-N305 model in that case.

I’d like to thank iKOOLCORE for sending the R2 Max for review. The N100 and Core i3-N305 mini PCs sell for respectively $349 and $449 in the review configuration (8GB RAM, 128GB SSD), but you can get a barebone system for as low as $299. You can also get an extra 5% off with the CNXSOFT coupon code.
