An affordable 10GbE-capable NAS
Background
Two weeks ago, in early May, Tom Cubie of Radxa asked me if I would be interested in testing their new ROCK 5 ITX board. I had loosely followed the development of this really nice board, and Tom probably valued some of my tests and comments during the ROCK 5B debug party. He also knows I'm not shy when I disagree with certain choices, and he confirmed he is not seeking a complacent review at all. That was enough for me to accept the offer, because I'm one of those who believe that making issues public is the most efficient way to collectively find the best solution to them.
First impressions
Unpacking
A week later I received a parcel from DHL. First impression: the board is packaged just like a PC motherboard. There's a rationale for this: it's an ITX form factor, designed to be installed in a PC enclosure. The customer must feel on familiar ground when installing it ;-)
Front panel and buttons
Like on a PC motherboard, there's a front-panel connector with power switch, power LED, reset switch and HDD LED. This is a nice improvement, since many Arm-based boards lack a reset button, which would be appreciated during development and kernel porting. However, a button could also have been placed to the right of the SPDIF connector for debugging sessions when the board is left on a desk with nothing connected to this front-panel connector. That's no big deal since a screwdriver suffices to make contact between the pins, but it would be cleaner. Some PC motherboards have adopted this principle nowadays, by the way:
Installing a cooling solution
One excellent design choice on this board concerns the cooling. Radxa adopted a design compatible with LGA115x thermal solutions. This means that instead of resorting to a mix of inefficient and inconvenient solutions as is often the case, here an old PC heat sink can simply be reused. I found one from an old 1U server in my tray and decided to install it. It even has the PWM pin to control the fan's speed (which I won't use except for testing).
Installing a serial console
Another point to note regarding the connectors is that there is no externally accessible serial console, though there's a pin header on the board next to the micro-SD connector. Serial consoles are still needed in the Arm world because of the boot loaders. While you can often do most of the day-to-day operation using an HDMI display and a USB keyboard, each time the machine fails to boot, the only option is to pick up the screwdriver, open the box and connect a UART adapter inside to fix the boot problem. This tends to be less of a problem with systems adopting the Arm SystemReady approach, which provides a PC-like UEFI BIOS where you can really control everything from the early boot stages, and normally don't have to fiddle with low-level commands just to load a recovery kernel. Here there's no UEFI at the moment, so the only recovery option is the serial port. And in general I don't like the idea of having to plug/unplug a screen connector and move it around the rack between all my machines; it even happens quite often that a sick machine fails to enable the frame buffer and display anything. A USB serial console is much easier to use and allows multiple machines to be cross-connected, so that all the machines in the rack are accessible at the same time from the same display.
I noticed that the connector has the same pinout as more and more boards I'm seeing these days, so I could reuse an adapter I had prepared for another board. This one is cheap and based on a CH340N. It's tiny, only requires soldering 3 wires, doesn't cause trouble when not powered, and supports Rockchip's speed of 1.5 Mbps.
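For reference, attaching to the console from the PC side is then a one-liner; the device node below is just whatever the CH340 enumerates as on your machine:
# open the console at Rockchip's 1.5 Mbps rate (device node may differ)
picocom -b 1500000 /dev/ttyUSB0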
Surprise of the power connector
When trying to plug in the 12V adapter cable that was lying on my desk, I noticed it wouldn't fit. I looked closer and saw a huge central pin. Grrr... it's a 2.5mm one. These are really, really not common. I checked all my adapters here (about 20, all voltages included): none of them had a 2.5mm connector, all were 2.1mm. Fortunately I had a 2.5mm male jack and a 2.1mm female one, so I could make an adapter to connect the 12V power brick.
I suspect that the reason for using a larger connector than usual is to make sure users don't accidentally plug in a 19V laptop power supply. That's understandable of course. Another option could have been to make the board accept a wider input voltage range. Some PC boards do this: for example some will take 8 to 25V, and only need more than 12V if they really have to deliver 12V. Most of the time the 12V rails are not used anyway; they're basically only needed for SATA spinning disks, and most often not even for SSDs.
First power up
Once plugged in, the console immediately shows that it sees 8 GB of LPDDR5 DRAM installed as 4 banks of 2 GB each and that the speed is configured to 2400 MHz, hence 4800 MT/s:
DDR 9fffbe1e78 cym 24/02/04-10:09:20,fwver: v1.16
LPDDR5, 2400MHz
channel[0] BW=16 Col=10 Bk=16 CS0 Row=16 CS=1 Die BW=16 Size=2048MB
channel[1] BW=16 Col=10 Bk=16 CS0 Row=16 CS=1 Die BW=16 Size=2048MB
channel[2] BW=16 Col=10 Bk=16 CS0 Row=16 CS=1 Die BW=16 Size=2048MB
channel[3] BW=16 Col=10 Bk=16 CS0 Row=16 CS=1 Die BW=16 Size=2048MB
Manufacturer ID:0x6
CH0 RX Vref:29.7%, TX Vref:19.0%,0.0%
CH1 RX Vref:31.0%, TX Vref:19.0%,0.0%
CH2 RX Vref:31.8%, TX Vref:20.0%,0.0%
CH3 RX Vref:28.5%, TX Vref:20.0%,0.0%
change to F1: 534MHz
change to F2: 1320MHz
change to F3: 1968MHz
change to F0: 2400MHz
The system then boots into a pre-installed Debian 11 image and presents a login prompt showing that the host name is "roobi". The console is polluted a bit by some Bluetooth messages (there's no BT device on this board):
[ 24.409994] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
[ 24.463642] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
[ 24.685977] Bluetooth: hci0: command 0xfc18 tx timeout

Debian GNU/Linux 11 roobi ttyFIQ0

roobi login: [ 27.981985] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
[ 32.792957] Bluetooth: hci0: BCM: failed to write update baudrate (-110)
[ 32.793062] Bluetooth: hci0: Failed to set baudrate
[ 34.926158] Bluetooth: hci0: command 0x0c03 tx timeout
[ 42.819615] Bluetooth: hci0: BCM: Reset failed (-110)

roobi login:
At this point I tried many combinations of "root/root", "rock/rock", "radxa/radxa", "roobi/roobi", but none worked, so I looked on the net and couldn't find any relevant info. When rebooting I noticed that U-Boot proposes a second boot choice:
U-Boot menu
1:      Debian GNU/Linux 11 (bullseye) 5.10.110-33-rockchip
2:      Debian GNU/Linux 11 (bullseye) 5.10.110-33-rockchip (rescue target)
Enter choice: 2
But the result is basically the same, no valid login/password found:
Cannot open access to console, the root account is locked.
See sulogin(8) man page for more details.

Press Enter to continue.

[ 24.167338] Bluetooth: hci0: command 0xfc18 tx timeout
[ 32.060943] Bluetooth: hci0: BCM: failed to write update baudrate (-110)
[ 32.061077] Bluetooth: hci0: Failed to set baudrate
[ 34.194156] Bluetooth: hci0: command 0x0c03 tx timeout
[ 42.087379] Bluetooth: hci0: BCM: Reset failed (-110)
[ 87.668622] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
[ 87.735524] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)

Debian GNU/Linux 11 roobi ttyFIQ0

roobi login: [ 91.286888] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
Request for help and first surprises
Since the board is pretty new, I suspected that the login/password was well known to a few users but not yet put into any easy-to-find documentation. I knew that a few other people had got their hands on this board as well, so I asked for help on the Radxa forum. Thomas Kaiser responded, suggesting that apparently I was not supposed to log in there, because this "roobi" image is in fact an installer that's supposed to be used via a keyboard+display or via a browser. Some doc for it is found here.
A first feeling of over-engineering and needless complexity started to build up. Usually it's as simple as downloading an image of your choice, writing it to a micro-SD or USB thumb drive, plugging it in and booting off it, and you're done. I generally don't like installers: they tend to half-work, not let you decide what or how you install, or easily leave the system in an unrecoverable state. Thomas found in the image build scripts that a user "ps/ps" is created.
I tried it and this user worked, even though it shows some syntax errors in some scripts:
roobi login: ps
Password:
Linux roobi 5.10.110-33-rockchip #65700d485 SMP Wed Apr 3 04:26:57 UTC 2024 aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri May 17 19:29:29 UTC 2024 on tty1
-bash: [: : integer expression expected
-bash: [: : integer expression expected
-bash: [: : integer expression expected
ps@roobi:~$
This account is allowed to sudo and from there it's possible to create new users and install packages, so I could run some frequency tests and DRAM latency tests locally. BTW, sshd is already present and enabled, and gcc is there as well.
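To give an idea, the preparation amounts to something like this (the user name and packages are only examples, not the exact commands I typed):
# create a regular user and install basic build tools
sudo useradd -m -s /bin/bash testuser
sudo passwd testuser
sudo apt update
sudo apt install -y build-essential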
After all these tests were done, I tried to connect with a browser to this machine's IP on port 80. I was presented with an installation screen that required me to make contact on the power-button pins 3 times to validate that I was on the right board:
Due to remote access to this device, authentication is required. After clicking the start button, please press the power button three times within 60 seconds to complete the verification.
Note that I still had a root shell on this machine over the network, so it's definitely not a security measure; most likely it's just a way to avoid mistakes when installing multiple boards in parallel.
From there I could choose to install one of two possible Debian images (how one is supposed to install other operating systems with this is unknown for now; maybe one needs to enter a specific URL, which is still complicated when you already have your image on an SD).
When choosing the installation target, the installer lists available block devices. Here, "no device available" is displayed.
Thomas suggested that Radxa did not intend for the eMMC to be usable by the end user, and instead wastes it hosting this absurdly huge installer (a full-fledged Debian distro). This is where it sounds absurd: they design a really great device, one that corresponds exactly to what one would expect from a server, with the right choice of connectivity, storage and extensions, but then someone comes along afterwards and says "no, please leave me the eMMC, I would like the installer to stay there forever so that I can spend my life reinstalling this board every day".
OK, I'm a bit sarcastic, but why suddenly ruin valid and efficient use cases for the sake of keeping an installer there that you'll need only once in the product's life? Micro-SD is made for this! There are so many owners of competing boards that lack eMMC and are asking for one that it's really not morally acceptable to sacrifice the eMMC for an unused installer.
Migration of the installer to SD
The eMMC's layout is the following:
- 0 to 16MB: U-Boot, before the first partition
- 16 to 32MB: partition 1, 16MB FAT, mounted in /config
- 32 to 332MB: partition 2, 300 MB EFI, unused for now
- 332MB to 7.3G: partition 3, 7G ext4, debian 11 for installer
More precisely it looks like this:
$ sudo fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 7.28 GiB, 7818182656 bytes, 15269888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 388778EF-E065-4B77-A699-D0C2B1832E62

Device          Start      End  Sectors Size Type
/dev/mmcblk0p1  32768    65535    32768  16M Microsoft basic data
/dev/mmcblk0p2  65536   679935   614400 300M EFI System
/dev/mmcblk0p3 679936 15269854 14589919   7G EFI System
Let's have a look at what options U-Boot offers us. For this, we need to get into U-Boot. It can be interrupted during the first second, while it lists the available system images; just pressing Ctrl-C returns to the U-Boot prompt:
=> printenv
... boot_targets=nvme0 mmc1 mmc0 mtd2 mtd1 mtd0 usb0 pxe dhcp
...
distro_bootcmd= ... for target in ${boot_targets}; do run bootcmd_${target}; done
OK, so the boot loader will check mmc1 (micro-SD) before mmc0 (eMMC). Let's just copy the eMMC to an 8 GB micro-SD and try again. The boot loader will still be loaded from the eMMC (first 16 MB), and the kernel from the SD. We'd like the OS to load from the SD as well, so we'll need to change the UUID on the SD and adjust it in the extlinux.conf file so that only the SD is used. Let's reboot to the installer, insert a micro-SD of at least 8 GB, and prepare it this way:
roobi login: ps
Password:
ps@roobi:~$ sudo -s
# eval $(blkid -o export /dev/mmcblk0p3)
# echo $UUID
b055efba-0f72-448b-927c-07f40f2714c8
# NEW_UUID=$(cat /proc/sys/kernel/random/uuid)
# echo $NEW_UUID
5ee2e87c-0db4-4235-a5ec-aabe607d9c48
# dd if=/dev/mmcblk0 of=/dev/mmcblk1 bs=1M status=progress
# cat /proc/partitions
# e2fsck -f /dev/mmcblk1p3
# tune2fs -U $NEW_UUID /dev/mmcblk1p3
# mount /dev/mmcblk1p3 /mnt/
# sed -i -e "s/$UUID/$NEW_UUID/g" /mnt/boot/extlinux/extlinux.conf
# umount /mnt/
# reboot
Now the system reboots from the micro-SD. Logging into the system shows that it has only mounted the SD (mmcblk1) and not the eMMC (mmcblk0).
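A quick way to double-check this, for the curious (nothing board-specific here):
# verify that the root filesystem is now on the SD card, not the eMMC
findmnt /
lsblk -o NAME,SIZE,MOUNTPOINT /dev/mmcblk0 /dev/mmcblk1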
Installation to eMMC
Using the browser again to connect to the installer shows that now it properly lists /dev/mmcblk0 as an available installation device. Yay!
However when trying to continue, it starts to scare you by suggesting that everything will be wiped:
This made me hesitate for a while, but I figured it wouldn't make sense to wipe the boot loader area of a target device from which the system later hopes to possibly boot (e.g. a micro-SD). So I finally confirmed, and it installed. It took a few minutes, after which the board automatically rebooted. Of course, since I had left the SD card in, it booted on the installer again, but at least this showed me that it hadn't wiped the boot loader. Removing the SD card and booting again this time ended up with the Debian 11 prompt corresponding to the new image:
Debian GNU/Linux 11 rock-5-itx ttyFIQ0
roobi login: rock
Password:
Linux rock-5-itx 5.10.110-33-rockchip #65700d485 SMP Wed Apr 3 04:26:57 UTC 2024 aarch64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
rock@rock-5-itx:~$
The contents of /etc/passwd show that radxa/radxa is also valid. It's nice to see that the distro is not too fat and that there's plenty of room left to install whatever one wants to install on their NAS:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 3929848 0 3929848 0% /dev
tmpfs 813580 1700 811880 1% /run
/dev/mmcblk0p3 7116452 1396500 5393508 21% /
tmpfs 4067900 0 4067900 0% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
/dev/mmcblk0p1 16112 1 16111 1% /config
tmpfs 813580 4 813576 1% /run/user/1001
Phew... that was a bit more complex than usual and than needed, but it was worth it in the end. Now we have the target operating system set up and running on the board. One annoying point with such an installer is that it copies a pre-installed distro to your system: you're not offered the choice of the filesystem layout, for example. It's as if you had just used "dd" to dump the .img to the target device.
In fact, what such an installer should do is let you select an installation image for the distro of your choice; and the installer itself should live on a micro-SD that you boot from to install onto the target device(s).
Connecting SATA SSDs
For a NAS, one needs storage devices and cabling. I didn't expect to receive the board that fast and thought I didn't have power cables. But after digging in my boxes, I managed to find enough cable converters to connect 4 devices. I did have 4 used 120 GB Intel SSDs. They're not extremely fast, but at least they work, so I started to hack with them. The result looks like an ugly octopus :-)
Running a simple basic test of all disks at once
I intend to use these SSDs in RAID5, as an NFS server doing mostly reads, though writes will be needed as well of course. Let's first see what the raw devices are capable of in terms of read speed. For this I'm just running vmstat 1 while reading from 1 disk first, then from all 4 disks at once:
# dd if=/dev/sda of=/dev/null bs=1M
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 7851296 8552 95092 0 0 0 0 110 114 0 0 100 0 0
1 0 0 7625296 233644 95240 0 0 224800 0 972 757 0 5 95 0 0
0 1 0 7301748 556392 95900 0 0 323072 0 1283 1053 0 5 95 1 0
1 0 0 6977948 879464 96804 0 0 323072 0 1289 1073 0 5 95 1 0
0 1 0 6653140 1203560 97540 0 0 324096 0 1301 1080 0 5 94 1 0
0 1 0 6328852 1527144 98204 0 0 323584 0 1289 1084 0 5 95 1 0
# for i in a b c d; do dd if=/dev/sd$i of=/dev/null bs=1M & done
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 7848444 8548 93428 0 0 0 0 97 116 0 0 100 0 0
0 4 0 6773340 1082724 95060 0 0 1074224 0 3942 3252 0 17 53 30 0
1 3 0 5573384 2279780 97580 0 0 1197056 0 3932 3607 0 8 49 42 0
1 3 0 4365080 3485540 100276 0 0 1205760 0 3962 3657 0 7 50 43 0
2 2 0 3155736 4696504 103184 0 0 1210880 0 4055 3668 0 10 49 41 0
0 4 0 1927256 5922148 105972 0 0 1225728 0 4043 3705 0 8 50 42 0
0 4 0 711880 7135076 108428 0 0 1212928 0 4010 3670 0 9 49 42 0
3 1 0 294572 7548632 110632 0 0 1211904 0 4194 3993 0 11 49 40 0
This test was running on the big cores, which were mostly waiting for the disks (42%) and using little CPU (8-10%). Running on the little cores instead showed almost the same performance:
# for i in a b c d; do taskset -c 0-3 dd if=/dev/sd$i of=/dev/null bs=1M & done
$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 7851584 8548 94188 0 0 0 0 98 102 0 0 100 0 0
2 2 0 7679972 179556 93864 0 0 171076 0 819 530 0 10 89 1 0
1 3 0 6508936 1348452 96700 0 0 1168896 0 3931 3619 0 19 72 8 0
3 1 0 5343568 2511204 98952 0 0 1162752 0 3852 3595 0 17 74 9 0
3 1 0 4177208 3674548 101952 0 0 1163264 0 3837 3617 0 18 73 9 0
4 0 0 2993412 4855344 104268 0 0 1180672 0 3951 3660 0 19 72 9 0
0 4 0 1807820 6038372 107168 0 0 1183232 0 3953 3650 0 18 73 9 0
1 3 0 630820 7212876 109680 0 0 1174016 0 3931 3663 0 19 72 9 0
Here the CPU load is approximately twice as high and iowait is much smaller, indicating that there's less margin on the CPU. But the performance is almost the same, at 1.17 GB/s vs 1.21 GB/s for the big cores.
One will note that this bandwidth is slightly lower than the sum of all 4 SSDs (4*323 = 1292 MB/s vs 1210 MB/s measured). Could we be hitting a wall? Note that for 10 GbE it's fine, because that's slightly above the limit of what one can transfer over TCP at 10 Gbps. But it's still interesting to know.
The SATA controller is connected using 2 PCIe 3.0 lanes. Each lane is running at 8 GT/s, encoded using 128/130 coding (128 bits transported over 130 bits). The MaxPayload is at the minimum, 128 bytes:
root@rock-5-itx:~# lspci -nnvv -s 0001:11:00.0 | grep -A2 Ctl:
DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 256 bytes
--
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 8GT/s (ok), Width x2 (ok)
This is a 64-bit system, so the PCIe overhead is 26 bytes per TLP. Thus each transfer of 128 bytes requires an extra 26 bytes, that's 154 bytes total, or a transfer efficiency of 83.1%, which becomes 81.8% once the wire encoding is taken into account. Thus the absolute maximum bandwidth through the SATA controller, not taking commands and control traffic into account, is 81.8% of 2*8 GT/s = 13.09 Gbps, i.e. 1.63 GB/s or 1.52 GiB/s. We're not there yet, only at ~75%, but the margin is small. Anyway, as previously said, we're already reaching the speed that a 10 GbE Ethernet adapter can deliver, so nothing is lost here, and the choice of assigning 2 lanes to the SATA controller was the best one.
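For those who want to redo the math, the whole computation fits in one line (this is just a restatement of the numbers above):
# PCIe 3.0 x2, 128-byte payloads, ~26 bytes of TLP overhead, 128b/130b encoding
awk 'BEGIN { eff = 128/(128+26) * 128/130; g = eff * 2 * 8;
             printf "efficiency=%.1f%%  max=%.2f Gbps = %.2f GB/s\n", eff*100, g, g/8 }'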
Connecting a 10 GbE NIC
For a 10 GbE NAS, one needs a 10 GbE NIC. I already had such a device that I had bought for testing the ROCK 5B, and knew that it now works pretty well. The NIC fits nicely into the M.2 adapter, which comes with a fixture allowing the RJ45 port to occupy a regular slot:
This NIC is built around an Aquantia AQC107 chip and requires the module called "atlantic", enabled via "CONFIG_AQTION" under "CONFIG_NET_VENDOR_AQUANTIA". Unfortunately it's not enabled in the default kernel. I needed to rebuild the kernel, but it's a bit complicated to find the relevant sources. The wiki says that there's a bsp tool used to download, patch and build the kernel. I already have tons of kernel versions here ("git branch | wc -l" reports more than 400), and having to go through such a process when you know you already have most of it, and having to learn yet another tool, is not the most developer-friendly solution IMHO.
That said, at least "bsp" is a clean and readable script; it's not one of those Python horrors that require downloading half of GitHub. But it's still annoying to have to read a script to figure out which directory's config file to read to find the kernel's URL. I found a few candidate URLs, one of them being Joshua Riek's rk-5.10-rkr6. (I later found that it was apparently linux-5.10-gen-rkr3.4 from Radxa's repo that ought to be used.)
I also noticed a 6.1 kernel ("linux-6.1-stan-rkr1"). I tried this one first, at least to gauge the progress of the port to newer kernels. It properly detected the 10GbE NIC but not the SATA; there were PCIe errors, and I'm wondering whether it properly applies the bifurcation needed to see two distinct devices. It was a bit late at night, so I quickly gave up. Instead I tried "rk-5.10-rkr6", which is based on 5.10.160 (vs 5.10.110 for rkr3.4), and everything works there, both the SSDs and the 10GbE NIC!
BTW for those interested in using the repositories above, issuing "make rockchip_defconfig" is all you need to do to configure the base kernel (after setting your ARCH and CROSS_COMPILE as usual, of course).
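As an illustration, assuming a typical aarch64 cross-toolchain, the whole sequence to rebuild with the atlantic driver enabled looks roughly like this (not the exact commands performed by the bsp tool):
# cross-build the BSP kernel with the Aquantia atlantic driver enabled
export ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
make rockchip_defconfig
./scripts/config --enable NET_VENDOR_AQUANTIA --module AQTION
make olddefconfig
make -j$(nproc) Image modules dtbs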
Basic testing of the network speed
I'm used to measuring the bit rate using the if_rate utility, which I adopted many years ago after it appeared unmaintained. It's simple and convenient, and supports a vmstat-like output that eases monitoring and logging during tests:
$ git clone https://github.com/wtarreau/if_rate
$ cd if_rate
$ make
$ ./bin/if_rate -l -i enp1s0 1
# time enp1s0(ikb ipk okb opk)
1716215181 0.0 0.0 0.0 0.0
1716215182 0.0 0.0 0.0 0.0
1716215183 0.0 0.0 0.0 0.0
1716215184 0.0 0.0 0.0 0.0
1716215185 28290.2 53104.4 3698500.4 305417.7
1716215186 89379.8 168813.3 9974752.8 823541.1
1716215187 90188.3 170637.7 9940160.9 820688.8
1716215188 90498.4 171225.5 9974026.0 823474.4
1716215189 89900.4 170266.6 9974559.0 823537.7
1716215190 90303.3 171029.9 10016757.3 827012.2
1716215191 88657.9 167802.2 9941214.6 820773.3
1716215192 90085.0 170447.7 9974735.9 823542.2
1716215193 90063.5 170265.5 10006728.4 826183.3
In parallel, a simple netcat of /dev/zero was being sent to another 10GbE machine on the network. So the NIC is properly able to saturate the wire on output, which is what we needed to check.
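For the record, the test boils down to something like this (address and port are arbitrary, and netcat flavors differ slightly in their listen syntax):
# on the receiving 10GbE machine:
nc -l -p 4000 > /dev/null
# on the ROCK 5 ITX:
nc 192.168.120.2 4000 < /dev/zero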
Installing everything in an enclosure
Choice of enclosure
One of my ancient file servers was based on an Atom D510 and installed in a really nice APLUS CS-CUPID 2 ITX enclosure, so I decided to remove the old board and reuse this perfect enclosure:
Installing the SSDs
The enclosure comes with a frame that carries up to two 3.5" disks, so I needed a way to attach all four 2.5" SSDs to it. Since I had already done something similar in the past, it was easy to replicate. I measured that everything would fit with a 14mm pitch between devices, so I used my laser cutter to make a support:
Fixing the serial port
The serial port was attached thanks to a metal bracket to which I screwed an unused piece of PCB, on top of which the adapter was stuck using double-sided tape (it's not very clean but it works):
Final assembly
The final result looks like this, with the board, the 4 SSDs, the 10GbE NIC and its connector, and the console connector at the bottom. The enclosure's power board is not used for now.
Running a full load test
Now that we have all the components together, let's see what the whole system is capable of. The aim is ultimately to turn this into a real NAS server that will replace my local server currently running on an Odroid-H3, once it works fine with a mainline kernel. All measurements to date indicate that the device should be 10G-capable.
I didn't want to go into setting up NFS etc. So in order to verify the board's ability to pull 10G from the disks to the network, I simply installed NGINX, created a RAID5 array on the disks, and created a set of 32 files of 1GB each so that the total working set cannot fit into memory (8G).
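The preparation amounts to something like the following; device names, mount point and file names are illustrative rather than the exact commands used:
# build the RAID5 array over the four SSDs and fill it with 32 x 1 GB files
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
mount /dev/md0 /var/www/html
for i in $(seq 1 32); do
    dd if=/dev/urandom of=/var/www/html/file$i.bin bs=1M count=1024
done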
On the other machine (a Core2 Quad with a similar NIC), I started 32 instances of h1load, each requesting its own file in a loop over a single connection (to avoid any risk of reuse). In parallel I collected the CPU usage, the network bit rate and the I/O rate (which I converted to bits per second for the scale, by multiplying by 8). This gives the graph below (I stopped the test after around 4 minutes):
The fact that the disk I/O is always a bit lower than 10G is partly due to Ethernet, IP and TCP overhead (94.1% efficiency, i.e. 1448 bytes of payload per 1538 bytes on the wire with a 1500-byte MTU), and partly because, depending on the load order, some pieces may occasionally still be in cache. Regardless, the 10G link stayed completely full, and the CPU was at less than 50%, which indicates quite some headroom. More modern SSDs than these old ones would probably also do a better job at keeping the link full.
Power measurements
Note that the following measurements were already reported on the Radxa forum here. I finally replaced the SSDs with slightly more recent ones, 2x Intel X25-M 160 GB and 2x Intel 530 180 GB, and conducted power measurements using various methods:
- feeding 12V into the motherboard's jack
- feeding 12V into the enclosure's power board (I only later noticed that it's supposed to be fed 19V, it should be tested again)
- feeding 12V into an aliexpress 12V jack-to-ATX "160W" adapter:
The power was measured using an ammeter in series with the 12V connector, and a voltmeter in parallel, as close as possible to the connector:
Here are the measurements:
- 12V via the AliExpress ATX 160W adapter: 1.06A * 11.83V = 12.54W
- 12V via the enclosure’s adapter board: 1.16A * 11.95V = 13.86W
- 12V via the motherboard’s jack, ATX adapter still connected: 1.15A * 11.95V = 13.74W
- 12V via the motherboard’s jack, ATX unplugged: 1.02A * 11.96V = 12.20W
Thus the board’s power design looks extremely efficient, beating the other two. There’s 1.7W saved here by powering the board via its own jack instead of the enclosure’s adapter.
In addition I measured the individual power draw of various components, all powered from the motherboard’s jack since it's the most efficient:
- removed all SSDs: 0.81A * 12.03V = 9.74W => 2.45W drawn by the 4 SSDs at idle
- no SSD, 10GbE link down: 0.59A * 12.14V = 7.16W => the 10GbE RJ45 link alone draws 2.58W
- no SSD, 10GbE adapter removed: 0.40A * 12.21V = 4.88W => the 10GbE adapter draws 2.28W with the link down, and 4.86W with the link up
Testing mainline kernel
According to Collabora's page, the SoC is now quite well supported in mainline. I'm interested in seeing how mainline behaves on this board, because in my opinion using a BSP kernel is a showstopper for storing data. So I gave kernel 6.9 a quick try, but found no working DTB for now: the kernel loads but not a single message is emitted. Usually this indicates a console mapped to a wrong address, or a missing node for the UART. I tried to reuse a ROCK 5B DTS to see if it made any difference, but no, everything seems to hang. I'll have to investigate later.
PCIe limitations
The organization of the PCIe lanes around the SoC on this board is really great, I should even say optimal. 2 Gen3 lanes are used for SATA, two for M.2, and each 2.5GbE port uses one Gen2 lane.
This means that M.2 and SATA get the same bandwidth, so as long as the CPU is able to move the bytes, there's enough bandwidth to use the M.2 slot for the 10GbE network adapter.
However I noticed that the MaxPayload is set to 128 bytes on all devices while they are all capable of 256:
root@rock-5-itx:~# lspci -vv | grep MaxPayload | cut -f1 -d,
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 512 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
All the "DevCap" lines indicate what the device is capable of. The other ones indicate what was negotiated. 128 bytes cause an efficiency of 81.8% (128/(128+26)*128/130), while 256 bytes would reach 89.4% (256/(256+26)*128/130) and achieve a 9.2% performance increase. It might be worth finding what is causing this limitation.
Closing words
That's all for now. This board is amazing from a hardware perspective. First, it looks extremely clean and well designed. Second, its I/O distribution makes optimal use of the SoC's capabilities. The SoC doesn't heat up that much, and I managed to leave the fan disconnected during operation, which will be its target state anyway. The onboard DC-DC converters show a much higher efficiency than the two other options I tested, which also indicates a choice of great components.
I missed a reset button on the board, and a USB console connector on the back (there's not much room for this at the back, but maybe some combo connectors now exist with an extra USB-C port that could appear above or below the RJ45 connectors for example, or maybe atop the existing USB-C one). If/when the board adopts a UEFI firmware (the SPI NOR still remains empty), then the console will no longer be needed.
One point that I really disliked is the annoying Roobi installer, which made everything more complicated than usual. Furthermore, the fact that it confiscates the only storage suitable for hosting an operating system seriously needs to be revisited. This is a totally bogus choice. I'm definitely not going to install the OS on a micro-SD; that's the place for an installer. And I'm not going to put the OS on a data disk either: having dealt with that painful experience in the past, which makes it super complicated to exchange data disks when trying to recover data or just for a migration, I seriously don't want to do that again.
Thomas Kaiser showed me that Radxa recently merged a patch in their kernel to disable the eMMC and reserve it for the installer only. Not only does this make no sense, it simply voids the interest of the product for me, since it only leaves me the option of removing one data disk and cutting the I/O performance and storage capacity by one third. And by the way, the 16MB SPI NOR is still unused, and 16MB is plenty to store an installer; I'm personally stuffing full-featured OSes into that amount on other machines!
Just like with the ROCK 5B, I'll keep this device reserved for testing for as long as it doesn't have an LTS mainline kernel available (likely by the end of the year). I've ordered new SSDs to run better tests, and I'll have to run many more of them, and also test other distros (notably Slackware ARM64). I'll also evaluate how it deals with HTTP/HTTPS load balancing now that it has a good NIC ;-)