
2025-08-24

Redundant power supply for home servers and devices

A well-known story

All those who run a small infrastructure at home know this problem. It's Monday morning; while drinking your first coffee you check your e-mails and find that you received surprisingly few during the night, and that the last one arrived 5 hours ago. Then this well-known feeling starts to build up: "what is it this time? FS full, unregistered from all mailing lists at once, missed a domain renewal notification, or a machine died?". At this point you finish your coffee quickly, because you know it will cool down faster than you'll get the problem fixed, and you start pinging various machines and devices while chewing some biscuits, to discover (in no particular order):

  • a reverse proxy no longer responding
  • many machines no longer responding (likely indicating a switch issue)
  • the router not responding

Then you go to the office/lab/basement/garage/wherever the machines are located, and start debugging in your underwear, thinking that this is really not the best time because you had planned to arrive at work early to prepare some stuff before a meeting...

Finally the culprit is almost always the same: a dead power brick whose LED (if any) either blinks slowly, indicating a dead input capacitor, or doesn't light up at all anymore:

Then comes the moment of wiping the dust off the sticker to find the voltage and amperage, and opening all your drawers to find an almost equivalent one which should hopefully get the job done even if it's rated for half the amperage, because you know your devices are not pulling that much... And when you want to connect it, you notice that its connector has a 2.1mm inner diameter while the previous one was 2.5mm. But by forcing a lot you manage to establish contact, decide that it will be sufficient for the time it takes to order another power block, and you can finally go take your shower.

There is a variant to that story: you're working in your office, notice the lights flickering, and realize you just had a micro power outage on the mains. Most of your small servers didn't notice, but one wasn't that lucky and experienced a brownout. You decide that enough is enough, it's really time to connect all of them to the UPS, but the UPS has too few outlets, all of them C13, and you have to modify a pair of power strips to install a C14 connector on them in order to connect all your small devices:

And once that's done, the day your UPS fails to ride through a short power cut and you want to remove it, you discover that you have no other power strip with C13 outlets to which you can connect the C14-equipped strips, something like this one that I once had to build exactly because of this:


Things have only gotten marginally better with USB power delivery, because when a power brick dies it's often easier to find another one (though rarely a high-power one), or you can sometimes temporarily daisy-chain the device to another device (provided that it was not itself already daisy-chained).

Does all of this sound familiar?

Root cause 

The reason for all these problems is the multiplicity of low-power devices which all come with their own power block, each requiring a distinct mains outlet. And sometimes the angled ones cannot even be placed close to each other, and end up masking other outlets. You quickly end up with this for 12V supplies:

And USB is not much better, with everything having to live together on the same power strips:


Oh, and by the way, for USB there is nowadays something way more appealing: the ubiquitous multi-output QC charger:

Except you'll only try it once for servers, because you'll soon realize that it's a charger and not a power supply. The difference is that each time you connect or disconnect something from a port, it renegotiates the voltages with all the other ports, which are all cut for a second or two! That's absolutely not a problem when charging a laptop, but it is when you imagine powering multiple always-on devices from it.

The solution is in fact to set up a power distribution system which requires only one input. However, if that single input fails, it will be even worse, so it needs to be redundant. And if it's redundant, it can also be connected behind the UPS to get protection, as well as directly to the mains (or another UPS) to survive a UPS failure.

Design of a solution

In my case I counted the number of devices I would need to connect there. It's roughly 16 in the rack, counting servers and switches. The total power is very low, around 70W, which means that I can use fanless power supplies.

Most devices take 12V on input. Some others use micro-USB, and others USB-C.

I considered having multiple 12V power rails so that a short circuit would only affect a part of the components. I had found the perfect chip for that: the TPS259571DSGR. It's a really nice electronic fuse: it supports a programmable limit from 0.5 to 4A and automatically retries after a short. But it comes in a tiny 2mm-wide WSON package with pins spaced by 0.25mm, and after trying for a few hours to solder one onto a PCB I made specifically for it, I decided to postpone, because my PCB quality is not good enough: at this pitch you definitely need a solder mask or it quickly shorts. I have since ordered some DIL-to-WSON PCB adapters and will try again later. I would love to spot an equivalent in an SOP8 package! In the meantime I finally decided that all 12V connectors would be connected together and that this should be OK. I chose 5.5x2.1mm female jacks, for which I will make male-to-male cables that will connect to either 2.1 or 2.5mm depending on what is needed:

For the USB outputs, we now find a number of QC-compatible multi-port USB adapter boards like this one. They are in fact 4 independent power supplies connected to the same input. They're convenient because you can also use them to extract a fixed voltage (e.g. 12V) using a tiny adapter. I decided to use one to provide 4 USB-A ports and another one for 4 USB-C ports. On another project (controlled USB outputs) I had successfully stacked two USB-C ones, and that's what I initially intended to do here, but drilling holes is a real pain and I didn't need that many ports at the moment:

For the power supplies, I thought that blocks designed for LED lighting would be a good fit. These have a reputation for not being very well regulated, because they focus on power and not on a perfect voltage, but nowadays their regulation is pretty good: the voltage is usually accurate to +/- 5%, which is well within what 12V devices accept on input. Contrary to a PC power supply, which must deliver a very stable voltage, here the 12V output is never used as-is but passes through other DC-DC regulators, and usually anything between 10 and 14V will be OK. The advantage is that such power supplies are simple, small and very efficient, like the one below that can run fanless up to around 500W. I found various models here and here which looked appealing:

I decided to place the power switch after the PSU, not before, in order to isolate a faulty one from the system. The idea here is probably a consequence of the trauma of replacing faulty power supplies: I want to be able to replace a dead PSU without turning everything off. The switch on the output makes it possible to isolate a PSU from the circuit and replace it.

Switching power supplies can be connected in parallel. But if for any reason one dies with its output shorted, it will bring the second one down with it. Also, there's no way to know that one is dead when they're in parallel. So I decided to think about a circuit that would connect them together (just two diodes) and also report which ones are working or not (an LED before each diode).
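As a rough sanity check on the diodes (assuming a Schottky forward drop of about 0.5V at this current, which is only an estimate): the ~70W load at 12V corresponds to 70 / 12 = 5.8A, so whichever diode is conducting dissipates around 5.8 * 0.5 = 2.9W, well within what a 20A-class part on generous copper can handle.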

My concern was to find diodes that could withstand high currents, but I didn't have much difficulty finding 20A diodes and settled on the 20SQ045. And since I didn't want to have many LEDs on the front, I had fun scratching my head a little bit to combine colors on common-cathode RGB LEDs in order to report the various possible states among:

  • all down (off)
  • this PSU is unconnected or dead (red)
  • this PSU is connected but not enabled, no output (blue)
  • this PSU is connected but not enabled, output from other PSU (purple)
  • this PSU is connected and enabled but diode is dead (cyan)
  • this PSU is not connected and diode is short-circuited (orange) 
  • this PSU is connected and delivering power (green)

All this only with passive components. The final diagram is here:

and the trivial PCB here:

Both can be downloaded in Eagle format from my GitHub repository.

I just decided to place everything on the copper side so that I could leave it flat on the bottom of the enclosure.

Construction

Once I received all the components, I started assembling everything. As usual for me, the most difficult part is dealing with the hardware (drilling, cutting, etc). I think I did reasonably well overall on this one, without even scratching the front panel:




OK, the holes for the jacks could have been better centered...

It's made of two aluminum angle profiles constituting the front and back panels, screwed to an MDF plate. The MDF is interesting because it's an insulator, and also because it's easier to cut than a metal plate:

The PCB was made with my laser engraver, with all components soldered on the copper side. The power diodes had their legs rolled up, as this slightly helps spread the heat if needed. The copper pads were made large to withstand high currents and to allow being generous with the solder for the large wires:




All the cabling was done using 2.5mm² wire made for house wiring. It is rated for 16A at 250V, which means it will not heat up enough to melt the insulation even over many meters inside your walls. Here, over short distances like these, it will barely get warm even at 20A. And I don't intend to reach 20A anyway. The advantage of using such wires is that they're rigid and make excellent contact on solder joints. Two of them were stripped and used as bus bars on the jack connectors. Overall I find that the result is not bad at all:

 

Tests

Tests were reasonably simple: I just went through all 4 combinations of on/off states for the two switches, multiplied by the 4 combinations of on/off states for the two mains inputs. I could confirm that the colors are as intended (not well rendered on the photo):






Installation

I initially planned on installing this horizontally in my rack, but found that it was even better vertically on its external side. It allows me to see the LEDs, helps with cable distribution, improves ventilation, and eases operation and checks if ever needed, though for now both power blocks remain cold to the touch:

I have not checked whether the overall power consumption has dropped or not. It is quite possible, since every power block has a minimum standby draw, at least to power its oscillator circuitry, but that should be marginal. What could make a bigger difference is the higher conversion efficiency expected from such high-power blocks, which can reach 92-94%, compared to very low-power ones which rarely aim beyond 70-80%. Anyway I'm not going to reconnect everything just to check!
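Just as a back-of-the-envelope estimate (assuming the old bricks averaged around 75% efficiency and the new blocks reach 93%): delivering 70W would draw about 70 / 0.75 = 93W from the wall with the bricks, versus 70 / 0.93 = 75W with the new blocks, i.e. roughly 18W, or on the order of 160 kWh per year.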

Amusingly, I initially connected the two inputs to the same UPS, and forgot about it the day I decided to turn the UPS off for a repair... That's when I decided that only one input would be connected to the UPS and the other one directly to the mains. It's also convenient to use colored tape on your power strips to indicate which ones are UPS-protected and which ones are not. I'm using red for UPS and blue for mains.

Now let's see how long my devices stay up.

2024-05-21

An affordable 10GbE capable NAS

Background

Two weeks ago, in early May, Tom Cubie of Radxa asked me if I would be interested in testing their new ROCK 5 ITX board. I had followed the development of this really nice board a little bit, and Tom probably valued some of my tests and comments during the ROCK 5B debug party. He also knows I'm not shy when I disagree with certain choices, and he confirmed he wasn't seeking a complacent review at all, so that was enough for me to accept the offer, because I'm one of those who believe that making issues public is the most efficient way to collectively find the best solution to them.

First impressions

Unpacking

A week later I received a parcel from DHL. First impression: the board is packaged just like a PC motherboard. There's a rationale for this: it's an ITX form factor, aimed at being installed in a PC enclosure. The customer must feel in familiar territory when installing it ;-)



Front panel and buttons

Like on a PC motherboard, there's a front-panel connector with power switch, power LED, reset switch and HDD LED. These are nice improvements, since many Arm-based boards are missing a reset button, which would be appreciated during development and kernel porting. However, an extra on-board button could have been placed to the right of the SPDIF connector for debugging periods when the board is left on a desk without anything connected to this front-panel connector. That's no big deal since a screwdriver suffices to make contact with the pins, but it would be cleaner. Some PC motherboards have adopted this principle nowadays, by the way:

Installing a cooling solution

One excellent design choice on this board concerns the cooling. Radxa adopted a design compatible with LGA115x thermal solutions. This means that instead of having to resort to a mix of inefficient and inconvenient solutions, as is often the case, simply reusing an old PC heat sink will work. I found one from an old 1U server in a drawer and decided to install it. The board even has the PWM pin to control the fan's speed (which I won't use except for testing).

Installing a serial console

Another point to note regarding the connectors is that there is no externally accessible serial console, though there's a pin header on the board next to the micro-SD connector. Serial connectors are still needed in the Arm world because of the boot loaders. While you can often do most of the day-to-day operation using an HDMI display and a USB keyboard, each time the machine fails to boot, the only option is to pick up the screwdriver, open the box and connect a UART adapter inside to fix the boot problem. This tends to be less of a problem with systems adopting the Arm SystemReady approach, which provides a PC-like UEFI BIOS where you can really control everything from the early boot, and normally don't have to fiddle with low-level commands just to load a recovery kernel. Here there's no UEFI at the moment, so the only recovery option is the serial port. And in general I don't like the idea of having to plug/unplug a screen connector and move it around the rack between all my machines; it even happens quite often that a sick machine fails to enable the frame buffer and display anything. The USB serial console is much easier to use and allows multiple machines to cross-connect, so that all your machines in the rack are accessible at the same time from the same display.

I noticed that the connector has the same pinout as more and more boards I'm seeing these days, so I could reuse an adapter I had prepared for another board. This one is cheap and based on a CH340N. It's tiny, only requires soldering 3 wires, doesn't cause trouble when not powered, and supports Rockchip's speed of 1.5 Mbps.
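For reference, once such an adapter enumerates as a USB serial port (e.g. /dev/ttyUSB0, the device name being an assumption here), attaching to the console at Rockchip's speed is just a matter of something like:

picocom -b 1500000 /dev/ttyUSB0

(or "screen /dev/ttyUSB0 1500000" if you prefer).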

Surprise of the power connector

When trying to plug in the 12V adapter cable that was lying on my desk, I noticed it wouldn't fit. I looked closer and saw a huge central pin. Grrr... it's a 2.5mm one. These are really not common. I checked all my adapters here (about 20, all voltages included): none of them had a 2.5mm connector, all were 2.1mm. Fortunately I had a 2.5mm male jack and a 2.1mm female one, so I could make an adapter to connect the 12V power block.

I suspect that the reason for using a larger connector than usual is to make sure users don't accidentally connect a 19V laptop power supply. That's understandable of course. Another option could be to make the board accept a wider input voltage range; some PC boards do this. For example, some boards will take 8 to 25V on input, and only need more than 12V if they actually have to deliver 12V themselves. Most of the time the 12V rails are not used anyway: they basically only serve SATA spinning disks, and most often not even SSDs.


First power up

Once plugged in, the console immediately shows that it sees 8 GB of LPDDR5 DRAM installed as 4 banks of 2 GB each and that the speed is configured to 2400 MHz, hence 4800 MT/s:

DDR 9fffbe1e78 cym 24/02/04-10:09:20,fwver: v1.16
LPDDR5, 2400MHz
channel[0] BW=16 Col=10 Bk=16 CS0 Row=16 CS=1 Die BW=16 Size=2048MB
channel[1] BW=16 Col=10 Bk=16 CS0 Row=16 CS=1 Die BW=16 Size=2048MB
channel[2] BW=16 Col=10 Bk=16 CS0 Row=16 CS=1 Die BW=16 Size=2048MB
channel[3] BW=16 Col=10 Bk=16 CS0 Row=16 CS=1 Die BW=16 Size=2048MB
Manufacturer ID:0x6
CH0 RX Vref:29.7%, TX Vref:19.0%,0.0%
CH1 RX Vref:31.0%, TX Vref:19.0%,0.0%
CH2 RX Vref:31.8%, TX Vref:20.0%,0.0%
CH3 RX Vref:28.5%, TX Vref:20.0%,0.0%
change to F1: 534MHz
change to F2: 1320MHz
change to F3: 1968MHz
change to F0: 2400MHz

The system then starts to boot a pre-installed Debian 11 image and presents a login prompt showing that the host is called "roobi". The console is polluted a bit by some Bluetooth messages (there's no BT device on this board):

[   24.409994] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
[   24.463642] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
[   24.685977] Bluetooth: hci0: command 0xfc18 tx timeout

Debian GNU/Linux 11 roobi ttyFIQ0

roobi login: [   27.981985] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
[   32.792957] Bluetooth: hci0: BCM: failed to write update baudrate (-110)
[   32.793062] Bluetooth: hci0: Failed to set baudrate
[   34.926158] Bluetooth: hci0: command 0x0c03 tx timeout
[   42.819615] Bluetooth: hci0: BCM: Reset failed (-110)

roobi login:

At this point I tried many combinations of "root/root", "rock/rock", "radxa/radxa", "roobi/roobi", but none worked, so I looked on the net and couldn't find any relevant info. When rebooting I noticed that U-Boot offers a second boot choice:

U-Boot menu
1:	Debian GNU/Linux 11 (bullseye) 5.10.110-33-rockchip
2:	Debian GNU/Linux 11 (bullseye) 5.10.110-33-rockchip (rescue target)
Enter choice: 2

But the result is basically the same, no valid login/password found:

Cannot open access to console, the root account is locked.
See sulogin(8) man page for more details.

Press Enter to continue.
[   24.167338] Bluetooth: hci0: command 0xfc18 tx timeout
[   32.060943] Bluetooth: hci0: BCM: failed to write update baudrate (-110)
[   32.061077] Bluetooth: hci0: Failed to set baudrate
[   34.194156] Bluetooth: hci0: command 0x0c03 tx timeout
[   42.087379] Bluetooth: hci0: BCM: Reset failed (-110)


[   87.668622] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)
[   87.735524] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)


Debian GNU/Linux 11 roobi ttyFIQ0

roobi login: [   91.286888] dma-pl330 fea30000.dma-controller: fill_queue:2263 Bad Desc(2)

Request for help and first surprises

Since the board is pretty new, I suspected that the login/password were well known to a few users and not yet put into easy-to-find documentation. I knew that a few other people had gotten their hands on this board as well, so I asked for help on the Radxa forum. Thomas Kaiser responded, suggesting that I was apparently not supposed to log in there, because this "roobi" image is in fact an installer that's supposed to be used via a keyboard+display or via a browser. Some documentation for it can be found here.

A first feeling of over-engineering and needless complexity started to build up. Usually it's as simple as downloading an image of your choice, writing it to a micro-SD or USB thumb drive, plugging it in and booting off it, and you're done. I generally don't like installers, which tend to half-work, not let you decide what or how you install, and easily leave the system in an unrecoverable state. Thomas found in the image build scripts that a user "ps/ps" is created.

I tried it and this user worked, even though it shows some syntax errors in some scripts:

roobi login: ps
Password: 
Linux roobi 5.10.110-33-rockchip #65700d485 SMP Wed Apr 3 04:26:57 UTC 2024 aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri May 17 19:29:29 UTC 2024 on tty1
-bash: [: : integer expression expected
-bash: [: : integer expression expected
-bash: [: : integer expression expected
ps@roobi:~$

This account is allowed to use sudo, and from there it's possible to create new users and install packages, so I could run some frequency and DRAM latency tests locally. BTW, sshd is already present and enabled, and gcc is there as well.
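For example (the user name below is purely illustrative), getting a personal account with sudo rights boils down to:

sudo useradd -m -s /bin/bash demo
sudo passwd demo
sudo usermod -aG sudo demo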

After all these tests were done, I tried to connect with a browser to this machine's IP on port 80. I was presented with an installation screen that required me to make contact on the power button connector 3 times to validate that I was on the right board:

Due to remote access to this device, authentication is required.
After clicking the start button, please press the power button
three times within 60 seconds to complete the verification.

Note that I still had a root shell on this machine over the network, so it's definitely not a security measure, most likely it's just a way to avoid mistakes when installing multiple boards in parallel.

From there I could choose to install one of two possible Debian images (how one is supposed to install other operating systems with this is unknown for now; maybe one needs to enter a specific URL, which is still complicated when you already have your image on an SD).

When choosing the installation target, the installer lists available block devices. Here, "no device available" is displayed.

Thomas suggested that Radxa did not intend for the eMMC to be usable by the end user, and instead wastes it to host this absurdly huge installer (a full-fledged Debian distro). Now this is where it sounds absurd: they design a really great device, which corresponds exactly to the design one would expect for a server, with the right choice of connectivity, storage and extensions, but then someone comes along and says "no, please leave me the eMMC, I would like the installer to stay there forever so that I can spend my life reinstalling this board every day".

OK, I'm a bit sarcastic, but why suddenly ruin valid and efficient use cases for the sake of keeping an installer there that you'll need only once in the product's life? The micro-SD is made for this! There are so many owners of competing boards lacking eMMC who are asking to get one that it's really not morally acceptable to sacrifice the eMMC for an unused installer.

Migration of the installer to SD

The eMMC's layout is the following:

  • 0 to 16MB: U-Boot, before the first partition 
  • 16 to 32MB: partition 1, 16MB FAT, mounted in /config
  • 32 to 332MB: partition 2, 300 MB EFI, unused for now
  • 332MB to 7.3G: partition 3, 7G ext4, debian 11 for installer

More precisely it looks like this:

$ sudo fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 7.28 GiB, 7818182656 bytes, 15269888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 388778EF-E065-4B77-A699-D0C2B1832E62

Device          Start      End  Sectors  Size Type
/dev/mmcblk0p1  32768    65535    32768   16M Microsoft basic data
/dev/mmcblk0p2  65536   679935   614400  300M EFI System
/dev/mmcblk0p3 679936 15269854 14589919    7G EFI System

Let's have a look at what options U-Boot offers us. For this, we need to enter U-Boot. It can be interrupted during the first second, when it lists the available system images; just pressing Ctrl-C returns to the U-Boot prompt:

=> printenv
... boot_targets=nvme0 mmc1 mmc0 mtd2 mtd1 mtd0 usb0 pxe dhcp
...
distro_bootcmd= ... for target in ${boot_targets}; do run bootcmd_${target}; done

OK, so the boot loader will check mmc1 (micro-SD) before mmc0 (eMMC). Let's just copy the eMMC to an 8 GB micro-SD and try again. The boot loader will still be loaded from the eMMC (first 16 MB), but the kernel will be loaded from the SD. We'd like the OS to be loaded from the SD as well, so we'll need to change the filesystem UUID on the SD and adjust it in the extlinux.conf file so that only the SD is used. Let's reboot into the installer, insert a micro-SD of at least 8 GB, and prepare it this way:

roobi login: ps
Password:
ps@roobi:~$ sudo -s

# eval $(blkid -o export /dev/mmcblk0p3)
# echo $UUID
b055efba-0f72-448b-927c-07f40f2714c8
# NEW_UUID=$(cat /proc/sys/kernel/random/uuid)
# echo $NEW_UUID
5ee2e87c-0db4-4235-a5ec-aabe607d9c48

# dd if=/dev/mmcblk0 of=/dev/mmcblk1 bs=1M status=progress
# cat /proc/partitions
# e2fsck -f /dev/mmcblk1p3
# tune2fs -U $NEW_UUID /dev/mmcblk1p3
# mount /dev/mmcblk1p3 /mnt/
# sed -i -e "s/$UUID/$NEW_UUID/g" /mnt/boot/extlinux/extlinux.conf
# umount /mnt/
# reboot

Now the system reboots from the micro-SD. Logging into the system shows that it has only mounted the SD (mmcblk1) and not the eMMC (mmcblk0).

Installation to eMMC

Using the browser again to connect to the installer shows that now it properly lists /dev/mmcblk0 as an available installation device. Yay!

However when trying to continue, it starts to scare you by suggesting that everything will be wiped:

This made me hesitate for a while, but I figured it wouldn't make sense to wipe the boot loader area on a target device from which the system may later hope to boot (e.g. a micro-SD). So I finally confirmed, and it installed onto it. It took a few minutes, after which it automatically rebooted. Of course, since I had left the SD card in, it booted the installer again, but this at least showed me that it hadn't wiped the boot loader. Removing the SD card and booting again, this time I ended up with the Debian 11 prompt corresponding to the new image:

Debian GNU/Linux 11 rock-5-itx ttyFIQ0


roobi login: rock
Password:
Linux rock-5-itx 5.10.110-33-rockchip #65700d485 SMP Wed Apr 3 04:26:57 UTC 2024 aarch64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
rock@rock-5-itx:~$

The contents of /etc/passwd show that radxa/radxa is also valid. It's nice to see that the distro is not too fat and that there's plenty of room left to install whatever one wants to install on their NAS:

$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 3929848 0 3929848 0% /dev
tmpfs 813580 1700 811880 1% /run
/dev/mmcblk0p3 7116452 1396500 5393508 21% /
tmpfs 4067900 0 4067900 0% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
/dev/mmcblk0p1 16112 1 16111 1% /config
tmpfs 813580 4 813576 1% /run/user/1001

Phew... that was a bit more complex than usual (and than needed), but it was worth it in the end. Now we have the target operating system set up and running on the board. One annoying point with such an installer is that it copies a pre-installed distro to your system; you're not offered the option to choose the FS layout, for example. It's as if you had just used "dd" to dump the .img to the target device.

In fact, what this installer should do is let you select an installation image for the distro of your choice; the installer itself should live on a micro-SD that you boot from in order to install onto the target device(s).

Connecting SATA SSDs

For a NAS, one needs storage devices and cabling. I didn't expect to receive the board that fast and thought I didn't have power cables, but after digging through my boxes I managed to find enough cable adapters to connect 4 devices. I did have 4 used 120 GB Intel SSDs. They're not extremely fast but at least they work, so I started to hack with them. The result looks like an ugly octopus :-)




Running a simple test of all disks at once

I intend to use these SSDs in RAID5, as an NFS server doing mostly reads, though writes will be needed as well of course. Let's first see what the raw devices are capable of in terms of read speed. For this I'm just running "vmstat 1" while reading from 1 disk first, then from all 4 disks at once:

# dd if=/dev/sda of=/dev/null bs=1M

$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 7851296 8552 95092 0 0 0 0 110 114 0 0 100 0 0
1 0 0 7625296 233644 95240 0 0 224800 0 972 757 0 5 95 0 0
0 1 0 7301748 556392 95900 0 0 323072 0 1283 1053 0 5 95 1 0
1 0 0 6977948 879464 96804 0 0 323072 0 1289 1073 0 5 95 1 0
0 1 0 6653140 1203560 97540 0 0 324096 0 1301 1080 0 5 94 1 0
0 1 0 6328852 1527144 98204 0 0 323584 0 1289 1084 0 5 95 1 0

# for i in a b c d; do dd if=/dev/sd$i of=/dev/null bs=1M & done

$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 7848444 8548 93428 0 0 0 0 97 116 0 0 100 0 0
0 4 0 6773340 1082724 95060 0 0 1074224 0 3942 3252 0 17 53 30 0
1 3 0 5573384 2279780 97580 0 0 1197056 0 3932 3607 0 8 49 42 0
1 3 0 4365080 3485540 100276 0 0 1205760 0 3962 3657 0 7 50 43 0
2 2 0 3155736 4696504 103184 0 0 1210880 0 4055 3668 0 10 49 41 0
0 4 0 1927256 5922148 105972 0 0 1225728 0 4043 3705 0 8 50 42 0
0 4 0 711880 7135076 108428 0 0 1212928 0 4010 3670 0 9 49 42 0
3 1 0 294572 7548632 110632 0 0 1211904 0 4194 3993 0 11 49 40 0

This test was running on the big cores, which were mostly waiting for the disks (42%) and using little CPU (8-10%). Running on the little cores instead showed almost the same performance:

# for i in a b c d; do taskset -c 0-3 dd if=/dev/sd$i of=/dev/null bs=1M & done

$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 7851584 8548 94188 0 0 0 0 98 102 0 0 100 0 0
2 2 0 7679972 179556 93864 0 0 171076 0 819 530 0 10 89 1 0
1 3 0 6508936 1348452 96700 0 0 1168896 0 3931 3619 0 19 72 8 0
3 1 0 5343568 2511204 98952 0 0 1162752 0 3852 3595 0 17 74 9 0
3 1 0 4177208 3674548 101952 0 0 1163264 0 3837 3617 0 18 73 9 0
4 0 0 2993412 4855344 104268 0 0 1180672 0 3951 3660 0 19 72 9 0
0 4 0 1807820 6038372 107168 0 0 1183232 0 3953 3650 0 18 73 9 0
1 3 0 630820 7212876 109680 0 0 1174016 0 3931 3663 0 19 72 9 0

Here the CPU usage is approximately twice as high and iowait is much smaller, indicating that there's less margin on the CPU. But the performance is almost the same, at 1.17 GB/s vs 1.21 GB/s for the big cores.

One will note that this bandwidth is slightly smaller than the sum of all 4 SSDs (4*323 = 1292 MB/s vs 1210 MB/s measured). Could we be hitting a wall? Note that for 10 GbE it's fine, because this is slightly above the limit of what one can transfer over TCP at 10 Gbps. But it's still interesting to know.

The SATA controller is connected using 2 PCIe 3.0 lanes. Each lane is running at 8 GT/s, encoded using 128/130 coding (128 bits transported over 130 bits). The MaxPayload is at the minimum, 128 bytes:

root@rock-5-itx:~# lspci -nnvv -s 0001:11:00.0 | grep -A2 Ctl:
DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
MaxPayload 128 bytes, MaxReadReq 256 bytes
--
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 8GT/s (ok), Width x2 (ok)

This is a 64-bit system, so the PCIe overhead is 26 bytes. Thus each transfer of 128 bytes requires an extra 26 bytes, that's 154 bytes total, or a transfer efficiency of 83.1%, which becomes 81.8% once the wire encoding is taken into account. Thus the absolute maximum bandwidth through the SATA controller, not taking into account commands and control traffic, is 81.8% of 2*8 GT/s = 13.09 Gbps, or 1.63 GB/s or 1.52 GiB/s. We're not there yet, only at ~75%, but the margin is small. Anyway, as previously said, we're already reaching the speed that a 10 GbE Ethernet adapter could deliver, so there's nothing lost here, and the choice of assigning 2 lanes to the SATA controller was the best one.

Connecting a 10 GbE NIC

For a 10 GbE NAS, one will need a 10 GbE NIC. I already had such a device, which I had bought for testing the ROCK 5B, and knew that it now works pretty well. The NIC fits nicely into the M.2 adapter, and comes with a fixture that lets the RJ45 port occupy a regular slot:

This NIC is built around an AQC107 chip and requires the module called "atlantic", which is enabled via "CONFIG_AQTION" under "CONFIG_NET_VENDOR_AQUANTIA". Unfortunately it's not enabled in the default kernel. I needed to rebuild the kernel, but it's a bit complicated to find the relevant sources. The wiki says that there's a bsp tool used to download, patch and build the kernel. I already have tons of kernel versions here ("git branch | wc -l" reports more than 400), and having to go through such a process when you know you already have most of it, and having to learn yet another tool, is not the most developer-friendly solution IMHO.

That said, at least "bsp" is a clean and readable script; at least it's not a Python horror that requires downloading half of GitHub. But it's still annoying to have to read a script to figure out which directory's config file to read to find the kernel's URL. I found a few candidate URLs, one of them being Joshua Riek's rk-5.10-rkr6. (I later found that it was apparently linux-5.10-gen-rkr3.4 from Radxa's repo that ought to be used.)

I also noticed a 6.1 kernel ("linux-6.1-stan-rkr1"). I tried this one first, at least to gauge the progress of the porting to newer kernels. It properly detected the 10GbE NIC but not the SATA controller: there were PCIe errors, and I wonder if it properly applies the bifurcation needed to see two distinct devices. It was a bit late at night, so I quickly gave up. Instead I tried "rk-5.10-rkr6", which is based on 5.10.160 (vs 5.10.110 for rkr3.4), and everything works there, both the SSDs and the 10GbE NIC!

BTW for those interested in using the repositories above, issuing "make rockchip_defconfig" is all you need to do to configure the base kernel (after setting your ARCH and CROSS_COMPILE as usual, of course).
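As an example, the whole configuration and build sequence roughly looks like this (assuming an aarch64 cross toolchain is installed; the toolchain prefix is just an example):

export ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
make rockchip_defconfig
scripts/config --enable CONFIG_NET_VENDOR_AQUANTIA --enable CONFIG_AQTION
make olddefconfig
make -j$(nproc) Image modules dtbs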

Basic testing of the network speed

I'm used to measuring the bit rate using the if_rate utility, which I adopted many years ago after it appeared unmaintained. It's simple and convenient, and supports a vmstat-like output that eases monitoring and logging during tests:

$ git clone https://github.com/wtarreau/if_rate
$ cd if_rate
$ make
$ ./bin/if_rate -l -i enp1s0 1
# time enp1s0(ikb ipk okb opk)
1716215181 0.0 0.0 0.0 0.0
1716215182 0.0 0.0 0.0 0.0
1716215183 0.0 0.0 0.0 0.0
1716215184 0.0 0.0 0.0 0.0
1716215185 28290.2 53104.4 3698500.4 305417.7
1716215186 89379.8 168813.3 9974752.8 823541.1
1716215187 90188.3 170637.7 9940160.9 820688.8
1716215188 90498.4 171225.5 9974026.0 823474.4
1716215189 89900.4 170266.6 9974559.0 823537.7
1716215190 90303.3 171029.9 10016757.3 827012.2
1716215191 88657.9 167802.2 9941214.6 820773.3
1716215192 90085.0 170447.7 9974735.9 823542.2
1716215193 90063.5 170265.5 10006728.4 826183.3

In parallel, a simple netcat of /dev/zero was being sent over the network to another 10GbE machine. So the NIC is properly able to saturate the wire on output, which is what we needed to check.
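For those who want to reproduce it, the test essentially boils down to the following (exact flags depend on your netcat flavor; the host name and port are just examples):

nc -l -p 5001 > /dev/null          (on the receiving 10GbE machine)
nc receiver 5001 < /dev/zero       (on the ROCK 5 ITX)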

Installing everything in an enclosure

Choice of enclosure

One of my ancient file servers was based on an ATOM D510 and was installed in a really nice APLUS CS-CUPID 2 ITX enclosure, so I decided to remove the board and reuse this perfect enclosure:



Installing the SSDs

The enclosure comes with a frame to carry up to two 3.5" disks, so I needed a way to attach all four 2.5" SSDs to it. Since I had already done something similar in the past, it was easy to replicate. I measured that everything would fit with a 14mm pitch between devices, so I used my laser cutter to make a support:



Fixing the serial port

The serial adapter was attached thanks to a small metal bracket to which I screwed an unused piece of PCB, on top of which the adapter was stuck using double-sided tape (it's not very clean but it works):


Final assembly

The final result looks like this, with the board, the 4 SSDs, the 10GbE NIC and its connector, and the console connector at the bottom. The enclosure's power board is not used for now.


Running a full load test

Now that we have all the components together, let's see what the whole system is capable of. The aim will ultimately be to turn this into a real NAS server that will replace my local server currently running on an Odroid-H3, once it works fine with a mainline kernel. The various measurements made so far indicate that the device should be 10G-capable.

I didn't want to go into setting up NFS etc. So in order to verify the board's ability to pull 10G from the disks to the network, I simply installed NGINX, created a RAID5 array on the disks, and created a set of 32 files of 1GB each, so that the total working set cannot fit into memory (8GB).
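Roughly, the preparation looks like this (device names, mount point and file names are just examples, not necessarily the exact commands used):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]
mkfs.ext4 /dev/md0
mount /dev/md0 /var/www/html
for i in $(seq 1 32); do dd if=/dev/urandom of=/var/www/html/file$i bs=1M count=1024; done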

On the other machine (a Core2 Quad with a similar NIC), I started 32 instances of h1load, each requesting its own file in a loop over a single connection (to avoid any risk of connection reuse). In parallel I collected the CPU usage, the network bit rate and the I/O rate (which I converted to bits per second for the scale, by multiplying by 8). This gives the following (I stopped the test after around 4 minutes):


The fact that the disk I/O is always a bit lower than 10G is in part because there's Ethernet, IP and TCP overhead (94.1% efficiency), and in part because, depending on the load order, it may occasionally happen that some pieces are still in the cache. Regardless, the 10G link was completely full, and the CPU was at less than 50%, which indicates quite some headroom. More modern SSDs than these old ones would also probably do a better job at keeping the link full.

Power measurements

Note that the following measurements were already reported on the Radxa forum here. I finally replaced the SSDs with slightly more recent ones (2x Intel X25-M 160 GB and 2x Intel 530 180 GB), and conducted power measurements using various methods:

  • feeding 12V into the motherboard's jack
  • feeding 12V into the enclosure's power board (I only later noticed that it's supposed to be fed 19V, it should be tested again)
  • feeding 12V into an aliexpress 12V jack-to-ATX "160W" adapter:


The power was measured using an ammeter in series with the 12V connector, and a voltmeter in parallel, as close as possible to the connector:


Here are the measurements:

  • 12V via the aliexpress ATX 160W adapter: 1.06A * 11.83V = 12.54W
  • 12V via the enclosure’s adapter board: 1.16A * 11.95V = 13.86W
  • 12V via the motherboard’s jack, ATX adapter still connected: 1.15A * 11.95V = 13.74W
  • 12V via the motherboard’s jack, ATX unplugged: 1.02A * 11.96V = 12.20W

Thus the board’s power design looks extremely efficient, beating the other two. There’s 1.7W saved here by powering the board via its own jack instead of the enclosure’s adapter.

In addition I measured the individual power draw of various components, all powered from the motherboard’s jack since it's the most efficient:

  • removed all SSDs: 0.81A * 12.03V = 9.74W => 2.45W drawn by the 4 SSDs in idle
  • no SSD and 10GbE link down: 0.59A * 12.14V = 7.16W => the 10GbE RJ45 link draws 2.58W alone
  • no SSD, 10GbE adapter removed: 0.40A * 12.21V = 4.88W => the 10GbE adapter draws 2.28W with the link down, and 4.86W with the link up

Testing mainline kernel

According to Collabora's page, the SoC is apparently now quite well supported. I'm interested in seeing how mainline behaves on this board because, in my opinion, using a BSP kernel is a showstopper for storing data. So I gave kernel 6.9 a quick try, but found no working DTB for now: the kernel is loaded and not a single message is emitted. Usually this indicates a console mapped to a wrong address or a missing node for the UART. I tried to reuse a ROCK 5B DTS to see if it made any difference, but no, everything seems to hang. I'll have to investigate later.

PCIe limitations

The organization of the PCIe lanes around the SoC on this board is really great, I should even say optimal: 2 Gen3 lanes are used for SATA, two for M.2, and each 2.5GbE port uses one Gen2 lane.

This means that we get the same bandwidth on M.2 and SATA, so as long as the CPU is able to move the bytes, there's enough bandwidth to use the M.2 slot for the 10GbE network adapter.

However I noticed that the MaxPayload is set to 128 bytes on all devices while they are all capable of 256:

root@rock-5-itx:~# lspci -vv | grep MaxPayload | cut -f1 -d,
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 512 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes
DevCap: MaxPayload 256 bytes
MaxPayload 128 bytes

All the "DevCap" lines indicate  what the device is capable of. The other ones indicate what was negotiated. 128 bytes cause an efficiency of 81.8% (128/(128+26)*128/130), while 256 bytes would reach 89.4% (256/(256+26)*128/130) and achieve a 9.2% performance increase. It might be worth finding what is causing this limitation.

Closing words

That's all for now. This board is amazing from a hardware perspective. First, it looks extremely clean and well designed. Second, its I/O distribution makes optimal use of the SoC's capabilities. The SoC doesn't heat up that much, and I managed to leave the fan disconnected during operation, which will be its target state anyway. The onboard DC-DC converters show a much higher efficiency than the two other options I tested, which also indicates a great choice of components.

I missed a reset button on the board, and a USB console connector on the back (there's not much room for this at the back, but maybe some combo connectors now exist with an extra USB-C port that could appear above or below the RJ45 connectors for example, or maybe atop the existing USB-C one). If/when the board adopts a UEFI installer (the SPI NOR still remains empty), then the console will no longer be needed.

One point that I really disliked is the annoying Roobi installer, which made everything more complicated than usual. Furthermore, the fact that it confiscates the only storage available for installing an operating system seriously needs to be revisited. This is a totally bogus choice. I'm definitely not going to install the OS on a micro-SD; that's the place for an installer. And I'm not going to put the OS on a data disk either: having had to deal with that painful experience in the past, which makes it super complicated to exchange data disks when trying to recover data or just for a migration, I seriously don't want to do that again.

Thomas Kaiser showed me that Radxa recently merged a patch in their kernel to disable the eMMC and reserve it only for the installer. Not only does this make no sense, it also voids the interest of the product for me if my only option is to remove one data disk and cut the I/O performance and storage capacity by 1/3. And by the way, the 16MB SPI NOR is still unused, and 16MB is plenty to store an installer; I personally stuff full-featured OSes into that much on other machines!

Just like with the ROCK 5B, I'll keep this device reserved for testing for as long as it doesn't have an LTS mainline kernel available (likely by the end of the year). I've ordered new SSDs to run better tests, and I'll have to run many more of them and also test other distros (notably Slackware ARM64). I'll also evaluate how it deals with HTTP/HTTPS load balancing now that it has a good NIC ;-)