A few months ago, we wrote that SolidRun was working on the ClearFog ITX workstation with an NXP LX2160A 16-core Arm Cortex-A72 processor, support for up to 64GB RAM, and a mini-ITX motherboard that would make it an ideal Arm developer platform.
Since then, the company has split the project into two parts: the ClearFog CX LX2K mini-ITX board will focus on networking applications, while the HoneyComb LX2K has had some of the networking stripped out to keep the cost in check for developers planning to use the mini-ITX board as an Arm workstation. Both boards use the exact same LX2160A COM Express module.
HoneyComb LX2K specifications:
- COM Module – CEx7 LX2160A COM Express module with NXP LX2160A 16-core Arm Cortex-A72 processor @ 2.2 GHz (2.0 GHz for the pre-production developer board)
- System Memory – Up to 64GB dual-channel DDR4 memory at up to 3200 MT/s via SO-DIMM sockets on the COM module (pre-production board limited to 2900 MT/s)
- Storage
  - M.2 2240/2280/22110 SSD support
  - MicroSD slot
  - 64GB eMMC flash
  - 4x SATA 3.0 ports
- Networking
  - 1x QSFP28 100Gbps cage (100Gbps / 4x 25Gbps / 4x 10Gbps)
  - 4x SFP+ ports (10 Gbps each)
  - 1x Gigabit Ethernet copper (RJ45)
  - M.2 2230 socket with SIM card slot
- USB – 3x USB 3.0, 3x USB 2.0
- Expansion – 1x PCIe x8 Gen 4.0 socket (Note: pre-production board will be limited to PCIe gen 3.0)
- Debugging – MicroUSB for debugging (UART over USB)
- Misc – USB to STM32 for remote management
- Power Supply – ATX standard
- Dimensions – 170 x 170mm (Mini ITX Form Factor) with support for metal enclosure
The pre-production developer board is fitted with pre-production NXP LX2160A silicon, which explains some of the limitations. The metal enclosure won’t be available for the pre-production board, and software features will be limited due to the lack of SBSA compliance, UEFI, and mainline Linux support; only Linux 4.14.x will be supported. You may want to visit the developer resources page for more technical information.
With those details out of the way, you can pre-order the pre-production board right now for $550 without RAM, with shipment expected by August. This is mostly suitable for developers, as the software may not be fully ready. The final HoneyComb LX2K Arm workstation board will become available in November 2019 for $750. If you’d like the network board with 100GbE instead, you can pre-order ClearFog LX2K for $980 without RAM, and delivery is scheduled for September 2019.
If you are interested in benchmarks, Jon Nettleton of SolidRun shared results for Openbenchmarking C-Ray, 7-zip, and sbc-bench among others.
What a fantastic board! It has everything I need, really. I wonder what the TDP is and how it should be cooled?
The TDP is 32 Watts for the SoC. Currently we are cooling it with a heatsink and a 40mm PWM fan. It should be possible to use a fully passive solution if preferred.
I’d like to thank SolidRun for repeatedly catering to a market that’s been traditionally neglected or taken advantage of.
For a long time their MacchiatoBin was the sole workstation-class (ie. proper expandability options) board an arm developer could *buy* (emphasis vs ‘access in the cloud’, or ‘be provided on the job’), and more importantly, that board remained the go-to choice even after other vendors tried but largely missed their sane price brackets.
Here’s to hoping HoneyComb takes arm workstations for the masses to new levels!
>been traditionally neglected or taken advantage of. Which market is this? Maybe the market is “neglected” because it doesn’t exist? Highly proprietary workstations died ages ago. >Here’s to hoping HoneyComb takes arm workstations for the masses to new levels! Sorry to beat a dead horse and all but why does it being ARM even matter? Don’t get me wrong this is a very good price for one of these high end ARM SoCs that are generally only available as part of expensive equipment but .. would it be less attractive if it was the same price, performance etc but was… Read more »
What I’m seeing here is general purpose products versus final products. The vast majority of ARM boards are either a final product (NAS, STB, etc) or a specific purpose board (RPi imitation, focus on gaming/media playing, tablet, smartphone etc). Often they count on their expansion connector to place extra products that will almost never exist. With workstation boards, you get a *real* general purpose product. In general you have the SoC, the glue around, the network, PCIe and SATA connectors, a DIMM socket for the RAM, and just like with a PC, you do whatever you want with it by… Read more »
>What I’m seeing here is general purpose products versus final products. I think you’re talking about proprietary versus generic. >The vast majority of ARM boards I don’t think that problem is at a board level. You can’t really buy an ARM chip that’s not a highly integrated market specific product. >With workstation boards, you get a *real* general purpose product. mm but this seems much more like an old proprietary workstation from Sun if it really is a workstation. Almost everything except the memory is integrated into the SoC and you have one expansion connector. You’ll have a GPU in… Read more »
ARM in the cloud and on the edge is a growing market. In general you need devices that are powerful enough to service these workloads, but also power efficient and able to run in extreme temperature conditions. It is just far easier to develop on the same architecture you want to run your application on.
>It is just far easier to develop on the same architecture you want to run your application on.
I don’t see how that is even remotely true. People are deploying code they developed on generic x86 Dells and MacBooks to millions of devices running all sorts of weird CPUs on a daily basis.
If there weren’t reasonably fast x86 machines out there, there would be no Android or iOS fuelling ARM’s mobile boom, because it would have been impossible to compile either of those OSes for the targets.
Did you ever develop large apps with cross-compilers? Did you ever debug them? That can be a real pain. There’s no denying it’s infinitely easier to do development natively.
>Did you ever develop large apps with cross-compilers? Yes. FYI most if not all builds of the “large app” known as Android that are in the wild on millions of devices will have been built on an x86 machine because of the memory required. I suspect the situation is the same for most Yocto, buildroot etc systems floating around. Pretty much every Android or iOS app you can install will have been built on an x86 machine, debugged in the x86 emulators and cross-debugged between an x86 host and an ARM target. >Did you ever debug them? That can be a… Read more »
Good luck tracking performance on your emulator, especially for iOS when your “emulator” is running an app compiled to native x86. And good luck tracking any MT issue on any of those emulators.
Firmware and boot obviously are a different beast for reasons I won’t list as I don’t want to insult your intelligence.
>Good luck tracking performance on your emulator, >especially for iOS when your “emulator” is running an app compiled to native x86. The profilers for android and ios are actually better with the emulators (FYI Android uses KVM now so it’s the same deal) because the environment is more predictable. Many commercial Android devices have all sorts of crap going on and hacks to the OS that mean something you see on one device might not be true for the majority of devices in the field. There are usually lots of weird crashes and other issues that only ever happen on… Read more »
Many configure scripts probe the host machine instead of the hypothetical target device. That makes it easier.
So the solution to a broken build system is not to fix the build or replace it with something that works well with cross compiling (i.e. meson) but to buy an expensive machine to do the compile on the target itself? This sounds like a good excuse to get a higher up to approve your expense request for a board you want to mess with.
dgp is correct, I also do cross development work all of the time without any issues. Some recommendations… If possible, boot your device off the network, or use a network share to load your apps. That avoids the need to flash over and over again. Learn how to run gdb remotely. If you are working with low level stuff, learn how to use a JTAG with gdb. JTAGs are dirt cheap, you can buy one for $30. JTAG can be more effective than working directly on the hardware since it minimally disturbs the system. For Android adb over USB is… Read more »
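For readers unfamiliar with the remote gdb workflow mentioned above, here is a minimal sketch. The host/target names, port number and toolchain prefix are only illustrative; it assumes an aarch64-linux-gnu cross toolchain plus gdb-multiarch on the x86 host and gdbserver available on the ARM board:

```c
/*
 * Tiny target program used to illustrate a cross-development workflow
 * (hypothetical host/target names; aarch64-linux-gnu toolchain assumed):
 *
 *   host$   aarch64-linux-gnu-gcc -g -O0 hello.c -o hello
 *   host$   scp hello user@arm-target:/tmp/
 *   target$ gdbserver :2345 /tmp/hello
 *   host$   gdb-multiarch hello -ex 'target remote arm-target:2345'
 *
 * The binary is built and debugged from the x86 host; only gdbserver
 * runs on the ARM target.
 */
#include <stdio.h>

int main(void)
{
    /* A trivial loop gives something to set breakpoints on remotely. */
    for (int i = 0; i < 3; i++)
        printf("hello from the target, iteration %d\n", i);
    return 0;
}
```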
@Jon I wouldn’t say the process is painless. For microcontroller stuff just working out how to get OpenOCD to stop the target running properly can be a major pain point.. For stuff running on Linux though I usually forget what the actual machine is either way and it’s much more convenient to do the development work on my desktop machine with tons of RAM, a decent GPU, youtube on demand etc and push it onto the target when I want to test it than trying to keep multiple machines running with all of the tools and other junk. With Android… Read more »
At work our ALOHA load balancers are entirely cross-compiled as well and it’s not a pain at all. It provides a nice set of benefits such as never depending on the build environment at all (which is why we switched to cross-compile even for x86->x86). The time lost dealing with cross-compilation issues is largely offset by the time saved not having to debug artefacts caused by stuff that is not part of your product! Regarding emulators, I don’t like them either, mainly for the same reasons: when you spend one week figuring out that the bug you’re looking at can… Read more »
> dgp is correct, I also do cross development work all of the time without any issues. Some recommendations….. I do what is effectively cross-development during most of my workday (one often does cross-platform work in a huge multi-platform codebase, as one cannot code all code across all platforms natively — that’s not physically possible, but one often debugs and profiles on any of the multiple targets), but dgp is very, very far from correct, and it has nothing to do with if one does cross development or not, or how they ‘feel about it’, etc. dgp fails to acknowledge… Read more »
> Which market is this? Maybe the market is “neglected” because it doesn’t exist? Highly proprietary workstations died ages ago. Who said anything about ‘highly proprietary workstations’? A dev workstation nowadays is something allowing the unrestricted development of most kinds of software. It takes certain levels of genericity and expandability, as willy mentioned. > Sorry to beat a dead horse and all but why does it being ARM even matter? Because you’re developing a massive piece of sw for ARM, and not for something else. If you were developing that for SPARC then perhaps an ARM workstation might not be… Read more »
>It takes certain levels of genericity and expandability, as willy mentioned. This machine has a single pci-e slot and a semi-proprietary mezzanine connector. Unless you are happy with what it already has on board or have a lot of expansions that are available/usable over USB then this has less expansion than a 90s Amiga. >Because you’re developing a massive piece of sw for ARM, and not for something else. In this day and age why would you develop specifically for ARM unless you are targeting something highly specific? >And that’s better than developing on the target ISA (where viable, as… Read more »
> Why would I profile it? It’s not performance critical. Reliability is more important.
Yes performance doesn’t matter. You’re kidding right?
And as I wrote above, good luck tracking reliability issues with cross-dev, in particular when MT is involved.
Your u-boot example is stupid because it has the same issues for x86.
>Yes performance doesn’t matter. You’re kidding right? Not always. A lot of modern computing is about something that is maybe isn’t the fastest solution on earth but fills some other requirements. For example python is slow and eats tons of memory for basic data structures but almost anyone can pick it up. In this specific case size is actually more important than performance as there is limited space for it. >And as I wrote above, good luck tracking reliability issues with cross-dev, >in particular when MT is involved. Thousands, millions, of developers around the world are doing cross dev. Most… Read more »
> This machine has a single pci-e slot and a semi-proprietary mezzanine connector. Unless you are happy with what it already has on board or have a lot of expansions that are available/usable over USB then this has less expansion than a 90s Amiga. Let’s see: * workstation levels of compute — check * workstation levels of ram — check. * workstation levels of network connectivity — check * a PCIe slot (yes, I could use another one for, you’d never guess, a second GPU, but one GPU of choice is still ok) — check Perfectly happy. >In this day… Read more »
>Why would it need to be specifically for ARM? Notice the absence >of ‘specifically’ from my original statement You made the point about this being ARM and that being great and I asked why it has to be ARM and not say ARC, RISC-V,.. >Because you’re developing a massive piece of sw for ARM, and not for something else. That sounds like a real edge case and I suspect is why you think this target demographic is so badly treated or unsupported. (psst: because the target demographic is 5 people working on apps and guys working on networking equipment) >psst: amd64… Read more »
> That sounds like a real edge case and I suspect is why you think this target demographic is so badly treated or unsupported. (psst: because the target demographic is 5 people working on apps and guys working on networking equipment) Clearly SolidRun selling 5 macchiatoBins must have made them release a larger, more serious product. (psst: I didn’t say the market was large, I said it was neglected by prospective vendors, or mistreated by margin chasers. You brought up dells and macbooks, for some obscure reason). > Exactly my point. The point is you needed some other machine to… Read more »
>How about those cases where it’s *perfectly possible* and *beneficial* What are they? Which situations does *your editor of choice* run better on ARM than anything else and in which situations does your toolchain produce different outputs depending on the machine it runs on for a specific target? >No, oh, well, an non-garden-variety (ARM) workstation for me then. There we go. This machine is good for *you* and a tiny amount of other people for mostly religious reasons hence almost no one makes them and if they do they are prohibitively expensive. This is something akin to needing a Coldfire based… Read more »
> Which situations does *your editor of choice* run better on ARM than anything else and in which situations does your toolchain produce different outputs depending on the machine it runs on for a specific target? Guys, you’re arguing because one thinks a gray thing is white while the other sees it black! While I prefer cross-compiling and consider it the most reliable way to achieve long-term software maintenance, I found myself many times running gcc on my NanoPis or mcbin when doing some research work or trying to optimize a small piece of code, just because it’s way easier… Read more »
> (psst: I mentioned u-boot as how the hell do you debug something on the target if the target hasn’t even got DRAM initialisation done. I thought you would have been able to work out what was going on there. This is a site about embedded stuff and I thought you would have noticed). You use JTAG and the SRAM on the chip. That’s why the chip has SRAM. The SPL part of u-boot loads into that SRAM. You can do almost anything with JTAG. I once added code to OpenOCD to wiggle the boundary pins on the CPU to… Read more »
>A dev workstation nowadays is something allowing the unrestricted development
>of most kinds of software. It takes certain levels of genericity and expandability,
>as willy mentioned.
And FYI you just described any generic x86 box I can go and buy from the local PC shop.
I would like to clarify. This is using pre-mass-production silicon. This was an agreement that we came to with NXP to get the boards into developers’ hands as early as possible. The only main difference between this SoC and the production one will be the support for 100Gbps networking; this version is limited to 25Gbps max. Also note that it was decided that this revision of the chip will only support PCIe Version 3. There will be a new variant in 2020 that will support PCIe Version 4.
What a great NAS / Application Server we can build with it! 🙂
a lot of DDR4 RAM, a 10/25GbE SFP+ module, and a SAS PCIe x8 controller …. WOW! 🙂
The HoneyComb board will be limited to dual 10GbE SFP+ interfaces. The ClearFog CX LX2K will include the QSFP28 cage that will allow for multiple 25GbE connections, or a single 100GbE.
Yes it could but IMHO… you can also go TODAY with cheaper and more practical solutions with AMD V1000 series based mini-ITX boards, the V1605B runs 12 to 25W after all. Google these: IBase MI988, ASRock Industrial IMB-V1000, Advantech DPX-E140, DFI GH171, Quanmax MITX-V1K0. First, let’s clarify the “these are industrial boards, end users can’t buy them!”. Mmm… not entirely accurate. Each of these sites usually has a “Get Quote” button. Just click it and use a corporate email. That’s about it. I did. I went with a DFI GH171(*). The NVMe is x4 wide. The PCIe slot is x16 electrical.… Read more »
Out of curiosity: what’s the use case for the InfiniBand setup you propose?
As for your RockPro64 not working with the Mellanox card maybe doing a web search for ‘rk3399 pci aperture bar’ might help.
1) File servers (Samba, NFS)
2) Distributed file servers (GlusterFS, Lustre)
3) Distributed DB (Cassandra)
4) Distributed Apps (MPI)
5) Anything networked which can be optimized by adding the RDMA interface.
6) Anything networked which cannot be optimized with RDMA but will profit from IPoIB.
For the RockPro64 TYVM for the suggestion. I’ll look into it. The card does not show in lspci and dmesg shows some cryptic error.
Forgot to mention:
7) Remote VM
8) iSCSI
The use cases you mentioned are more or less ‘network/storage in general’ and stuff I’m rather familiar with. But I wonder whether client access isn’t also an issue? Do all your clients get also equipped with Mellanox cards or do you utilize an Infiniband to Ethernet bridge or something like that?
For this project I don’t have clients. It’s a home project, I’m the only “client”. Answer: yes you can do both. My original need was much faster network to move my VMware images (40GB for example). But then I learned that IB can do many more things (i.e. for distributed apps.)
The major point is the difference between the IP stack and RDMA. With the IP stack each time a request/response is performed the “client” and the “server” both go through well… the IP stack. This means multiple copies of the data in buffers and the execution is performed by the CPU. With RDMA the data is copied directly from the client app memory to the server app memory and this is performed by the processor on the NIC.
This being said, you can use RDMA to carry IP packets over InfiniBand, that’s IPoIB. Conversely you can use Ethernet to carry RDMA packets, that’s RoCE (RDMA over Converged Ethernet). This allows data centers or corporate networks all equipped in Ethernet to use RDMA without having to lay down new cables.
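To make the zero-copy idea above a bit more concrete, here is a minimal, hypothetical C sketch using the libibverbs API from rdma-core (assuming the headers are installed and an RDMA-capable NIC such as a ConnectX is present). It only opens a device and registers a buffer, which is the step that gives the NIC direct access to application memory; a real transfer would additionally create queue pairs, exchange the rkey and buffer address with the peer, and post RDMA write/read work requests:

```c
/* Build with: gcc rdma_reg.c -libverbs (requires rdma-core). */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { perror("ibv_open_device"); return 1; }

    /* A protection domain groups resources (QPs, MRs) that work together. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { perror("ibv_alloc_pd"); return 1; }

    /* Register an application buffer so the NIC can DMA into/out of it.
     * The returned lkey/rkey are what a peer uses for RDMA reads/writes. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    if (!buf) { perror("malloc"); return 1; }

    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("Registered %zu bytes on %s, lkey=0x%x rkey=0x%x\n",
           len, ibv_get_device_name(devs[0]),
           (unsigned)mr->lkey, (unsigned)mr->rkey);

    /* A real application would now create a CQ and QP, exchange the rkey
     * and buffer address with the peer, and post IBV_WR_RDMA_WRITE work
     * requests so the NIC moves the data without kernel involvement. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```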
Hey, very good comment i read with joy. I’ve also been following the EPYC embedded closely and have some good experience with ConnectX-3 with SR-IOV/RDMA/TCP, it’s really fun technology.
How much did you pay for that DFI GH171 board?
Thank you. $479 for the V1605B version (http://www.nextwarehouse.com/item/?2949068). I’m waiting for the other pieces (memory, NVMe). The power connector is an “antique” P4 (4-pin square block for Pentium mobos). Fortunately I have “antique” PSUs. For new ones, you also need a Molex-to-P4 adapter. I should be able to try it over the weekend. So you’re probably more experienced with ConnectX-3, I just started two months ago. Fun technology indeed!… and less expensive than brand new 10GbE hardware 🙂
I got the memory this afternoon. I plugged in the 12v P4 from the ATX PSU and shorted pins 4-5 on the PSU ATX power plug to start it. The Mellanox shows up in lspci and -vv states LnkCap 8GT and 8x. More interestingly the LnkSta is also 8GT and 8x. So I’m in business 🙂 Back to work now.
Strike that, new PSUs have a CPU 8-pin connector that can be separated into 2x 4-pin. Quick IPoIB test using iperf3: from 27.7 to 29.6 Gbps with one core at 100%. So it is a very acceptable result for an embedded solution.
This is just awesome. I hope it’s the first of many comparable boards. I hope it’ll do well.
A workstation without a GPU / graphics card and no video output? This SoC is not for workstations…
You can use a PCIe based graphics card that has OSS support. We are targeting the RX 5XX line for initial verification.
The SoC is a networking SoC so it doesn’t have a GPU. Which in the ARM ecosystem is probably a good thing because you wouldn’t want your workstation stuck on Linux 3.18 until the end of time.
It would have been nice if there was one less SFP connector and in its place another PCIe slot, so you could have a GPU and something else like a beefy FPGA on a PCIe card. That would be a good match for the sort of situations you’d use something like this in.
This is something we are looking at for the production release of the ClearFog CX LX2K. The QSFP28 port could also be assigned as an external PCIe connector. Our idea was it would be easier for a developer to have their workstation under the desk and then an external PCIe cage that can sit on their desktop for easier access, prototyping etc. We are still not sure it will be possible with the current IP but investigating for that release. One of the reasons we split the branding is that HoneyComb will be our workstation lineup so it will give… Read more »
> ClearFog CX LX2K. The QSFP28 port could also be assigned as an external PCIe connector
That’s interesting. You write about ’18 x PCIe Gen 4 (5 controllers)’ on the developer page. So something like x8, x4, x4, x1, x1 is a possible setup? What’s the maximum width?
BTW: On https://developer.solid-run.com/products/cex7-lx2160a/ you mention ‘Up to 16GB DDR4’ on the overview tab but ‘Up to 64GB DDR4’ under specifications. I guess one is wrong?
Clarification. NXP has redefined the product line and only PCIe Gen 3 will ever be supported. As for the PCIe options, that is a bit more complex. NXP currently only supports specific configurations of PCIe, SERDES and USB, so you need to pick and choose what you want to expose. This is something we are working with them on to hopefully be more flexible, especially because we have the SoC on a COM, so we want the carriers to be as flexible as possible. The maximum width is x8. Yes the 16GB is incorrect. It is 64GB and we have… Read more »
and fixed the incorrect memory specs. thanks for the heads up.
Why is tkaiser not giving his ordeal?
finally what every one here has been waiting for
def my next board
Can SBSA/UEFI and mainline Linux support be added to the pre-production board later, or is it going to be stuck with Linux 4.14 branch forever?
I have asked marketing and sales to update that copy. Those versions are target shipping software, which is actually bumped on the developer board as NXP has just released a 4.19 based BSP. These boards will be fully supported by all mainline support that progresses.
This was my main concern, but if support can be added later sign me up!
Does anyone know if this board supports virtualization?