I love that OrangePi is making good hardware, but after my experience with the OrangePi 5 Max, I won’t be buying hardware from them again. The device is largely useless due to a lack of software support. This also happened with the MangoPi MQ-Pro. I’ll just stick with RPi. I may not get as much hardware for the money, but the software support is fantastic.
> The device is largely useless due to a lack of software support.
I think everyone considering an SBC should be warned that none of these are going to be supported by upstream in the way a cheap Intel or AMD desktop will be.
Even the Raspberry Pi 5, one of the most well supported of the SBCs, is still getting trickles of mainline support.
The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
I can run my N100 NUC at 4 W wall-socket power draw at idle. If I keep turbo boost off, it stays close to that under normal load, topping out at 6 W at full load; it is also terribly slow that way. With turbo boost enabled, power draw can go to 8-10 W at full load.
Not sure how this compares to the OrangePi in terms of performance per watt, but it is already pretty far into the area of marginal gains for me, given the cost of having to deal with ARM, custom housings, adapters to keep the wall-socket draw efficient, etc. Having an efficient pico PSU power a Pi or Orange Pi is also not cheap.
Not the poster you're replying to, but I run an Acer laptop with an N305 CPU as a Plex server. Idle power draw with the lid closed is 4-5W and I keep the battery capped at 80% charge.
I have an even cheesier competitor, which randomly has a dragon on the lid (it would be a terrible choice for all but the wimpiest casual gaming... but it makes a good Home Assistant HAOS server!)
I was planning to build a NAS from an OPi 5 to minimise power consumption, but ended up going for a Zen 3 Ryzen CPU and have zero regrets. The savings are minuscule and would not justify the costs.
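For anyone weighing that trade-off, the arithmetic is simple to sketch. The wattages and electricity price below are illustrative assumptions, not measurements of any specific board or CPU:

```python
# Back-of-the-envelope: annual cost difference between two idle power draws.
# All numbers here are assumed for illustration, not measured.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(watts: float) -> float:
    """Convert a constant power draw in watts to kWh per year."""
    return watts * HOURS_PER_YEAR / 1000

def annual_cost(watts: float, price_per_kwh: float) -> float:
    return annual_kwh(watts) * price_per_kwh

sbc_idle_w = 5      # assumed SBC idle draw
ryzen_idle_w = 25   # assumed desktop idle draw
price = 0.30        # assumed electricity price, $/kWh

savings = annual_cost(ryzen_idle_w, price) - annual_cost(sbc_idle_w, price)
print(f"Idle-power savings: ~${savings:.0f}/year")  # roughly $53/year
```

With these assumptions the gap is on the order of $50 a year, which is why the savings rarely pay back the extra cost and hassle of the SBC build.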
Using vendor kernels is standard in embedded development. Upstreaming takes a long time so even among well-supported boards you either have to wait many years for everything to get upstreamed or find a board where the upstreamed kernel supports enough peripherals that you're not missing anything you need.
I think it's a good thing that people are realizing that these SBCs are better used as development tools for people who understand embedded dev instead of as general purpose PCs. For years now you can find comments under every Raspberry Pi or other SBC thread informing everyone that a mini PC is a better idea for general purpose compute unless you really need something an SBC offers, like specific interfaces or low power.
I have always found it perplexing. Why is that required?
Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?
There also seems to be a plan to add UEFI support to u-boot[1]. Many of these kinds of boards have u-boot implementations, so they could then boot a UEFI kernel.
However, many of these ARM chips have their own sub-architecture in the Linux source tree, and I'm not sure it's possible today to build a single image with them all built in and choose the sub-architecture at runtime. Theoretically it could be done, of course, but who has the incentive to do that work?
(I seem to remember Linus complaining about this situation to the Arm maintainer, maybe 10-20 years ago)
> At some point SBCs that require a custom linux image will become unacceptable, right?
The flash images contain information used by the firmware to configure and bring up the device; it's more than just a filesystem. Just because it's not the standard consoomer "BIOS menu" you're used to doesn't mean it's wrong. It's just different.
These boards are based on solutions not generally made available to the public. As a result, they require a small amount of technical knowledge beyond what operating a consumer PC might require.
So, packaging a standard Arm Linux install into a "custom" image is perfectly fine, to be honest.
This seems like overkill for most of my workloads that require an SBC.
I would choose a Jetson for anything computationally intensive, as the Orange Pi 6 Plus's NPU is not even utilized due to lack of software support.
For other workloads, this one seems a bit too large in terms of form factor and power consumption, and an older RK3588 should still be sufficient.
Looks like the SoC (CIX P1) has Cortex-A720/A520 cores which are Armv9.2, nice.
I've still been on the hunt for a cheap Arm board with an Armv8.3+ or Armv9.0+ SoC for OSDev stuff, but it's hard to find them in the hobbyist price range (this board included, $700-900 USD from what I see).
The NVIDIA Jetson Orin Nanos looked good but unfortunately SWD/JTAG is disabled unless you pay for the $2k model...
Unfortunately it's only available at the moment for extremely high prices. I'd like to pick some up to create a Ceph cluster (one 18 TB HDD OSD per node in an 8-node cluster, using 4+2 erasure coding).
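As a sketch of what that layout buys you, here is the textbook capacity math for 4+2 erasure coding across 8 nodes, ignoring Ceph's own metadata overhead and fill-ratio limits:

```python
# Rough capacity math for the proposed cluster: 8 nodes, one 18 TB HDD OSD
# each, with 4+2 erasure coding (k=4 data chunks, m=2 coding chunks).
# Overhead factors are the textbook EC ratios only.

def usable_capacity_tb(nodes: int, tb_per_node: float, k: int, m: int) -> float:
    """Raw capacity scaled by the erasure-coding efficiency k / (k + m)."""
    return nodes * tb_per_node * k / (k + m)

raw = 8 * 18                                  # 144 TB raw
usable = usable_capacity_tb(8, 18, k=4, m=2)  # 96 TB usable
tolerated = 2                                 # m chunks can be lost per placement group

print(f"raw={raw} TB, usable={usable:.0f} TB, survives {tolerated} OSD losses")
```

So the cluster would net roughly two thirds of its raw capacity while surviving any two simultaneous node failures, which is the usual appeal of 4+2 over 3x replication.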
Disappointing on the NPU. I have found it's a point where industry wide improvement is necessary. People talk tokens/sec, model sizes, what formats are supported... But I rarely see an objective accuracy comparison. I repeatedly see that AI models are resilient to errors and reduced precision which is what allows the 1 bit quantization and whatnot.
But at a certain point I guess it just breaks? And they need an objective "I gave these tokens, I got out those tokens". But I guess that would need an objective gold standard ground truth that's maybe hard to come by.
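One minimal version of that "I gave these tokens, I got out those tokens" comparison is token-level agreement between a full-precision reference decode and the quantized model's decode, both run greedily on the same prompts. A sketch, with made-up token IDs standing in for real model output:

```python
# Minimal sketch of an objective quantization-accuracy check: run the same
# prompts through a full-precision reference and a quantized model (both
# greedy-decoded), then measure token-level agreement. The token IDs below
# are invented; in practice they would come from the two model runs.

def token_agreement(reference: list[int], candidate: list[int]) -> float:
    """Fraction of positions where the two greedy decodes emit the same token."""
    n = max(len(reference), len(candidate))
    if n == 0:
        return 1.0
    matches = sum(r == c for r, c in zip(reference, candidate))
    return matches / n

ref   = [101, 2009, 2003, 1037, 3231, 102]  # hypothetical fp16 output
quant = [101, 2009, 2003, 1037, 3899, 102]  # hypothetical int4 output

print(f"agreement: {token_agreement(ref, quant):.2%}")  # 5 of 6 tokens match
```

A single divergent token early in a decode can cascade, so in practice this is usually measured per-step with teacher forcing, but even the naive version above gives a number vendors could report.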
I was also onboard until he got to the NPU downsides. I don't care about use for an LLM, but I would like to see the ability to run smallish ONNX models generated from a classical ML workflow. Not only is a GPU overkill for the tasks I'm considering, but I'm also concerned that unattended GPUs out on the edge will be repurposed for something else (video games, crypto mining, or just straight up ganked)
The even more confounding factor is that there are specific builds provided by every vendor of these CIX P1 systems: Radxa, Orange Pi, Minisforum, now MetaComputing... it is painful to try to sort out, even as someone who knows where to look.
I couldn't imagine recommending any of these boards to people who aren't already SBC tinkerers.
Just try to find consistent benchmark parameters (top_k, temp, etc.) for llama.cpp. There's no consistent framing of any of these things. Temp should be effectively 0 so decoding is at least deterministic given the model's probabilities.
By default CUDA isn't deterministic because of thread scheduling.
The main difference comes from the order of operations in reductions, which changes the floating-point rounding.
It does make a small difference. Unless you have an unstable floating point algorithm, but if you have an unstable floating point algorithm on a GPU at low precision you were doomed from the start.
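The non-associativity being discussed is easy to demonstrate: the same three values summed in two different orders can round differently. A minimal float32 example (the specific values are chosen to make the rounding visible, not taken from any real workload):

```python
# Floating-point addition is not associative: changing the order in which a
# reduction combines values changes rounding, which is why parallel GPU sums
# with varying thread scheduling can produce slightly different results.

import numpy as np

a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(1.0)

left  = (a + b) + c   # (1e8 - 1e8) + 1 = 1.0
right = a + (b + c)   # -1e8 + 1 rounds back to -1e8, so the sum is 0.0

print(left, right)  # 1.0 0.0
```

At float32 precision the spacing between representable values near 1e8 is 8, so adding 1 to -1e8 is simply lost; a GPU reduction that happens to pair the operands that way produces a different bit pattern than one that doesn't.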
Right. There are countless parameters, seeds, and whatnot to tweak. But theoretically, if all the inputs are the same, the outputs should be within epsilon of a known-good run. I wouldn't even mandate that temperature or any other parameter be a specific value, just that it's the same. That way you can make sure even the pseudorandom processes are the same, so long as nothing pulls from a hardware RNG or something like that, which seems reasonable to support; call it an "insecure RNG" mode.
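That "same inputs, within epsilon of a known good" check could look something like the sketch below, where a seeded software RNG and a stand-in sampling function (both made up here, not any real inference API) replace an actual model run:

```python
# Sketch of a reproducibility check: same prompt, same parameters, same seed
# should give outputs within epsilon of a stored known-good run. fake_sample
# is a made-up stand-in for one decoding pass, not a real LLM call.

import random

def fake_sample(prompt: str, temperature: float, seed: int) -> list[float]:
    """Stand-in for one decoding run; a real check would feed prompt and
    temperature to the model and pin every other sampler parameter too."""
    rng = random.Random(seed)  # seeded software RNG, no hardware entropy
    return [rng.uniform(0.0, 1.0) for _ in range(8)]

def within_epsilon(a: list[float], b: list[float], eps: float = 1e-9) -> bool:
    return len(a) == len(b) and all(abs(x - y) <= eps for x, y in zip(a, b))

golden = fake_sample("hello", temperature=0.7, seed=42)  # known-good reference
rerun  = fake_sample("hello", temperature=0.7, seed=42)  # same inputs

print(within_epsilon(golden, rerun))  # True: identical params reproduce the run
```

The point of the epsilon rather than exact equality is the reduction-order issue discussed upthread: bit-identical outputs across hardware are a stronger guarantee than most stacks can make.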
> The trend of buying SBCs for general purpose compute is declining, thankfully, as more people come to realize that these are not the best options for general purpose computing.
Were people actually doing that?
Sometimes easier to acquire, but usually the same price or more expensive.
1: https://www.ecs.com.tw/en/Product/Mini-PC/LIVA_Q2/
It has major overheating issues, though; the N100 was never meant to be put on such a tiny PCB.
https://www.armbian.com/boards?vendor=xunlong
Right?
> Is it the lack of drivers in upstream? Is it something to do with how ARM devices seemingly can't install Linux the same way x86 machines can (something something device tree)?
https://github.com/tianocore/edk2-platforms/tree/master/Plat...
https://github.com/edk2-porting/edk2-rk3588
[1] https://docs.u-boot.org/en/v2021.04/uefi/uefi.html
Proprietary and closed? One can hope.
Can also plug in a power bank. https://us.ugreen.com/collections/power-bank?sort_by=price-d...
The advantage is that if the machine breaks or is upgraded, the dock and power bank can be retained. It would also spread out the cost.
Ideally, the dock and power bank can be kept away from the machine to reduce heat and avoid needing a fan in the housing.
Better hardware should end up leading to better software, which is its main problem right now.
This 10-in-1 dock even has an SSD enclosure for $80 https://us.ugreen.com/products/ugreen-10-in-1-usb-c-hub-ssd (no affiliation) (no drivers required)
I'd have another dock/power/screen combo for traveling and portable use.
Is this a thing? I read an article about how due to some implementation detail of GPUs, you don't actually get deterministic outputs even with temp 0.
But I don't understand that, and haven't experimented with it myself.