Most ARM mini PCs run Android, while mini PCs based on the Intel Atom Z3735F currently all ship with Windows 8.1, which makes comparison difficult. But since Linuxium posted triple boot instructions (Ubuntu, Android, Windows 10) for the MeegoPad T01 and also ran Antutu 5.6 on the platform, we now have a comparison point. The Android image used on the Intel platform is Android-x86, which may not have been optimized for Bay Trail yet, so even though the comparison may not be perfect, it is still interesting to find out the strengths and weaknesses of the Intel processor against one of the fastest ARM processors found in mini PCs: the Rockchip RK3288.
I’ll use the Antutu 5.3 score I got with Open Hour Chameleon as a reference point.
|  | Rockchip RK3288 | Intel Atom Z3735F | Delta (Z3735F vs RK3288) |
|---|---|---|---|
| CPU | Quad core Cortex A17 @ 2.0 GHz | Quad core @ 1.33 GHz (Burst: 1.83 GHz) | |
| GPU | ARM Mali-T764 | Intel HD Graphics (Gen 7) | |
| **Antutu 5.x** | | | |
| Overall | 36525 | 29851 | -18.27% |
| Multitask | 5906 | 3947 | -33.17% |
| Runtime | 2039 | 2064 | +1.23% |
| RAM Ops | 2487 | 2158 | -13.23% |
| RAM Speed | 2985 | 3281 | +9.92% |
| CPU Integer (multi-thread) | 2414 | 3035 | +25.72% |
| CPU Floating Point (multi-thread) | 3515 | 2984 | -15.11% |
| CPU Integer (single thread) | 1455 | 1572 | +8.04% |
| CPU Floating Point (single thread) | 1893 | 1696 | -10.41% |
| 2D Graphics (1920×1080) | 1447 | 1346 | -6.98% |
| 3D Graphics (1920×1080) | 11108 | 5904 | -46.85% |
It would have been nice to have data from other benchmarks too, but I could not find any results yet. A negative delta means the Intel SoC is slower than the Rockchip one, a positive delta that it is faster. 3D graphics is where the Intel GPU shows its limitations against the Mali-T764, which according to Antutu is about twice as fast. For some reason, multitasking shows a relatively poor result too. The Z3735F has significantly better integer performance, but its floating point performance is not as good. Overall, the performance difference between the two should not be that noticeable to end users, except possibly in 3D games.
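For reference, the delta column is simply the relative difference between the two scores, with the RK3288 result as the baseline. Here is a minimal C sketch, with a few score pairs from the table above hardcoded as sample data, that reproduces the percentages:

```c
/* Minimal sketch: recompute the "Delta" column of the table above.
 * delta% = (z3735f - rk3288) / rk3288 * 100, so a negative value
 * means the Intel SoC scored lower than the Rockchip one. */
#include <stdio.h>

int main(void) {
    const struct { const char *test; int rk3288, z3735f; } scores[] = {
        { "Overall",                 36525, 29851 },
        { "Multitask",                5906,  3947 },
        { "3D Graphics (1920x1080)", 11108,  5904 },
    };
    for (size_t i = 0; i < sizeof scores / sizeof scores[0]; i++) {
        double delta = 100.0 * (scores[i].z3735f - scores[i].rk3288)
                       / scores[i].rk3288;
        printf("%-25s %+7.2f%%\n", scores[i].test, delta);
    }
    return 0;
}
```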
Hi,
I think it would be even nicer to read if you added a column (ARM / Intel) showing, for example, 36525 / 29851 = 1.22, like http://benchmarksgame.alioth.debian.org/ does.
The ***main*** difference:
– Android-x86 is WELL supported, with very smart developers behind it (read: Intel’s help)
– Android for Chinese ARM SoCs is in general very BADLY supported, with a chaotic development cycle
So, I guess that if the Chinese ARM manufacturers don’t change their approach, sooner or later they will be out of business…
Intel is not sleeping here!
I think you should compare the RK3288 to the Atom Z3560, a quad core @ 1.83 GHz with the powerful PowerVR G6430 GPU, found in the Nexus Player and Asus Fonepad (~40K Antutu).
@JotaMG
You are right, Intel support is very good.
So, PowerVR G64xx vs ARM Mali-T764 🙂
I would really like to know if Intel made any modifications to the PowerVR GPU,
as every Intel marketing document notes it has a built-in Intel GPU 🙂
CPU floating point (single thread): 1893 vs 1696
I always hope that Intel has the best compressor on the market,
as all AMD vs Intel benchmarks have come out on Intel’s side.
A real benchmark (Phoronix Test Suite) would be better.
I think you (the author) are either overlooking or ignoring the true characteristics of the Intel Bay Trail Atom, plus the fact that this, like any other benchmark, is not even close to a real indicator of performance, stability or productivity, plus the fact that any of the popular benchmarks on Android are 1) paid, or 2) cheated by manufacturers, or 3) fixed one way or another.
I.
So no ARM SoC, no matter if Cortex-A7 / A8 / A9, can process on every thread what the new Bay Trail Atoms can.
A single MINDBLOWING example for the new Bay Trail Atom CPU from Intel:
Full HD 1080p VP8 codec at 20% CPU usage, with everything else done by the GPU / hardware acceleration, and it does NOT affect the overall performance of the system. I don’t know of a single ARM Cortex-A SoC that can encode video and stay even 67% responsive. Hah, yeah… perhaps a 2.0 GHz quad core A9/A7?! Which heats up like crazy, while its power consumption goes off the scale…
II.
There is now the ODROID-C1, which I want to get, but in Europe it’s 44 EUR without delivery (not $40 + delivery! A bit overpriced!!). The SoC there is a quad core ARM Cortex-A5 WITH hardware acceleration of the GPU + VPU under Linux. It’s not the best, but it is the optimum right NOW!
It should handle 1080p video encoding – it will not, at least not at 20% CPU usage.
It should handle 4 x 720p video encoding, one on each thread – it will not, at least not at 20% CPU usage.
So… can we get back to the claim that an ARM SoC is better than an Intel Atom on the GPU side?! Plus, Intel’s CPU is way better for industrial apps where dependability matters.
I will not trust anything important to ANY Cortex-A (application processor).
Period
Please don’t wait for an answer here. I don’t see a reason for discussion, but people who get here would otherwise be badly misled about the actual state of Intel’s Atom CPUs versus the currently hyped ARM SoCs.
Both are cool, but they are miles apart. One is a tablet/smartphone thingy, the other is another thing (not a thingy).
@m][sko
I assume that by “compressor” you mean a “processor”, not file compression or similar… In any case, why would you “always hope” for Intel to have the best chips?
Competition would be a positive thing in any market, especially when Intel has pretty much dominated the x86 CPU market since Nehalem (one could argue they have been on top since the Core 2, but it had pretty horrid downfalls against AMD64 (IMC)).
I sure hope ALL processor architectures become competitive/equal in all aspects, otherwise people’s only choice is sponsoring some mammoth monopolistic corporation, and that is not a positive thing to have.
@Vinícius Tinti
It seems people get confused by delta values expressed as percentages; I’ll use the ratio in my next comparison.
@JotaMG
I’m sure the firmware is very good, but a fair amount of Android apps are not optimized for x86 just yet. This will certainly improve over time. You can see a study comparing x86 vs ARM native app support in the second image @ http://www.cnx-software.com/2014/07/30/arm-and-qualcomm-release-a-new-guide-about-32-bit-to-64-bit-socs/. It was made by Qualcomm, however.
@hoangdinh86
The reason I compared Z3735F to RK3288 is that both processors are very popular and found in many devices, while I’ve yet to see a Z3560 mini PC (apart from Nexus Player).
@anon
I mean a mathematical co-processor.
The x264 encoder is always faster on Intel than on AMD. I know it’s mostly down to vector instruction sets, but ARM has NEON 🙂
@m][sko
SSE4 vs NEON should explain why “software encoding” is faster on x86 than on ARM for processors with otherwise similar performance. I don’t think Antutu tests SIMD instructions at all, or at least not directly. But a proper software implementation should use the VPU when available, and then we’ll see extra low CPU usage on ARM platforms too.
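As an aside, here is a minimal C sketch of how a software codec typically picks its SIMD code path at runtime on ARM, assuming Linux on 32-bit ARM with glibc 2.16+ for getauxval(); the printed messages are just placeholders for real dispatch logic:

```c
/* Minimal sketch (Linux/ARM32 assumed): detect NEON at runtime and
 * choose a code path, the way codecs select SIMD vs plain C kernels. */
#include <stdio.h>
#include <sys/auxv.h>   /* getauxval(), AT_HWCAP */
#include <asm/hwcap.h>  /* HWCAP_NEON */

int main(void) {
    unsigned long hwcaps = getauxval(AT_HWCAP);
    if (hwcaps & HWCAP_NEON)
        printf("NEON available: using the SIMD code path\n");
    else
        printf("No NEON: falling back to the plain C code path\n");
    return 0;
}
```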
@JotaMG: Samsung (Korea), Rockchip (mainland China) and Mediatek (Taiwan) all work on open source support for the mainline Linux kernel. Support for the HD3000 GPU, for example, was stopped by Intel devs before the driver was stable, two years after that GPU shipped in the i7-2600K CPU; help to debug it then came from the community around the globe. Please stop the “white cowboy vs the rest of the globe” stance. Most components of a Linux system come from around the globe, with lots of Chinese/Russian/Indian/South American/… devs working on very fundamental bricks in all domains (drivers, daemons, interfaces, client applications, etc.).
@Dimitar Tomov: Intel Atom for industrial purposes is just a joke; most boards are for cheap, personal-use devices that suffer hardware failures after a few months to a year of use in a datacenter. Today most industrial server companies around the globe work on industrial solutions with ARM SoCs inside, due to the difference in efficiency. With an Intel solution you can’t fill 40U racks at an interesting energy price, and that is a fundamental parameter today.
Until recently, ARM’s industrial presence was instead in the embedded market. The Cortex-A5 is a 4+ year old CPU (2010) with a very low power profile, not a performance one; if you judge ARM SoCs by products like this, I suppose you also use an old distribution without any ARM optimizations at all. The open source VPU driver for ARM only started about a year and a half ago, and the closed source one only a little before that. As the second graphic cnxsoft linked shows, ARM-specific devs are a growing share of developers; optimizations for ARM SIMD (NEON) only started about 4 years ago (for ffmpeg, later for other libs), and interesting boards for this only appeared about 2 years ago, whereas Intel SIMD hardware started 15 years ago. The gap has mostly been closed in a really short time, and performance is already better on ARM in several cases. There is still a lot of room for optimization in the ARM ecosystem, and as the market grows, more and more developers are attracted to it.
@m][sko
I personally did pure asm on NEON for a while… After getting angry at the stooopid monkey (me) when things did not operate as I thought, and then after playing around with multiple implementations of NEON, I can say that pretty much everything I encountered/tried sucked monkey’s b*lls, so I’m not touching that stuff again without some major redesign from ARM, and of course then I would need to wait for some chip manufacturer to actually produce chips with it…
Now I’m back to using generic LLVM for everything, not going to do a damn thing with NEON. Even Altivec was simpler/faster and much more effective… That was back in the day with those excellent G4 chips. 🙂
@anon
I was pretty happy with the speed of NEON (2x faster, as I only use it for 2D),
and I use ARM NEON intrinsics, which all work just fine with the latest version of GCC.
The major problem was bad memory alignment: vector instructions expect properly aligned memory. For example, if your pointer isn’t aligned to 64 or 128 bits (depending on the instruction), on ARM the application simply crashes, while on Intel you completely lose the speed benefit, as Intel does some magic that completely slows the whole vector instruction down.
But it is really nice that GCC finally added some macros to properly mark memory allocations with the right alignment.
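To illustrate the alignment point above, here is a minimal C sketch, assuming GCC or Clang targeting 32-bit ARM with NEON (built with something like gcc -O2 -mfpu=neon) and C11 for aligned_alloc(); the buffer names and sizes are made up for the example:

```c
/* Minimal sketch: guaranteeing the 16-byte (128-bit) alignment that
 * NEON loads/stores such as vld1q_f32()/vst1q_f32() expect. */
#include <arm_neon.h>
#include <stdio.h>
#include <stdlib.h>

/* Static buffer aligned via the GCC/Clang attribute */
static float weights[4] __attribute__((aligned(16))) = {1, 2, 3, 4};

int main(void) {
    /* Heap buffer aligned via C11 aligned_alloc(); the size must be
     * a multiple of the alignment */
    float *data = aligned_alloc(16, 8 * sizeof(float));
    if (data == NULL)
        return 1;
    for (int i = 0; i < 8; i++)
        data[i] = (float)i;

    /* Aligned 128-bit loads, a 4-wide float multiply, and a store */
    float32x4_t w = vld1q_f32(weights);
    float32x4_t d = vld1q_f32(data);
    vst1q_f32(data, vmulq_f32(d, w));

    printf("%.1f %.1f %.1f %.1f\n", data[0], data[1], data[2], data[3]);
    free(data);
    return 0;
}
```

Passing a misaligned pointer instead can either fault or just run slower, depending on the core and the instruction, which matches the behavior described above.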
Is there a decent RK3288 vs RK3188 benchmark comparison anywhere? ( http://www.cnx-software.com/2014/01/10/rockchip-rk3288-vs-rk3188-performance-comparison/ is a bit sketchy.)
@m][sko
Yeah, nowadays it might be better; I was doing that in the GCC 4.7/4.8 era, but of course as handwritten asm, so the biggest problem was between the chair and the keyboard. After moving things to LLVM 3.5+ I simply gave up on self-made code and just compile it.
@onebir
I wrote that post when the RK3288 was released, with the numbers published by Rockchip.
I have not done a side-by-side comparison of Antutu scores and other benchmarks.
@toto
But the industrial guys hardly use Rockchip or Mediatek ARM chips, where you have no idea whatsoever how long they will be made. THAT is where Texas Instruments, Renesas and so forth really make their money.
@m][sko
The Z37xx has an Intel GPU, the Z35xx a PowerVR one. Any Intel doc I’ve seen was fairly clear about this.
@cnxsoft
Sure – no criticism intended!
Any idea of the system power draw differences?
gcc’s output always seems to run better on x86 than on ARM (i.e. x86 seems better at running ‘poorish’ compiler-generated code), so these results surprise me. Not sure what that bloke’s problem is with NEON – IMHO it’s great and makes SSE look like something Intel came up with.
Hey there, have you ever done an RK3288 (or RK3368) vs Z8300 comparison? I would enjoy seeing Intel’s evolution.
@Emerson
I have not done a direct comparison between RK3288 and x5-Z8300, but I did compare Z3735F to x5-Z8300 @ http://www.cnx-software.com/2015/08/23/intel-atom-z3735f-vs-atom-x5-z8300-benchmarks-comparison/
The results are not that much different, except for the GPU part.