CPU performance from the new TI OMAP 3640 (yes, they’re wrong again; it’s the 3640 for the 1 GHz SoC, the 3630 is the 720 MHz one) is surprisingly good on Quadrant, the benchmarking tool that Taylor is using. In fact, as you can see from the Shadow benchmarks in the first article, it is shown outperforming the Galaxy S, which initially led me to believe that it was running Android 2.2 (which, as you may know, can easily triple CPU performance). However, I’ve been assured that this is not the case, and the 3rd article seems to indicate as much, given that those benchmarks were obtained using a Droid 2 running 2.1.
Now, the OMAP 3600 series is simply a 45 nm version of the 3400 series we see in the original Droid, upclocked accordingly due to the reduced heat and improved efficiency of the smaller feature size.
If you need convincing, see TI’s own documentation: http://focus.ti.com/pdfs/wtbu/omap3_pb_swpt024b.pdf
So essentially the OMAP 3640 is the same CPU as the one in the original Droid, but clocked up to 1 GHz. Why then is it benchmarking nearly twice as fast clock-for-clock (resulting in a nearly 4x overall improvement), even when still running 2.1? My guess is that the answer lies in memory bandwidth, and that evidence exists within some of the results from the graphics benchmarks.
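For a quick sanity check on that "nearly 4x" figure, here is a rough sketch of the arithmetic. The 550 MHz stock clock for the original Droid and the flat 2x per-clock gain are assumptions for illustration, not figures confirmed anywhere in this thread:

Code:
# Rough sanity check of the "nearly 4x" figure. The 550 MHz stock clock for the
# original Droid and the flat 2x per-clock gain are assumptions for illustration.
droid_clock_mhz = 550
droid2_clock_mhz = 1000

clock_ratio = droid2_clock_mhz / droid_clock_mhz   # ~1.8x from frequency alone
per_clock_ratio = 2.0                              # ~2x per clock, per the benchmarks above

print(f"clock scaling:    {clock_ratio:.1f}x")
print(f"combined scaling: {clock_ratio * per_clock_ratio:.1f}x")  # ~3.6x, i.e. "nearly 4x"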
We can see from the 3rd article that the Droid 2’s GPU performs almost twice as fast as the one in the original Droid. We know that the GPU in both devices is the same model, a PowerVR SGX 530, except that the Droid 2’s SGX 530 is, like the rest of the SoC, on the 45 nm feature size. This means that it can be clocked considerably faster. It would be easy to assume that this is the reason for the doubled performance, but that’s not necessarily the case. The original Droid’s SGX 530 runs at 110 MHz, substantially less than its standard clock speed of 200 MHz. This downclocking is likely due to the memory bandwidth limitations I discussed in my Hummingbird vs Snapdragon article: the original Droid was running LPDDR1 memory at a fairly low bandwidth that didn’t allow the GPU to function at stock speed. If those limitations were removed by adding LPDDR2 memory, the GPU could then be clocked back up (likely to around 200 MHz) to draw even with the new memory bandwidth limit, which is probably just about twice what it was with LPDDR1.
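Here is a minimal sketch of that bandwidth arithmetic, assuming a single-channel 32-bit bus with double data rate, 200 MHz for LPDDR1 and 400 MHz for LPDDR2 (illustrative clocks, not confirmed specs for either phone):

Code:
def peak_bandwidth_gb_s(clock_mhz, bus_bits=32, data_rate=2, channels=1):
    """Peak theoretical bandwidth in GB/s for a DDR-style memory interface."""
    return clock_mhz * 1e6 * data_rate * (bus_bits / 8) * channels / 1e9

print(peak_bandwidth_gb_s(200))   # assumed LPDDR1 @ 200 MHz -> 1.6 GB/s
print(peak_bandwidth_gb_s(400))   # assumed LPDDR2 @ 400 MHz -> 3.2 GB/s, roughly double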
So what does this have to do with CPU performance? Well, it’s possible that the CPU was also being limited by LPDDR1 memory, and that the 65 nm Snapdragons that are also tied down to LPDDR1 memory share the same problem. The faster LPDDR2 memory could allow for much faster performance.
Lastly, since we know from the second article at the top that the Galaxy S performs so well with its GPU, why is it lacking in CPU performance, only barely edging past the 1 GHz Snapdragon?
It could be that the answer lies in the secret that Samsung is using to achieve those ridiculously fast GPU speeds. Even with LPDDR2 memory, I can’t see any way that the GPU could achieve 90 Mt/s; the required memory bandwidth is too high. One possibility is the addition of a dedicated high-speed GPU memory cache, allowing the GPU access to memory tailored to handle its high-bandwidth needs. With this solution to memory bandwidth issues, Samsung may have decided that higher speed memory was unnecessary, and stuck with a slower solution that remains limited in the same manner as the current-gen Snapdragon.
Let’s recap: TI probably dealt with the limitations to its GPU by dropping in higher-speed system RAM, thus boosting overall system bandwidth and nearly doubling GPU and CPU performance together.
Samsung may have dealt with the limitations to its GPU by adding dedicated video memory that boosted GPU performance several times over, while leaving CPU performance unaffected.
This, I think, is the best explanation for what I’ve seen so far. It’s very possible that I’m entirely wrong and something else is at play here, but that’s what I’ve got.
CPU Performance
Before I go into details on the Cortex-A8, Snapdragon, Hummingbird, and Cortex-A9, I should probably briefly explain how some ARM SoC manufacturers take different paths when developing their own products. ARM is the company that designs and licenses the technology behind all of these SoCs. It offers manufacturers a license to an ARM instruction set that a processor can use, and it also offers a license to a specific CPU architecture.
Most manufacturers will purchase the CPU architecture license, design an SoC around it, and modify it to fit their own needs or goals. TI and Samsung are examples of this; the S5PC100 (in the iPhone 3GS), the OMAP3430 (in the Droid), and even the Hummingbird S5PC110 in the Samsung Galaxy S are all SoCs with Cortex-A8 cores that have been tweaked (or “hardened”) for performance gains to be competitive in one way or another. Companies like Qualcomm, however, will build their own custom processor architecture around a license to an instruction set that they’ve chosen to purchase from ARM. This is what the Snapdragon’s Scorpion processor is: a completely custom implementation that shares some similarities with the Cortex-A8 and uses the same ARMv7 instruction set, but breaks away from some of the limitations that the Cortex-A8 may impose.
Qualcomm’s approach is significantly more costly and time-consuming, but it has the potential to create a processor that outperforms the competition. Through its own custom architecture configuration (which Qualcomm understandably does not go into much detail regarding), the Scorpion CPU inside the Snapdragon SoC gains an approximately 5% improvement in instructions per clock cycle over an ARM Cortex-A8. Qualcomm appeals to manufacturers as well by integrating features such as GPS and cell network support into the SoC, reducing the need for a phone manufacturer to add additional hardware to the phone. This allows for a more compact phone design, or room for additional features, which is always an attractive option. Upcoming Snapdragon SoCs such as the QSD8672 will allow for dual-core processors (not supported by the Cortex-A8 architecture) to boost processing power, as well as providing further ability to scale performance appropriately to meet power needs. Qualcomm claims that we’ll see these chips in the latter half of 2010, and rumor has it that we’ll begin seeing them show up first in Windows Mobile 7 Series phones in the fall. Before then, we may see a 45 nm version of the QSD8650 dubbed "QSD8650A" released in the summer, running at 1.3 GHz.
You might think that the Hummingbird doesn’t stand a chance against Qualcomm’s custom-built monster, but Samsung isn’t prepared to throw in the towel. In response to Snapdragon, they hired Intrinsity, a semiconductor company specializing in tweaking processor logic design, to customize the Cortex-A8 in the Hummingbird to perform certain binary functions using significantly fewer instructions than normal. Samsung estimates that 20% of the Hummingbird’s functions are affected, and of those, on average 25-50% fewer instructions are needed to complete each task. Overall, the processor can perform tasks 5-10% more quickly while handling the same 2 instructions per clock cycle as an unmodified ARM Cortex-A8 processor, and Samsung states it outperforms all other processors on the market (a statement seemingly aimed at Qualcomm). Many speculate that the Hummingbird (S5PC110) will be in the iPhone HD, and that its sister chip, the S5PV210, is inside the Apple A4 that powers the iPad. (UPDATE: Indications are that the model # of the SoC in the Apple iPad’s A4 is "S5L8930", a Samsung part # that is very likely closely related to the S5PV210 and Hummingbird. I report and speculate upon this here.)
Lastly, we really should touch upon the Cortex-A9. It is ARM’s next-generation processor architecture and continues to work on top of the tried-and-true ARMv7 instruction set. The Cortex-A9 stresses production on the 45 nm scale, as well as support for multiple processing cores for processing power and efficiency. Changes in core architecture also allow a 25% improvement in instructions that can be handled per clock cycle, meaning a 1 GHz Cortex-A9 will perform considerably quicker than a 1 GHz Cortex-A8 (or even Snapdragon) equivalent. Other architecture improvements, such as support for out-of-order instruction handling (which, it should be pointed out, the Snapdragon partially supports), will allow significant gains in performance per clock cycle by letting the processor prioritize calculations based upon the availability of data. TI has predicted its Cortex-A9 OMAP4440 will hit the market in late 2010 or early 2011, and promises that the OMAP4 series will offer dramatic improvements over any Cortex-A8-based designs available today.
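To make the clock-for-clock comparison concrete, here is a toy calculation using the rough per-clock figures quoted above (Cortex-A8 as the baseline, Scorpion ~+5%, Cortex-A9 ~+25%); treat the outputs as illustrative only, not measured throughput:

Code:
# Relative per-clock throughput, normalized to Cortex-A8 = 1.0.
# The +5% (Scorpion) and +25% (Cortex-A9) figures are the rough numbers from above.
relative_ipc = {"Cortex-A8": 1.00, "Scorpion": 1.05, "Cortex-A9": 1.25}

clock_ghz = 1.0
for core, ipc in relative_ipc.items():
    print(f"{core:<9} @ {clock_ghz:.1f} GHz -> {ipc * clock_ghz:.2f}x A8-equivalent throughput")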
GPU Performance
There are a couple of problems with comparing GPU performance that some recent popular articles have neglected to address. (Yes, that’s you, AndroidAndMe.com, and I won’t even go into a rant about bad data.) The drivers running the GPU, the OS platform it’s running on, memory bandwidth limitations, as well as the software itself, can all play into how well a GPU runs on a device. In short: you could take identical GPUs, place them in different phones, clock them at the same speeds, and see significantly different performance between them.
For example, let’s take a look at the iPhone 3GS. It’s commonly rumored to contain a PowerVR SGX 535, which is capable of processing 28 million triangles per second (Mt/s). There’s a driver file on the phone that contains "SGX535" in the filename, but that shouldn’t be taken as proof of what it actually contains. In fact, GLBenchmark.com shows the iPhone 3GS putting out approximately 7 Mt/s in its graphics benchmarks. This initially led me to believe that the iPhone 3GS actually contained a PowerVR SGX 520 @ 200 MHz (which incidentally can output 7 Mt/s), or alternatively a PowerVR SGX 530 @ 100 MHz, because the SGX 530 has 2 rendering pipelines instead of the 1 in the SGX 520 and tends to perform about twice as well. Now, interestingly enough, Samsung S5PC100 documentation shows the 3D engine as being able to put out 10 Mt/s, which seemed to support my theory that the device does not contain an SGX 535.
However, the GPU model and clock speed aren’t the only limiting factors when it comes to GPU performance. The SGX 535, for example, can only put out its 28 Mt/s when used in conjunction with a device that supports the full 4.2 GB per second of memory bandwidth it needs to operate at this speed. Assume that the iPhone 3GS uses single-channel LPDDR1 memory operating at 200 MHz on a 32-bit bus (which is fairly likely). This allows for 1.6 GB/s of memory bandwidth, which is approximately 38% of what the SGX 535 needs to operate at its peak speed. Interestingly enough, 38% of 28 Mt/s equals just over 10 Mt/s… supporting Samsung’s claim (with real-world performance at 7 Mt/s being quite reasonable). While it still isn’t proof that the iPhone 3GS uses an SGX 535, it does demonstrate just how limiting single-channel memory (particularly slower memory like LPDDR1) can be, and it shows that the GPU in the iPhone 3GS is likely a powerful device that cannot be used to its full potential. The GPU in the Droid likely has the same memory bandwidth issues, and the SGX 530 in the OMAP3430 appears to be down-clocked to stay within those limitations.
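Spelled out as a sketch, and assuming triangle throughput scales roughly linearly with available memory bandwidth (a simplification), the math looks like this:

Code:
# If triangle throughput scales roughly linearly with memory bandwidth
# (a simplifying assumption), the bandwidth-limited ceiling works out to:
peak_mtps = 28.0            # SGX 535 peak (Mt/s), per the figure above
needed_bw_gb_s = 4.2        # bandwidth the SGX 535 needs to hit that peak
available_bw_gb_s = 1.6     # assumed single-channel LPDDR1 @ 200 MHz, 32-bit bus

ceiling_mtps = peak_mtps * (available_bw_gb_s / needed_bw_gb_s)
print(f"{ceiling_mtps:.1f} Mt/s")   # ~10.7 Mt/s, in line with Samsung's 10 Mt/s figure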
But let’s move on to what’s really important: the graphics processing power of the Hummingbird in the Samsung Galaxy S versus the Snapdragon in the EVO 4G. It’s quickly apparent that Samsung is claiming performance approximately 4x greater than the 22 Mt/s the Snapdragon QSD8650 can manage. It’s been rumored that the Hummingbird contains a PowerVR SGX 540, but at 200 MHz the SGX 540 puts out 28 Mt/s, approximately 1/3 of the 90 Mt/s that Samsung is claiming. Either Samsung has decided to clock an SGX 540 at 600 MHz, which seems rather high given reports that the chip is capable of speeds of "400 MHz+", or they’ve chosen to include a multi-core PowerVR SGX XT solution. Essentially this would allow 3 PowerVR cores (or 2 up-clocked ones) to hit the 90 Mt/s mark without having to push the GPU past 400 MHz.
Unfortunately, however, this brings us right back to the memory bandwidth limitation argument, because while the Hummingbird likely uses LPDDR2 memory, it still only appears to have single-channel memory controller support (capping memory bandwidth at 4.2 GB/s), which raises the question of how the PowerVR GPU obtains the large amount of memory bandwidth it needs to draw and texture polygons at those high speeds. If the PowerVR SGX 540 (which, like the SGX 535, performs at 28 Mt/s at 200 MHz) requires 4.2 GB/s of memory bandwidth, drawing 90 Mt/s would require over 12.6 GB/s of memory bandwidth, 3 times what is available. Samsung may be citing purely theoretical numbers, or using another solution such as increasing GPU cache sizes. That would allow for higher peak speeds, but it’s questionable whether it could achieve sustainable 90 Mt/s performance.
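The same assumption run in reverse gives the bandwidth you would need for Samsung's claimed figure; again, this assumes a roughly linear bandwidth/throughput relationship, which is a simplification:

Code:
# Inverse of the earlier calculation: bandwidth needed to sustain a target
# triangle rate, again assuming roughly linear scaling with bandwidth.
target_mtps = 90.0          # Samsung's claimed figure
baseline_mtps = 28.0        # SGX 540 @ 200 MHz
baseline_bw_gb_s = 4.2      # bandwidth needed at that baseline

required_bw_gb_s = baseline_bw_gb_s * (target_mtps / baseline_mtps)
print(f"{required_bw_gb_s:.1f} GB/s needed vs ~4.2 GB/s available")  # ~13.5 GB/s, about 3x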
Qualcomm differentiates itself from most of the competition (once again) by using its own graphics processing solution. The company bought AMD’s Imageon mobile-graphics division in 2008, and used AMD’s Imageon Z430 (now rebranded Adreno 200) to power the graphics in the 65 nm Snapdragons. The 45 nm QSD8650A will include an Adreno 205, which will provide some performance enhancements to 2D graphics processing as well as hardware support for Adobe Flash. It is speculated that the dual-core Snapdragons will utilize the significantly more powerful Imageon Z460 (or Adreno 220), which apparently rivals the graphics processing performance of high-end mobile gaming systems such as the Sony PlayStation Portable. Qualcomm is claiming nearly the same performance (80 Mt/s) as the Samsung Hummingbird in its upcoming 45 nm dual-core QSD8672, and while LPDDR2 support and a dual-channel memory controller are likely, it seems pretty apparent that, like Samsung, Qualcomm must have something else at play to achieve those claims.
While Samsung and Qualcomm tend to stay relatively quiet about how they achieve their graphics performance, TI has come out and specifically stated that its upcoming OMAP4440 SoC supports both LPDDR2 and a dual-channel memory controller, paired with a PowerVR SGX 540 chip, to provide "up to 2x" the performance of its OMAP3 line. This is a reasonable claim assuming the SGX 540 is clocked to 400 MHz and requires a bandwidth of 8.5 GB/s, which can be achieved using LPDDR2 at 533 MHz in conjunction with the dual-channel controller. This comparatively docile graphics performance may be due to TI’s rather straightforward approach to the ARM Cortex-A9 configuration.
Power Efficiency
Moving onward, it’s also easily noticeable that the next-generation chipsets on the 45 nm scale are going to be a significant improvement in terms of performance and power efficiency. The Hummingbird in the Samsung Galaxy S demonstrates this potential, but unfortunately we still lack the power consumption numbers we really need to understand how well it stacks up against the 65 nm Snapdragon in the EVO 4G. It can be safely assumed that the Galaxy S will have overall better battery life than the EVO 4G given the lower power requirements of the 45 nm chip, the more power-efficient Super AMOLED display, and the fact that both phones sport equal-capacity 1500 mAh batteries. However, it should be noted that the upcoming 45 nm dual-core Snapdragon is claimed to be coming with a 30% decrease in power needs, which would allow the 1.5 GHz SoC to run at nearly the same power draw as the current 1 GHz Snapdragon. Cortex-A9 also boasts numerous improvements in efficiency, claiming power consumption numbers nearly half those of the Cortex-A8, as well as the ability to use multiple-core technology to scale processing power in accordance with energy limitations.
While it’s almost universally agreed that power efficiency is a priority for these processors, many criticize the amount of processing power these new chips are bringing to mobile devices, and ask why so much performance is necessary. Whether or not mobile applications actually need this much power is not really the concern, however; improved processing and graphics performance with little to no additional increase in energy needs will allow future phones to actually be much more efficient in terms of power. This is because, ultimately, power efficiency relies in large part on the ability of the hardware in the phone to complete a task quickly and return to an idle state where it consumes very little power. This "burst" processing, while consuming fairly high amounts of power for very short periods of time, tends to be more economical than prolonged, slower processing. So as long as ARM chipset manufacturers can continue to crank up the performance while keeping power requirements low, there’s nothing but gains to be had.
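Here is a toy "race to idle" comparison that illustrates the point; every wattage and duration in it is invented purely for illustration, not a measured value:

Code:
# Toy "race to idle" comparison. All power and time figures are invented
# purely for illustration, not measurements.
def energy_joules(active_w, active_s, idle_w, window_s):
    """Energy over a fixed window: an active burst, then idle for the rest."""
    return active_w * active_s + idle_w * (window_s - active_s)

window_s = 10.0
fast_burst = energy_joules(active_w=1.5, active_s=1.0, idle_w=0.05, window_s=window_s)
slow_grind = energy_joules(active_w=0.6, active_s=6.0, idle_w=0.05, window_s=window_s)

print(f"fast burst then idle: {fast_burst:.2f} J")   # 1.5*1 + 0.05*9 = 1.95 J
print(f"slow sustained work:  {slow_grind:.2f} J")   # 0.6*6 + 0.05*4 = 3.80 J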
http://alienbabeltech.com/main/?p=19309
http://alienbabeltech.com/main/?p=17125
It's a good read for noobs like me. Also read the comments, as there's lots of constructive criticism [that actually adds to the information in the article].
Kind of wild to come across people quoting me when I'm just Googling the web for more info.
I'd just like to point out that I was probably wrong on the entire first part about the 3640. I can't post links yet, but Google "Android phones benchmarked; it's official, the Galaxy S is the fastest." for my blog article on why.
And the reason I'm out here poking around for more information is that AnandTech.com (well known for their accurate and detailed articles) just repeatedly described the SoC in the Droid X as an OMAP 3630 instead of the 3640.
EDIT - I've just found a blog on TI's website that calls it a 3630. I guess that's that! I need to find a TI engineer to make friends with for some inside info.
Anyhow, thanks for linking my work!
Make no mistake, the OMAP 3xxx series gets left in the dust by the Hummingbird.
Also, I wouldn't really say that Samsung hired Intrinsity to make the CPU - they worked together. Intrinsity is owned by Apple, and the Hummingbird is the same core as the A4, but with a faster graphics processor - the PowerVR SGX 540.
There was a bug in the Galaxy S unit they tested, which the author later confirmed in his own comments.
I bought an Android tablet running an ARM11 (ARMv6) processor. Not sure what the GPU is. However, I ran the NeoCore benchmark for the GPU and got an average of about 13.2 FPS. I compared this against the Droid Incredible (Qualcomm Snapdragon) and Droid X (TI OMAP 3630), both of which, as you know, are Cortex-A8 variants. All are running at 1.0 GHz.
I also ran Softweg's Benchmark on those 3 devices.
For my tablet, I got scores of about 98 for the GPU and 936 for the CPU.
For the Droid Incredible I got NeoCore score of 26 FPS and for the X, 42 FPS. However, they only scored 27 & 30 respectively on Softweg's GPU test.
I would believe the NeoCore score as I am sure the GPU is poor on the tablet. Why would Softweg's Benchmark app show higher scores on my GPU versus those more advanced Android phones?
My CPU score is also higher than the phones', which is not possible. Thank you.
Interesting, but you know what, maybe it's that the NeoCore bench renders the same amount of data all the time, and the other bench resizes it to the size of the screen, so if your tab has a resolution lower than 480x800, that's the reason why.
Actually the resolution is 1024x600. I could understand why the NeoCore score is low, which is what I would expect. However, I do not know why my CPU & GPU scores under Benchmark would be higher than those of a better Cortex processor.
The new dual-core Snapdragon makes Nvidia's Tegra 2 look like a single-core CPU!
And it's not even out of development yet, so this review is on pre-release hardware (Mobile Development Platform (MDP)), which means it's not even optimized yet!
this is Massively Impressive!
some highlights
Qualcomm Mobile Development Platform (MDP)
SoC 1.5 GHz 45nm MSM8660
CPU Dual Core Snapdragon
GPU Adreno 220
RAM (?) LPDDR2
NAND 8 GB integrated, microSD slot
Cameras 13 MP Rear Facing with Autofocus and LED Flash, Front Facing (? MP)
Display 3.8" WVGA LCD-TFT with Capacitive Touch
Battery 3.3 Whr removable
OS Android 2.3.2 (Gingerbread)
...............................................................................................
the LG 3D
The LG Optimus 3D also has a dual-core CPU:
Dual-core 1 GHz ARM Cortex-A9 processor, PowerVR SGX540 GPU, TI OMAP4430 chipset
................................................................................................
the LG 2x
The LG Optimus 2X has a dual-core CPU:
Dual-core 1 GHz ARM Cortex-A9 processor, ULP GeForce GPU, Tegra 2 chipset
................................................................................................
the Nexus S
The Nexus S has a single-core CPU:
(single core) 1 GHz ARM Cortex-A8 processor, PowerVR SGX540
................................................................................................
GLBenchmark 2.0 Egypt
38 Qualcomm MDP
31 LG 3D
25 LG 2x
21 Nexus S
GLBenchmark 2.0 Pro
94 Qualcomm MDP
55 LG 3D
51 LG 2x
42 Nexus S
Quake 3 FPS (Frames per second)
80 Qualcomm MDP
50 LG 2x
52 Nexus S
N/A LG 3D
Quadrant / 3D / 2D
2851 / 1026 / 329 Qualcomm MDP
2670 / 1196 / 306 LG 2x
1636 / 588 / 309 Nexus S
N/A LG 3D
NOTE: take the Quadrant scores with a grain of salt.
Here's what Anand has to say about it:
"What all Quadrant is putting emphasis on with its 2D and 3D subtests is something of a mystery to me. There isn't a whole lot of documentation, but again it's become something of a standard. The 1.5 GHz MSM8660 leads in overall score and the 2D subtest, but trails Tegra 2 in the 3D subtest. If you notice the difference between Hummingbird (SGX540) from 2.1 to 2.3, you can see how Quadrant's strange 3D behavior on Android 2.3 seems to continually negatively impact performance. I saw the same odd missing texture and erratic performance back when I tested the Nexus S as I did on the MDP. Things like this and lack of updates are precisely why we need even better testing tools to effectively gauge performance"
Source: Anandtech.com
http://www.anandtech.com/show/4243/...ormance-1-5-ghz-msm8660-adreno-220-benchmarks
Hope u enjoyed this
Ric H. (a1yet)
PS: don't rule out Nvidia yet; their dual core may have gotten blown out of the water, BUT
will their quad-core (four) CPU AND 12-core GPU be better?
NVIDIA's Project Kal-El: Quad-Core A9s Coming to Smartphones/Tablets This Year
Link:
http://www.anandtech.com/show/4181/...re-a9s-coming-to-smartphonestablets-this-year
a1yet said:
PS: don't rule out Nvidia yet their dual core may have gotten blown out of the water BUT
will their 12 core cpu be better ?
If you're one of those benchmark nut-riders, at least take some time to understand what it is that you're reading. It's a 12-core GPU, a big difference from a 12-core CPU, which doesn't even exist on desktop computers yet (unless you're talking about multi-socket server-class mobos), let alone on a mobile phone.
And the second point, which 99% of the people who lust after benchmarks don't have a damn clue about: screen size and resolution. But I'm sure you don't care to know much about it, OP.
I don't see the point of benchmarks if they don't tell the real world stories.
Not sure if the information is accurate; however, it will be nice to have competition so there are always better CPUs coming out.
GREAT cause the ipad is killing tegra 2 already
I think mobile processors are similar to desktop processors. There's just too much going on to accurately benchmark. My OG Droid with a 1.25 GHz overclock doesn't even come close to touching my HTC Thunderbolt on stock, yet technically it's 250 MHz faster, right? The HTC's updated 1 GHz processor is faster than other 1 GHz processors, yet rated at 1 GHz. I don't see logic in all the hype.
lude219 said:
And the second point which 99% of the people who tend to lust at the benchmarks don't have a damn clue about, screen size and resolution. But I'm sure you don't care to know much about it, OP.
WELL, my "PS:" was added in haste and I made a typo. My whole post was about "GRAPHICS" performance, so the typo did not impact the heart of my post!
sad day for you
because with your 2 brain cells you obviously have NO CLUE what you are talking about. "Screen SIZE" has no bearing on performance! None, zero, zip, zilch!
Talk to me about screen size next time I'm playing Angry Birds on my 52-inch HDTV!
the only thing that has ANY bearing on performance IS "resolution"
so to explain it in a way that you can understand:
the only impact screen size has is that it sometimes allows you (depending on how the manufacturers implement it) to have a higher ....
WAIT FOR IT ...........
WAIT FOR IT ...........
"Resolution"
WOW SAD Day for you !
Go bash the post of someone who can tolerate your ignorance, and leave mine alone.
Sincerely
Ric H. (a1yet)
ngarcesp said:
GREAT cause the ipad is killing tegra 2 already
and the iPad 2's processor is made by Samsung
Sent from HTC EVO
a1yet said:
WELL, my "PS:" was added in haste and I made a typo. My whole post was about "GRAPHICS" performance, so the typo did not impact the heart of my post!
sad day for you
because with your 2 brain cells you obviously have NO CLUE what you are talking about. "Screen SIZE" has no bearing on performance! None, zero, zip, zilch!
Talk to me about screen size next time I'm playing Angry Birds on my 52-inch HDTV!
the only thing that has ANY bearing on performance IS "resolution"
so to explain it in a way that you can understand:
the only impact screen size has is that it sometimes allows you (depending on how the manufacturers implement it) to have a higher ....
WAIT FOR IT ...........
WAIT FOR IT ...........
"Resolution"
WOW SAD Day for you !
Go bash the post of someone who can tolerate your ignorance, and leave mine alone.
Sincerely
Ric H. (a1yet)
I like you, pal! That's the spirit!
Forget the haters, dude, there are many around!
r916 said:
and the iPad 2's processor is made by Samsung
Sent from HTC EVO
I don't know about it being made by Samsung, but the CPU (the CPU itself, not the whole chip) is larger than the other CPUs, thus having more space for more transistors. That significantly boosts performance.
Some exciting news: the first real-world benchmark has appeared for an ARM A15 chip, in this case the Samsung Exynos 5250, which has launched in the latest Chromebook.
Chip Info - dual-core A15 @ 1.7 GHz & Mali T604 GPU.
http://www.samsung.com/global/busin...t/application/detail?productId=7668&iaId=2341
The benchmark is SunSpider, which is not multi-threaded, i.e. it does not utilise multiple cores, so you can evaluate the actual (JavaScript) performance of a single core. Now we can see the performance improvement ARM has baked into their latest hardware.
Courtesy of GigaOM, SunSpider on the ARM version of Google Chrome that comes installed on the Chromebook = 660 ms (lower is better). Compare that to the current king-of-the-hill ARM A9 device, the Galaxy Note 2 (Exynos 4412), which is clocked at 1.6 GHz: it achieves 972 ms according to GSMArena, and other sites have similar figures.
http://www.gsmarena.com/samsung_galaxy_note_ii-review-824p5.php
LOWER IS BETTER
Exynos 5250 - A15 @ 1.7 GHz = 660 ms
Exynos 4412 - A9 @ 1.6 GHz = 972 ms
The 5250 is clocked 6% higher than the 4412, so if we adjust the results for CPU frequency parity:
Exynos 5250 = 660 ms
Exynos 4412 @ 1.7 GHz = 914 ms
This is not an exhaustive performance test, but we can see that in this one popular benchmark the ARM A15 completes the run in roughly 30% less time than the A9 architecture when adjusted for clock speed (equivalently, it's nearly 40% faster per clock).
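Here is the frequency-parity adjustment spelled out. SunSpider reports completion time (lower is better), and the scaling assumes performance tracks clock linearly, which is only an approximation:

Code:
# SunSpider reports completion time (lower is better). Scale the A9 result to
# 1.7 GHz assuming performance tracks frequency linearly (an approximation).
a15_ms = 660.0              # Exynos 5250, Cortex-A15 @ 1.7 GHz
a9_ms, a9_ghz = 972.0, 1.6  # Exynos 4412, Cortex-A9 @ 1.6 GHz

a9_adjusted_ms = a9_ms * (a9_ghz / 1.7)     # ~915 ms at frequency parity
time_saved = 1 - a15_ms / a9_adjusted_ms    # ~0.28 -> roughly 30% less time
speedup = a9_adjusted_ms / a15_ms           # ~1.39 -> roughly 40% faster

print(f"adjusted A9: {a9_adjusted_ms:.0f} ms, A15 saves {time_saved:.0%}, speedup {speedup:.2f}x")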
To sweeten the deal further, A15 SoCs will run at higher clocks than A9s; Tegra 4 (T40) is stated to run @ 1.8 GHz with a bump to 2 GHz after a couple of quarters, just like Tegra 3. Samsung has the even mightier 5450, a quad-core variant of the chip in this test, rumored to run @ 2 GHz. Combined with a much more powerful GPU and Android's software optimisations, 2013 is going to be one hell of a year for tech fans :victory:
Source:
http://gigaom.com/mobile/video-hands-on-with-googles-new-249-chromebook/
Nice find. I am also looking for Mali-T604 results. GLBenchmark results will be interesting. 72 GFLOPS does sound very good.
EDIT: I think he says 620 ms in the video. Also, I am sure it will get better as the Chrome OS code is optimized for ARM. This is just the first release. Exynos 4 has been optimized to the limit. They can't push it any further now, at least not by a big margin.
hot_spare said:
Nice find. I am also looking for Mali-T604 results. GLBenchmark results will be interesting. 72 GFLOPS does sound very good.
You may have to wait a while; Chrome OS can't run Android apps like GLBenchmark, only web apps. The reason SunSpider is a good test in this case is that both use the ARM version of Chrome, which uses the same underlying technology (WebKit & the V8 JavaScript engine).
Edit: there are some unverified benchmarks from ES 2.0 Taiji, but they are v-sync limited to 60 fps, so we can't tell how powerful the T-604 is from that bench.
http://www.phonearena.com/news/Sams...i-T604-graphics-pops-up-in-benchmarks_id34681
True. I think we have to wait for the SGS4 for those benchmarks. I'm more interested in Browsermark, Peacekeeper, and Google Octane numbers. Google itself mentioned that SunSpider is outdated.
http://sunspider-mod.googlecode.com/svn/data/hosted/sunspider.html
hot_spare said:
EDIT: I think he says 620 ms in the video. Also, I am sure it will get better as the Chrome OS code is optimized for ARM. This is just the first release. Exynos 4 has been optimized to the limit. They can't push it any further now, at least not by a big margin.
In the video he mentions 620 ms, but in the comments he states 660 ms for SunSpider when asked the question; I chose the 660 ms figure to be conservative.
Antutu benchmark!
I kept looking, and found something interesting now.
"Supposedly" the first Antutu benchmark for the Exynos 5250. The values show it's running at 1.5 GHz. For a dual-core SoC, a score of 14185 sounds very good.
The most interesting part is the 3D graphics numbers, which are 3x those of the 4412 SoC.
Source: http://www.antutu.com/view.shtml?id=2718
With more optimization, this can be really powerful.
Looks like this chip will also end up in the Nexus 10
Turbotab said:
Looks like this chip will also end up in the Nexus 10
That's going to be a monster tablet.
Peacekeeper browser benchmark for Exynos 5250 gets more than 1200:
https://plus.google.com/u/0/+JoeWilcox/posts/8LrBK9CKJG4
Better than any other mobile SoC so far.
This chip rapes every other chip out there, even the s4 pro and apple a6. look here- http://www.androidauthority.com/exynos-5-dual-benchmarks-125134/
prajju123 said:
This chip rapes every other chip out there, even the s4 pro and apple a6. look here- http://www.androidauthority.com/exynos-5-dual-benchmarks-125134/
Dude, please don't use the word rape; it's an ugly word. But we must wait for the GLBenchmark results of the Mali T-604 against the Apple A6 & A6X. I hope it beats them, but it won't be easy; Apple used a lot of die space to create them.
Hoping for an Exynos 5450 (5 Quad) by March or April of 2013.
Is it the same chip they use in the new Chromebook?
lz2323 said:
Is it the same chip they use in the new Chromebook?
Exactly the same, dual-core Exynos 5250 - Mali T-604.
Hello, I'd like to know if there's any difference between the Snapdragon 600 and 800, leaving the GPU aside.
To be more clear, I want to know the difference between the CPU speeds, so my question is:
Let's say I have a Snapdragon 800 running at 2.1 GHz and I have a Snapdragon 600 overclocked with a kernel to 2.1 GHz; are they gonna be the same, or is the Snapdragon 800 gonna be faster even if it's clocked at the same speed as the 600?
You can't compare the Snapdragon 800 @ 2.3 GHz to a first-gen Intel i7 920 running at 2.4 GHz; of course the i7 is a lot faster.
Is a Snapdragon 800 running at 2.1 GHz as fast as a 600 running at 2.1 GHz?
My English isn't the best, and I can hardly explain what I want to know even in my native language, so thanks for taking your time to read this thread and sorry about my broken English/bad explanation.
The Snapdragon 800 is not a CPU. It's an SoC. The CPU within the 800 is a 2.3 GHz Krait 400, and within the Snapdragon 600 it's a 1.9 GHz Krait 300.
If both CPUs run at 1.9 GHz, they will be the same speed. The architecture is the same, only designed for lower output. That is the only difference.
The reason an i7 and a Krait 400 cannot be compared is that they are completely different.
Now, if you could overclock a Krait 300 to match the 2.3 GHz of the Krait 400, theoretically it's the same speed, but of course overheating and stability will probably mean the real-world performance will not be as good.
-----------------------
Sent via tapatalk.
I do NOT reply to support queries over PM. Please keep support queries to the Q&A section, so that others may benefit
Hi,
Both clocked at 2.26 GHz (so with an S600 overclocked), the S800 will always be faster, or both at 2.1 GHz if you want... In short, for raw performance, it's not only the CPU frequency that matters...
http://www.qualcomm.com/snapdragon/processors/800
http://www.qualcomm.com/snapdragon/processors/600
You can also search for Krait 300/400 for the differences, etc...
Also don't forget that the GPU is not the same; the S800's GPU (Adreno 330) is a lot better than the S600's (Adreno 320).
rootSU said:
The Snapdragon 800 is not a CPU. It's an SoC. The CPU within the 800 is a 2.3 GHz Krait 400, and within the Snapdragon 600 it's a 1.9 GHz Krait 300.
If both CPUs run at 1.9 GHz, they will be the same speed. The architecture is the same, only designed for lower output. That is the only difference.
The reason an i7 and a Krait 400 cannot be compared is that they are completely different.
Now, if you could overclock a Krait 300 to match the 2.3 GHz of the Krait 400, theoretically it's the same speed, but of course overheating and stability will probably mean the real-world performance will not be as good.
-----------------------
Sent via tapatalk.
I do NOT reply to support queries over PM. Please keep support queries to the Q&A section, so that others may benefit
Yeah, I just wanted to know if the S800 is faster only because it's clocked higher, or if there's more to it (besides the GPU).
viking37 said:
Hi,
Both clocked at 2.26 GHz (so with an S600 overclocked), the S800 will always be faster, or both at 2.1 GHz if you want... In short, for raw performance, it's not only the CPU frequency that matters...
http://www.qualcomm.com/snapdragon/processors/800
http://www.qualcomm.com/snapdragon/processors/600
You can also search for Krait 300/400 for the differences, etc...
I looked at the S600/S800 on Qualcomm's website, but I found they have the same CPU, just with the S800 clocked higher. I thought the S800 would be faster than the S600 if both ran at the same clock, due to a better architecture.
DarknessWarrior said:
Also don't forget that the GPU is not the same; the S800's GPU (Adreno 330) is a lot better than the S600's (Adreno 320).
Yeah, I know the GPU on the S800 is better, but I was curious about the CPU.
Sooooooo, if both run at the same clock speed they're the same? (Ignoring the heat.)
So the S800 is faster because it can be clocked higher due to the Krait 400; it's only faster than the S600 in clock speed (ignoring the GPU)?
Nice to know. I thought there were more differences besides the clock that made the S800 faster than the S600 CPU-wise.
Thanks for the replies.
PunkOz said:
I looked at s600/s800 at qualcomm's website but I found they have the same CPU, just the s800 clocked higher, I thought s800 would be faster than the S600 if both run at the same clock due to better architecture
Re,
Nope, they are not exactly the same; it's not only a matter of CPU frequency. Look closely.
PunkOz said:
Yeah, I just wanted to know if the S800 is faster only because it's clocked higher, or if there's more to it (besides the GPU).
I looked at the S600/S800 on Qualcomm's website, but I found they have the same CPU, just with the S800 clocked higher. I thought the S800 would be faster than the S600 if both ran at the same clock, due to a better architecture.
Yeah, I know the GPU on the S800 is better, but I was curious about the CPU.
Sooooooo, if both run at the same clock speed they're the same? (Ignoring the heat.)
So the S800 is faster because it can be clocked higher due to the Krait 400; it's only faster than the S600 in clock speed (ignoring the GPU)?
Nice to know. I thought there were more differences besides the clock that made the S800 faster than the S600 CPU-wise.
Thanks for the replies.
Well, not just heat. The Krait 300 CPU is designed to run at 1.9 GHz, whereas the Krait 400 is designed to run at 2.3 GHz. Running both at 2.3 GHz, they obviously run the same number of cycles, but the quality of the materials/construction and the design will mean that the Krait 300 will not be able to maintain that number of cycles for long, may drop some cycles, etc. Theoretically a cycle is a cycle; in practice, getting all those cycles to work properly is different.
Plus the differences in memory, L2 cache, etc... For all the differences, Google should be your friend; beyond that it gets too technical.
viking37 said:
Plus the differences in memory, L2 cache, etc... For all the differences, Google should be your friend; beyond that it gets too technical.
I actually Googled it, and the CPU is about the same: same L2 cache according to Qualcomm's website, 28 nm, just with the S800 clocked higher. I always Google before making a thread, but I couldn't find an answer to my question, or maybe I didn't ask Google properly.
I know the S800 supports USB 3.0, has faster charging, etc.; I just wanted to know if it would run as fast as an S600 if they have the same clock speed.
In conclusion, the S800 is faster because it runs cooler than the S600, which lets it reach a higher frequency; the better materials used in the S800's construction, etc., make it run cooler, and cooler means more stability under high load plus the ability to reach a higher clock.
Thanks for the help, guys. Correct me if I'm wrong, but I think I got this.
Hi,
Qualcomm will not reveal all on their site
The L2 cache is faster than the S600's, and memory access (memory controller?) too; it's on a bunch of sites... 28 nm, right, but one is LP and the other is HPm...
http://www.anandtech.com/show/6568/qualcomm-krait-400-krait-300-snapdragon-800
The thing we need is the internal hardware details, sources and documentation from Qualcomm; for sure there are other things. Maybe some kernel devs could have good information too?
Maybe if you did not find anything more, it's because there is nothing else to find...
But if you got it, it's fine and I think that all is said :good: