A good OMAP 3640 vs Snapdragon vs Hummingbird article - General Topics

CPU performance from the new TI OMAP 3640 (yes, they’re wrong again; it’s 3640 for the 1 GHz SoC, 3630 is the 720 MHz one) is surprisingly good on Quadrant, the benchmarking tool that Taylor is using. In fact, as you can see from the Shadow benchmarks in the first article, it is shown outperforming the Galaxy S, which initially led me to believe that it was running Android 2.2 (which, as you may know, can easily triple CPU performance). However, I’ve been assured that this is not the case, and the 3rd article seems to confirm it, given that those benchmarks were obtained using a Droid 2 running 2.1.
Now, the OMAP 3600 series is simply a 45 nm version of the 3400 series we see in the original Droid, upclocked accordingly due to the reduced heat and improved efficiency of the smaller feature size.
If you need convincing, see TI’s own documentation: http://focus.ti.com/pdfs/wtbu/omap3_pb_swpt024b.pdf
So essentially the OMAP 3640 is the same CPU as the one in the original Droid, but clocked up to 1 GHz. Why then is it benchmarking nearly twice as fast clock-for-clock (resulting in a nearly 4x improvement), even when still running 2.1? My guess is that the answer lies in memory bandwidth, and that the evidence is in some of the results from the graphics benchmarks.
We can see from the 3rd article that the Droid 2’s GPU performs almost twice as fast as the one in the original Droid. We know that the GPU in both devices is the same model, a PowerVR SGX 530, except that the Droid 2’s SGX 530 is, like the rest of the SoC, on the 45 nm feature size. This means it can be clocked considerably faster. It would be easy to assume that this is the reason for the doubled performance, but that’s not necessarily the case. The original Droid’s SGX 530 runs at 110 MHz, substantially less than its standard clock speed of 200 MHz. This downclocking is likely due to the memory bandwidth limitations I discussed in my Hummingbird vs Snapdragon article, where the original Droid was running LPDDR1 memory at a fairly low bandwidth that didn’t allow the GPU to function at stock speed. If those limitations were removed by adding LPDDR2 memory, the GPU could then be clocked back up (likely to around 200 MHz) to draw even with the new memory bandwidth limit, which is probably just about twice what it was with LPDDR1.
So what does this have to do with CPU performance? Well, it’s possible that the CPU was also being limited by LPDDR1 memory, and that the 65 nm Snapdragons that are also tied down to LPDDR1 memory share the same problem. The faster LPDDR2 memory could allow for much faster performance.
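To make the bandwidth argument concrete, here is a quick back-of-envelope sketch in Python; the clock speeds are illustrative assumptions, not confirmed specs for either phone:

```python
# Rough peak-bandwidth arithmetic behind the LPDDR1 vs LPDDR2 argument.
# Clock speeds below are illustrative assumptions, not confirmed phone specs.

def peak_bandwidth_gb_s(clock_mhz, bus_width_bits=32, channels=1):
    """Peak theoretical bandwidth for a DDR-style interface.
    DDR transfers data on both clock edges, hence the factor of 2."""
    bytes_per_transfer = bus_width_bits / 8
    return clock_mhz * 1e6 * 2 * bytes_per_transfer * channels / 1e9

lpddr1 = peak_bandwidth_gb_s(200)  # single-channel 32-bit LPDDR1 at 200 MHz -> ~1.6 GB/s
lpddr2 = peak_bandwidth_gb_s(400)  # same bus with a 400 MHz LPDDR2 clock   -> ~3.2 GB/s
print(f"LPDDR1 ~{lpddr1:.1f} GB/s, LPDDR2 ~{lpddr2:.1f} GB/s ({lpddr2 / lpddr1:.1f}x)")
```

Roughly a doubling, which is consistent with the "just about twice" figure above.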
Lastly, since we know from the second article at the top that the Galaxy S performs so well with its GPU, why is it lacking in CPU performance, only barely edging past the 1 GHz Snapdragon?
It could be that the answer lies in the secret that Samsung is using to achieve those ridiculously fast GPU speeds. Even with LPDDR2 memory, I can’t see any way that the GPU could achieve 90 Mtps; the required memory bandwidth is too high. One possibility is the addition of a dedicated high-speed GPU memory cache, allowing the GPU access to memory tailored to handle its high-bandwidth needs. With this solution to memory bandwidth issues, Samsung may have decided that higher speed memory was unnecessary, and stuck with a slower solution that remains limited in the same manner as the current-gen Snapdragon.
Let’s recap: TI probably dealt with the limitations to its GPU by dropping in higher-speed system RAM, thus boosting overall system bandwidth to nearly double GPU and CPU performance together.
Samsung may have dealt with limitations to the GPU by adding dedicated video memory that boosted GPU performance several times, but leaving CPU performance unaffected.
This, I think, is the best explanation for what I’ve seen so far. It’s very possible that I’m entirely wrong and something else is at play here, but that’s what I’ve got.
CPU Performance
Before I go into details on the Cortex-A8, Snapdragon, Hummingbird, and Cortex-A9, I should probably briefly explain how some ARM SoC manufacturers take different paths when developing their own products. ARM is the company that owns licenses for the technology behind all of these SoCs. They offer manufacturers a license to an ARM instruction set that a processor can use, and they also offer a license to a specific CPU architecture.
Most manufacturers will purchase the CPU architecture license, design a SoC around it, and modify it to fit their own needs or goals. T.I. and Samsung are examples of these; the S5PC100 (in the iPhone 3GS) as well as the OMAP3430 (in the Droid) and even the Hummingbird S5PC110 in the Samsung Galaxy S are all SoCs with Cortex-A8 cores that have been tweaked (or “hardened”) for performance gains to be competitive in one way or another. Companies like Qualcomm however will build their own custom processor architecture around a license to an instruction set that they’ve chosen to purchase from ARM. This is what the Snapdragon’s Scorpion processor is, a completely custom implementation that shares some similarities with Cortex-A8 and uses the same ARMv7 instruction set, but breaks away from some of the limitations that the Cortex-A8 may impose.
Qualcomm’s approach is significantly more costly and time consuming, but has the potential to create a processor that outperforms the competition. Through its own custom architecture configuration (which Qualcomm understandably does not go into much detail about), the Scorpion CPU inside the Snapdragon SoC gains an approximate 5% improvement in instructions per clock cycle over an ARM Cortex-A8. Qualcomm appeals to manufacturers as well by integrating features such as GPS and cell network support into the SoC, reducing the need for a cell phone manufacturer to add additional hardware onto the phone. This allows for a more compact phone design, or room for additional features, which is always an attractive option. Upcoming Snapdragon SoCs such as the QSD8672 will allow for dual-core processors (not supported by the Cortex-A8 architecture) to boost processing power as well as providing further ability to scale performance appropriately to meet power needs. Qualcomm claims that we’ll see these chips in the latter half of 2010, and rumor has it that we’ll begin seeing them show up first in Windows Mobile 7 Series phones in the Fall. Before then, we may see a 45 nm version of the QSD8650 dubbed “QSD8650A” released in the Summer, running at 1.3 GHz.
You might think that the Hummingbird doesn’t stand a chance against Qualcomm’s custom-built monster, but Samsung isn’t prepared to throw in the towel. In response to Snapdragon, they hired Intrinsity, a semiconductor company specializing in tweaking processor logic design, to customize the Cortex-A8 in the Hummingbird to perform certain binary functions using significantly fewer instructions than normal. Samsung estimates that 20% of the Hummingbird’s functions are affected, and of those, on average 25-50% fewer instructions are needed to complete each task. Overall, the processor can perform tasks 5-10% more quickly while handling the same 2 instructions per clock cycle as an unmodified ARM Cortex-A8 processor, and Samsung states it outperforms all other processors on the market (a statement seemingly aimed at Qualcomm). Many speculate that the S5PC110 CPU in the Hummingbird will be in the iPhone HD, and that its sister chip, the S5PV210, is inside the Apple A4 that powers the iPad. (UPDATE: Indications are that the model # of the SoC in the Apple iPad’s A4 is “S5L8930”, a Samsung part # that is very likely closely related to the S5PV210 and Hummingbird. I report and speculate upon this here.)
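As a quick sanity check on those Intrinsity figures, here is a simple Amdahl-style estimate using only the percentages quoted above:

```python
# Sanity check on the Intrinsity numbers quoted above: if ~20% of the workload
# needs 25-50% fewer instructions, the overall gain should land near the
# claimed 5-10% range (simple Amdahl-style estimate).
affected_fraction = 0.20
for reduction in (0.25, 0.50):
    remaining = (1 - affected_fraction) + affected_fraction * (1 - reduction)
    speedup = 1 / remaining - 1
    print(f"{reduction:.0%} fewer instructions on 20% of work -> ~{speedup:.0%} faster overall")
# prints roughly 5% and 11%, in line with the claimed 5-10%
```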
Lastly, we really should touch upon Cortex-A9. It is ARM’s next-generation processor architecture that continues to build on the tried-and-true ARMv7 instruction set. Cortex-A9 stresses production on the 45 nm scale as well as support for multiple cores for processing power and efficiency. Changes in core architecture also allow a 25% improvement in instructions handled per clock cycle, meaning a 1 GHz Cortex-A9 will perform considerably faster than a 1 GHz Cortex-A8 (or even Snapdragon) equivalent. Other architecture improvements such as support for out-of-order instruction handling (which, it should be pointed out, the Snapdragon partially supports) will allow significant gains in performance per clock cycle by letting the processor prioritize calculations based upon the availability of data. TI has predicted its Cortex-A9 OMAP4440 will hit the market in late 2010 or early 2011, and promises that the OMAP4 series will offer dramatic improvements over any Cortex-A8-based designs available today.
GPU performance
There are a couple problems with comparing GPU performance that some recent popular articles have neglected to address. (Yes, that’s you, AndroidAndMe.com, and I won’t even go into a rant about bad data). The drivers running the GPU, the OS platform it’s running on, memory bandwidth limitations as well as the software itself can all play into how well a GPU runs on a device. In short: you could take identical GPUs, place them in different phones, clock them at the same speeds, and see significantly different performance between them.
For example, let’s take a look at the iPhone 3GS. It’s commonly rumored to contain a PowerVR SGX 535, which is capable of processing 28 million triangles per second (Mt/s). There’s a driver file on the phone that contains “SGX535” in the filename, but that shouldn’t be taken as proof as to what it actually contains. In fact, GLBenchmark.com shows the iPhone 3GS putting out approximately 7 Mt/s in its graphics benchmarks. This initially led me to believe that the iPhone 3GS actually contained a PowerVR SGX 520 @ 200 MHz (which incidentally can output 7 Mt/s) or alternatively a PowerVR SGX 530 @ 100 MHz because the SGX 530 has 2 rendering pipelines instead of the 1 in the SGX 520, and tends to perform about twice as well. Now, interestingly enough, Samsung S5PC100 documentation shows the 3D engine as being able to put out 10 Mt/s, which seemed to support my theory that the device does not contain an SGX 535.
However, the GPU model and clock speed aren’t the only limiting factors when it comes to GPU performance. The SGX 535 for example can only put out its 28 Mt/s when used in conjunction with a device that supports the full 4.2 GB per second of memory bandwidth it needs to operate at this speed. Assume that the iPhone 3GS uses single-channel LPDDR1 memory operating at 200 MHz on a 32-bit bus (which is fairly likely). This allows for 1.6 GB/s of memory bandwidth, which is approximately 38% of what the SGX 535 needs to operate at its peak speed. Interestingly enough, 38% of 28 Mt/s equals just over 10 Mt/s… supporting Samsung’s claim (with real-world performance at 7 Mt/s being quite reasonable). While it still isn’t proof that the iPhone 3GS uses an SGX 535, it does demonstrate just how limiting single-channel memory (particularly slower memory like LPDDR1) can be and shows that the GPU in the iPhone 3GS is likely a powerful device that cannot be used to its full potential. The GPU in the Droid likely has the same memory bandwidth issues, and the SGX 530 in the OMAP3430 appears to be down-clocked to stay within those limitations.
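For anyone who wants to follow the arithmetic, here is the back-of-envelope estimate spelled out; the 1.6 GB/s figure rests on the assumed LPDDR1 configuration above:

```python
# Scale the SGX 535's peak triangle rate by the fraction of its required
# memory bandwidth that is actually available (figures from the paragraph above).
available_bw, required_bw = 1.6, 4.2   # GB/s (assumed LPDDR1 vs SGX 535 requirement)
peak_triangle_rate = 28.0              # Mt/s at full bandwidth
limited_rate = peak_triangle_rate * (available_bw / required_bw)
print(f"Bandwidth-limited estimate: ~{limited_rate:.1f} Mt/s")  # ~10.7 Mt/s
```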
But let’s move on to what’s really important: the graphics processing power of the Hummingbird in the Samsung Galaxy S versus the Snapdragon in the EVO 4G. It’s quickly apparent that Samsung is claiming performance approximately 4x greater than the 22 Mt/s the Snapdragon QSD8650 can manage. It’s been rumored that the Hummingbird contains a PowerVR SGX 540, but at 200 MHz the SGX 540 puts out 28 Mt/s, approximately 1/3 of the 90 Mt/s that Samsung is claiming. Either Samsung has decided to clock an SGX 540 at 600 MHz, which seems rather high given reports that the chip is capable of speeds of “400 MHz+”, or they’ve chosen to include a multi-core PowerVR SGX XT solution. Essentially this would allow 3 PowerVR cores (or 2 up-clocked ones) to hit the 90 Mt/s mark without having to push the GPU past 400 MHz.
Unfortunately, this brings us right back to the memory bandwidth limitation argument, because while the Hummingbird likely uses LPDDR2 memory, it still appears to have only single-channel memory controller support (capping memory bandwidth at 4.2 GB/s), which raises the question of how the PowerVR GPU obtains the large amount of memory bandwidth it needs to draw and texture polygons at those high speeds. If the PowerVR SGX 540 (which, like the SGX 535, performs at 28 Mt/s at 200 MHz) requires 4.2 GB/s of memory bandwidth, drawing 90 Mt/s would require over 12.6 GB/s of memory bandwidth, 3 times what is available. Samsung may be citing purely theoretical numbers, or using another solution such as increasing GPU cache sizes. This would allow for higher peak speeds, but it’s questionable whether it could achieve sustainable 90 Mt/s performance.
Qualcomm differentiates itself from most of the competition (once again) by using its own graphics processing solution. The company bought AMD’s Imageon mobile-graphics division in 2008, and used AMD’s Imageon Z430 (now rebranded Adreno 200) to power the graphics in the 65 nm Snapdragons. The 45 nm QSD8650A will include an Adreno 205, which will provide some performance enhancements to 2D graphics processing as well as hardware support for Adobe Flash. It is speculated that the dual-core Snapdragons will utilize the significantly more powerful Imageon Z460 (or Adreno 220), which apparently rivals the graphics processing performance of high-end mobile gaming systems such as the Sony PlayStation Portable. Qualcomm is claiming nearly the same performance (80 Mt/s) as the Samsung Hummingbird in its upcoming 45 nm dual-core QSD8672, and while LPDDR2 support and a dual-channel memory controller are likely, it seems pretty apparent that, like Samsung, something else must be at play for them to achieve those claims.
While Samsung and Qualcomm tend to stay relatively quiet about how they achieve their graphics performance, TI has come out and specifically stated that its upcoming OMAP4440 SoC supports both LPDDR2 and a dual-channel memory controller paired with a PowerVR SGX 540 chip to provide “up to 2x” the performance of its OMAP3 line. This is a reasonable claim assuming the SGX 540 is clocked to 400 MHz and requires a bandwidth of 8.5 GB/s, which can be achieved using LPDDR2 at 533 MHz in conjunction with the dual-channel controller. This comparatively docile graphics performance may be due to TI’s rather straightforward approach to the ARM Cortex-A9 configuration.
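Checking that 8.5 GB/s figure, assuming "533 MHz" refers to the interface clock of a 32-bit, double-data-rate bus across two channels (my interpretation, not TI's wording):

```python
# Peak bandwidth for a 32-bit LPDDR2 interface at a 533 MHz clock (double data
# rate) across two channels -- the configuration described above.
clock_hz, bus_bytes, channels = 533e6, 4, 2
bandwidth_gb_s = clock_hz * 2 * bus_bytes * channels / 1e9
print(f"~{bandwidth_gb_s:.1f} GB/s")  # ~8.5 GB/s, matching the stated SGX 540 requirement
```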
Power Efficiency
Moving onward, it’s also easily noticeable that the next generation chipsets on the 45 nm scale are going to be a significant improvement in terms of performance and power efficiency. The Hummingbird in the Samsung Galaxy S demonstrates this potential, but unfortunately we still lack the power consumption numbers we really need to understand how well it stacks up against the 65 nm Snapdragon in the EVO 4G. It can be safely assumed that the Galaxy S will have overall better battery life than the EVO 4G given the lower power requirements of the 45 nm chip, the more power-efficient Super AMOLED display, as well as the fact that both phones sport equal-capacity 1500 mAh batteries. However, it should be noted that the upcoming 45 nm dual-core Snapdragon is claimed to be coming with a 30% decrease in power needs, which would allow the 1.5 GHz SoC to run at nearly the same power draw as the current 1 GHz Snapdragon. Cortex-A9 also boasts numerous improvements in efficiency, claiming power consumption numbers nearly half that of the Cortex-A8, as well as the ability to use multiple-core technology to scale processing power in accordance with energy limitations.
While it’s almost universally agreed that power efficiency is a priority for these processors, many criticize the amount of processing power these new chips are bringing to mobile devices, and ask why so much performance is necessary. Whether or not mobile applications actually need this much power is not really the concern, however; improved processing and graphics performance with little to no additional increase in energy needs will allow future phones to actually be much more efficient in terms of power. This is because, ultimately, power efficiency relies in large part on the ability of the hardware in the phone to complete a task quickly and return to an idle state where it consumes very little power. This “burst” processing, while consuming fairly high amounts of power for very short periods of time, tends to be more economical than prolonged, slower processing. So as long as ARM chipset manufacturers can continue to crank up the performance while keeping power requirements low, there’s nothing but gains to be had.
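Here is a toy "race to idle" calculation that illustrates the point; the power numbers are made up for illustration, not measurements from any real chip:

```python
# "Race to idle" with made-up numbers: a fast chip that burns more power for a
# short burst can still use less total energy than a slower chip that stays
# busy longer, because idle power is so much lower.
idle_mw = 20
window_s = 5.0                              # total time window considered
fast = {"active_mw": 800, "busy_s": 1.0}    # finishes the task quickly
slow = {"active_mw": 500, "busy_s": 2.5}    # lower peak power, busy longer

def energy_mj(chip):
    return chip["active_mw"] * chip["busy_s"] + idle_mw * (window_s - chip["busy_s"])

print(f"fast burst: {energy_mj(fast):.0f} mJ, slow grind: {energy_mj(slow):.0f} mJ")
# fast: 880 mJ vs slow: 1300 mJ over the same 5-second window
```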
http://alienbabeltech.com/main/?p=19309
http://alienbabeltech.com/main/?p=17125
It's a good read for noobs like me; also read the comments, as there's lots of constructive criticism [that actually adds to the information in the article].

Kind of wild to come across people quoting me when I'm just Googling the web for more info.
I'd just like to point out that I was probably wrong on the entire first part about the 3640. I can't post links yet, but Google "Android phones benchmarked; it's official, the Galaxy S is the fastest." for my blog article on why.
And the reason I'm out here poking around for more information is because AnandTech.com (well known for their accurate and detailed articles) just repeatedly described the SoC in the Droid X as an OMAP 3630 instead of the 3640.
EDIT - I've just found a blog on TI's website that calls it a 3630. I guess that's that! I need to find a TI engineer to make friends with for some inside info.
Anyhow, thanks for linking my work!

Make no mistake, the OMAP 3xxx series gets left in the dust by the Hummingbird.
Also, I wouldn't really say that Samsung hired Intrinsity to make the CPU - they worked together. Intrinsity is owned by Apple; the Hummingbird is the same core as the A4, but with a faster graphics processor, the PowerVR SGX 540.
There was a bug in the Galaxy S unit they tested, which the author later confirmed in his own comments.

Related

Which Processor is faster & better

"Intel Bulverde 520 MHz"
The one in the Universal
OR
"Qualcomm MSM7201A 528 Mhz"
in the new HTC HD unit
I feel they are the same. Am I right?
Qualcomm is much better.
It's similar to the difference between a 2.5 GHz Pentium 4 and a 2.5 GHz Core 2 Solo.
I don't think a Core 2 Solo and a Pentium 4 with HT differ that much.
l2tp said:
I don't think a Core 2 Solo and a Pentium 4 with HT differ that much.
Google up "Instructions per second" and you'll understand.
The NetBurst architecture of the P4 is one of the worst examples of it in history. A failure by engineering standards.
The PXA270 processor in the Universal is actually a 624 MHz part that has been underclocked. The HTC X7500 uses the same CPU running at 624 MHz. It is clearly the better CPU.
genetik_freak said:
The PXA270 processor in the Universal is actually a 624 MHz part that has been underclocked. The HTC X7500 uses the same CPU running at 624 MHz. It is clearly the better CPU.
Very, very wrong.
I wouldn't say that the two Intel processors are exactly the same, with one just being underclocked via software. Notice how Intel puts out multiple Pentiums of a given generation at different speeds? Would you venture to say that all those chips are the same too?
Also, clock speed is a poor metric when comparing chips from different companies. PDADB.net says that the Intel chip has an ARMv5TE instruction set and the Qualcomm chip has an ARMv6 instruction set. The Intel is a generation behind.
Comparing
Wikipedia says
Main article: Megahertz myth
The clock rate of a computer is only useful for providing comparisons between computer chips in the same processor family. An IBM PC with an Intel 486 CPU running at 50 MHz will be about twice as fast as one with the same CPU, memory and display running at 25 MHz, while the same will not be true for MIPS R4000 running at the same clock rate as the two are different processors with different functionality. Furthermore, there are many other factors to consider when comparing the speeds of entire computers, like the clock rate of the computer's front side bus (FSB), the clock rate of the RAM, the width in bits of the CPU's bus and the amount of Level 1, Level 2 and Level 3 cache.
Clock rates should not be used when comparing different computers or different processor families. Rather, some software benchmark should be used. Clock rates can be very misleading since the amount of work different computer chips can do in one cycle varies. For example, RISC CPUs tend to have simpler instructions than CISC CPUs (but higher clock rates), and superscalar processors can execute more than one instruction per cycle (on average), yet it is not uncommon for them to do "less" in a clock cycle. In addition, subscalar CPUs or use of parallelism can also affect the quality of the computer regardless of clock rate.
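To make the megahertz-myth point concrete for the two chips being discussed, here is a tiny sketch; the IPC values are purely hypothetical placeholders, since neither vendor publishes such a figure:

```python
# Effective throughput is roughly IPC x clock, so a lower-clocked chip with a
# newer architecture can come out ahead. The IPC values here are hypothetical
# placeholders, not published figures for either chip.
chips = {
    "Intel PXA270 @ 624 MHz (ARMv5TE)":    {"mhz": 624, "ipc": 1.0},
    "Qualcomm MSM7201A @ 528 MHz (ARMv6)": {"mhz": 528, "ipc": 1.3},
}
for name, c in chips.items():
    print(f"{name}: ~{c['mhz'] * c['ipc']:.0f} relative units of throughput")
```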
Sonus, you are correct about the MHz comparison. However, the PXA270 in the Universal can be safely "overclocked" to 624 MHz because the chip is designed to max out at that speed.
I would still like to see some benchmark tests between the 624 MHz PXA270 and the 528 MHz Qualcomm MSM7201A.
Generations aside, I can't see the Qualcomm chip outperforming the Intel Chip by much, if any. Also, it should be noted that the PXA270 can be scaled, not sure if that is true for the MSM7201A.
The other catch phrase is "Performance per watt". I bet the MSM7201A has a huge advantage over PXA27x in that, mainly due to newer manufacturing process.
That may be true wuzy, but considering the PXA270 is almost 5 years old and still being used in new devices should tell you plenty about its capabilities and performance.
Not really... It does, however tell a lot about the stinginess of device manufacturers.
As for the overclocking, not every Universal can run 624 MHz without crashing because the CPUs are going through a selection process after manufacturing and there is simply no reason to use the best ones for a device that doesn't need them running at full speed.
The crashes are usually the result of the type of program used to overclock and also the ROM. For the most part, people have found that 624 MHz is pretty stable, including myself. Some have even pushed it beyond that speed, but that's another story...
Also take this into consideration:
The Universal has been on the market since 2005, almost 4 years now. By industry standards, it should be obsolete. Why is it not then? Simply, it is quite inexpensive compared to newer devices with similar features, sometimes fewer. When it comes to performance vs. price vs. features, you just cannot beat the value of the Universal and its blistering fast 520/624 MHz PXA270 CPU! The PXA270's performance is only rivaled by its bigger brother, the 800 MHz PXA320, which has made its way into some newer devices already.
genetik_freak said:
That may be true wuzy, but considering the PXA270 is almost 5 years old and still being used in new devices should tell you plenty about its capabilities and performance.
Try out a Diamond/Touch Pro with Opera 9.5 the next time you see one and notice the speed difference.
On the MSM7201A, compared to our PXA27x, it's a lot smoother.
The lack of drivers for the MSM7200 on a lot of devices released last year tainted our perception of the new-generation chips, I think.
Touch HD vs. ASUS Galaxy7 at end of the year... hmmm
I think you're missing the point wuzy.
I know there are newer devices out now that can deliver slightly better performance in some areas than the Universal, but considering how old our device is, it is to be expected. All I'm saying is that given the age of the Universal compared to what's out there now, The Universal has held up well. Furthermore, with all the new cooked roms popping up, you can expect the Uni to live even longer!
Take a look at H.264 decompression and real high-performance tasks and the PXA270 loses so badly against the PXA320 that it is not even funny anymore...
Why does the Uni keep up with most software? Because most programs are written for the old ARMv4 instruction set, thus wasting a lot of CPU cycles on newer processors that have already moved on. Apart from that the average application simply does not need that much CPU power to begin with.
The Uni held out well in a market that is very slow to adapt new technologies to begin with. The Axim x50v had a dedicated graphics chip at the end of 2004 - how many applications make use of that today? Only some games (ports, emulators) and media players. For those alone the Axim has held out better than the Uni though as it is still one of the best performing PPCs on the market.
Our little one will be around for quite a while, but it is far, far away from what today's devices can offer, and it shows if you run anything beyond mail and office apps on it.
Which Processor is faster & better
I gather from your input above that the "Qualcomm MSM7201A 528 MHz" has higher performance, clock rate, instructions per second, and performance per watt compared to the "Intel Bulverde 520 MHz", by about 2:1. Am I right?
Another Question:
What is the highest speed Processor available for the PDA industry today?
Best Regards.
IMHO the ARM Cortex processors are very far up the ladder when it comes to performance and energy consumption. The Pandora makers claim 10 hours of runtime for their device. Together with its media chip this little bugger is capable of decoding 720p HD video streams (take a look at the Archos 5)
I am not sure if the MSM7201A chipset's CPU alone reaches twice the performance of the Uni, but you will see a huge difference in apps that support and need the latest in CPU architecture (media players & games). If (one way or the other) the 3D capabilities can be put to use you will probably see more than a 2:1 performance boost.
The sad truth is the Universal is one of the slowest VGA devices around. Especially considering lack of the graphical accelerator (which was even present in prototypes).
Too bad the dedicated 3D chip didn't make it into the final design. But it's still better than having a 3D accelerator without drivers! I have a Sharp EM-ONE here with a GoForce 5500 that could theoretically accelerate many video formats. The sad truth is that because there are no drivers, no media player can make use of the chip. Even worse: because the graphics chip still controls the display, video is even slower since the optimized X-Scale drivers can't be used. It's like Sharp and NVidia wanted to punish users twice. So, as bad as it is, the Uni is not the worst device out there!
x86
I wonder why there are no x86 CPUs in mobile devices yet. Maybe because of the high power consumption? x86 CPUs running at 528 MHz would be more powerful than ARM CPUs. Furthermore, the device could run an x86 OS like XP Embedded, with more features and capabilities...
x86-based systems are still too power hungry and too complicated to be used in such a small device (sounds weird when talking about the HTC Universal, doesn't it).

Tegra or Snapdragon

Hi everybody, I just have some questions.
I plan to change my HTC Hermes next year but I don't know which device will be the best...
Snapdragon or Tegra.
Tegra seems to have 8 cores of execution for great graphics but not a big frequency (600-800 MHz). Snapdragon has the GHz and is supposed to reach 1.3 GHz in 2010. There is also a dual-core Snapdragon at 2x1.5 GHz supposed to be available this year, but will it be for smartphones?
These are the questions I have because a PDA is a lot of money for me and I wanna choose the right device...
Thanks
Well, Snapdragon is a multi-core SoC just like Tegra, but what Nvidia is so proud of is power islands. It means that they can shut off unneeded modules (e.g. turn off all modules except the modem when in standby). Tegra uses an ARM11 CPU whereas Snapdragon is based on an improved Cortex-A8; besides, it is clocked at 1 GHz, so Tegra can't win this one. The GPU is better on Tegra and video performance is probably better too, but when it comes to brute force, Snapdragon wins hands down.
I think that is all you need to know about Tegra and Snapdragon. As for that 2x1.5 GHz Snapdragon, it is designed to be used in smartbooks. It would be overkill for a smartphone, at least for now.
Thanks that's all I wanted to know
Also, a MHz is not just a MHz.
First of all, a Qualcomm MHz could mean more or less of a performance boost than an OMAP MHz.
Not to mention it doesn't really matter if the CPU is super fast if the RAM, storage, and other I/O of the device can't keep up.
joplayer said:
Tegra seems to have 8 cores of execution for great graphics but not a big frequency (600-800 MHz). Snapdragon has the GHz and is supposed to reach 1.3 GHz in 2010.
Tegra, just like the Snapdragon, is an SoC. If we use the same logic that Nvidia used, then the Snapdragon is also a multi-core SoC (CPU, GPU, DSP, ...). But it's just marketing, to make people think they're getting an 8-CPU system.
Like Wishmaster89 pointed out, there is a major difference between the CPUs used in both systems.
The 600 MHz ARM11 (ARMv6) in the Tegra is capable of executing about 1/3 of what the Snapdragon's 1 GHz ARMv7 CPU can do.
The GPU, on the other hand, is more powerful in the Tegra. There is a little list being used to compare the overall (theoretical) strengths of each platform's GPU:
Nintendo DS: 120,000 triangles/s, 30 M pixels/s
PowerVR MBX-Lite (iPhone 3G): 1 M triangles/s, 100 M pixels/s
Samsung S3C6410 (Omnia II): 4 M triangles/s, 125.6 M pixels/s
ATI Imageon (Qualcomm MSM72xx): 4 M triangles/s, 133 M pixels/s
PowerVR SGX 530 (Palm Pre): 14 M triangles/s, ___ M pixels/s
ATI Imageon Z430 (Toshiba TG01): 22 M triangles/s, 133 M pixels/s
PowerVR SGX 535 (iPhone 3GS): 28 M triangles/s, 400 M pixels/s
Sony PSP: 33 M triangles/s, 664 M pixels/s
PowerVR SGX 540 (TI OMAP4): 35 M triangles/s, 1000 M pixels/s
Nvidia Tegra APX2500 (Zune HD): 40 M triangles/s, 600 M pixels/s
ATI Imageon _ (Qualcomm QSD8672): 80 M triangles/s, >500 M pixels/s
So, the Tegra's GPU is about twice as powerful as the Snapdragon's ATI Z430 (looking at triangles). The reason I use the term "theoretically" is that a lot of factors can make or break a GPU (many more than for a CPU): bad drivers, bandwidth limitations, too little memory, a bad mix of texture units and vertex units, etc.
The problem with Nvidia is, they have always had the habit of exaggerating things (a lesson learned more than a few times in the past).
Another problem is, are the GPUs actually being used on PDAs/smartphones? A lesson I learned in the past from the x50v, with its own dedicated and (at the time) powerful 2700G (800,000 triangles back then). The reality is, most applications rely mostly on the CPU.
At best, if you have dedicated games written for the PDA/smartphone market, very few will tap into all the power that the Tegra has to offer.
Even the PSX emulators (which run great - full speed 50/60 fps PAL/NTSC games) on the Snapdragon. Forget about running a lot of PSX games on an ARM11 without tweaking (and frame skipping), because emulation relies mostly on brute-force CPU power (and this is where the Snapdragon shines).
So? What is there besides games? Video playback? Sure... The Tegra can supposedly do 1080p, while the TI OMAP and Snapdragon only do 720p. But from what I have read, it's mostly the DSP that does the work. The Snapdragon's DSP runs at 600 MHz; I can't find any information about the Tegra's DSP. Does it even have one? Anybody with more info on how they handle things?
When it comes down to PDAs/smartphones... take it from me. The most important thing is first the CPU, then the amount of memory (and memory speed), then the GPU.
Let's just say I'd like to see a fair comparison between both systems, to see their real power (and not some fake Nvidia PR that a lot of people still fall for).
Like I said, I don't exactly trust Nvidia's numbers when their PR posts crap like this:
Those numbers are what you can call a pure lie. People from the OpenPandora project (which uses a TI OMAP3630 @ 600 MHz, with a slower GPU) are able to run Quake 3 at 35+ fps... yet Nvidia claims 5 fps for the Snapdragon, which is actually more powerful than that OMAP. I love those little [*] next to the text... small text below: "* NVIDIA estimates". In other words, how much trust can somebody place in the specs from a company that pulls stunts like that?
Also... Snapdragon is used in the following smartphones that I know of: Toshiba TG01, Asus F1 (S200), HTC HD2 (Leo), and a few more that are on the way. Where is the Tegra? The MS Zune... that's it...
You'd think that HTC, Toshiba and Asus will all have looked at the different available SoC providers (TI, Qualcomm, Samsung, Nvidia, etc.). Yet... who do they pick for their new top-of-the-line products...
I hope this helps...
OP, there isn't much to add after all that expert info, but I can make it easy for you. SD = raw power, Tegra = fancy graphics. I prefer power, because of the better overall performance.
As I see it, the Tegra chip has two 600 MHz cores plus 6 other cores to do video, audio, etc.
So a 1 GHz Snapdragon would have to split its MHz to deal with any audio, video, etc., whilst the Tegra chip would have separate cores dealing with this stuff, leaving two 600 MHz cores free.
This would make Tegra a lot faster than Snapdragon.
One thing which would be interesting would be battery life in various situations (excluding the Atom, as it's not really a phone CPU).
One thing of note is that every Snapdragon phone, although it seems fast, still has the standard WM lag at times (probably more WM than the CPU).
Whilst the Zune HD looks super smooth and very fast.
We will have to wait for the first Tegra WM phone to see if it has the WM lag, as it's hard to tell by comparing an MP3/4 player (which has an OS that was probably made from the ground up to run on the chip) to a phone.
Ganondolf said:
As I see it, the Tegra chip has two 600 MHz cores plus 6 other cores to do video, audio, etc.
So a 1 GHz Snapdragon would have to split its MHz to deal with any audio, video, etc., whilst the Tegra chip would have separate cores dealing with this stuff, leaving two 600 MHz cores free.
This would make Tegra a lot faster than Snapdragon.
You're completely wrong! As I said, both are multi-core SoCs. Both Snapdragon and Tegra have separate cores for video and audio! The only difference is that Tegra can shut off unneeded modules where Snapdragon can't. Besides, they know that their CPU is slow, so they have to give people something that will make them forget about the CPU, and they decided that talking about 8 cores on something as small as their SoC would be a good choice.
As I said before, the raw CPU power of Snapdragon is at least 3x greater than Tegra, and the Zune HD is smoother because all the work is done on the GPU (besides, the whole Zune OS 4.0 was probably designed around Tegra, so don't expect it to lag), whereas WM is only CPU driven. Besides, wait for the HTC Leo to see an almost lag-free device (show me a device that never lags).
For the last time: for now, Tegra has a slow CPU whereas Snapdragon has a beast of a CPU. Things should change with Tegra 2 and Snapdragon 2.
Ganondolf said:
As I see it, the Tegra chip has two 600 MHz cores plus 6 other cores to do video, audio, etc.
So a 1 GHz Snapdragon would have to split its MHz to deal with any audio, video, etc., whilst the Tegra chip would have separate cores dealing with this stuff, leaving two 600 MHz cores free.
This would make Tegra a lot faster than Snapdragon.
*ugh* So much misinformation... I may not be an expert, but you just claimed that the Snapdragon needs to split its MHz to do... video? Did you even read the Snapdragon's specs? Dedicated... GPU. GPU = video!
Another wrong point is that both cores are not at 600 MHz. One core is at 600 MHz, and one core is at 400 MHz. The 600 MHz core is an ARM11 core, and the 400 MHz one is an ARM7 core (not to be confused with ARMv7, aka Cortex-A8).
The basic idea is that when a phone is in standby, the 400 MHz ARM7 core does the basic staying-alive stuff, whereas the 600 MHz ARM11 core is only used for the big stuff. The basic idea is good.
But the Snapdragon's 1 GHz ARMv7 CPU is able to scale down and reduce its power footprint as well. Which solution is the better one... we will need to see.
To put things in perspective:
Tegra:
* ARM 11
* ARM 7
* GPU
* 2D Engine
* HD Video Encoder
* HD Video Decoder
* Audio
* Imaging
Snapdragon
* ARM v7 ( Cortex A8 )
* GPU
* DSP
* HD Video Decoder
* ...
Now... you will say: hey, look at all those extra cores that the Tegra has. Must be a powerhouse... No... it does not work like that.
The Snapdragon's 600 MHz DSP has several capabilities, including dedicated image processing, etc. The question is, how fast is the image processor in the Tegra? If it's a separate core, it has its own frequency. This alone makes a big difference, because the slower that core, the longer it takes to do the job (and the more power drain).
The 600 MHz Tegra that we are comparing here has only 720p output capability, just like the Snapdragon. As far as I can tell, the Tegra 600 is used in the Zune. Something tells me that the Tegra 650 is more for notebooks.
HD encoding / HD decoding: by any definition, that is part of the GPU. Just like the ATI Z430 has its own dedicated HD capabilities. And any GPU these days has the ability to disable parts of itself to save power, so we can assume that the same capability is in the mobile variant. The Z430 is based on the GPU found in the Xbox 360. It has its own HD, audio, media, etc. processing capabilities (aka, if you like to call it by Nvidia's terms... HD, Audio and Media Cores).
So, from a technical point of view, the Snapdragon also has 8 cores. Hell, we can trump that, because the DSP is capable of more than just image processing. So, how many extra cores can be gained from that?
To be honest, there is so much misinformation that people jump on... it's actually kind of incredible (and frightening)... Though I need to admit, looking at the Google links, Nvidia did a good job of spreading the FUBAR information. Most sites took over the information without questioning it one little bit...
Lag?
And Ganondolf, regarding the lag that you report? To be honest, I have shown several movies to a friend with WM6.5 + TouchFLO backported onto older HTC devices (devices with the same slow CPUs as the Tegra uses). Guess what... beyond a bit of lag in the image viewer, they had no lag.
Take a look at the videos of the HTC HD2 (Snapdragon)... and find the lag there, please...
I have seen a few people like you before on other forums, going around all high & mighty about the Tegra. At first I was impressed by its general specs, until you start to look deeper and discover that the CPU is slow as hell (and the second one is even worse) compared to the Snapdragon / Cortex-A8 / ARMv7 design; that the "extra" cores are just functionality provided by the GPU; and that its 1080p claim does not come from the version used now.
In fact, Snapdragon also has 1080p capability. See the QSD8672. But you will not find that in smartphones just yet. Just like the Tegra 650 with its 1080p. Has anybody even seen a Tegra 650 on the market? I don't think so (for good reason). Looks like another paper launch from Nvidia.
Simply put:
As of July, 2009 or Oct 2009 for that matter:
Snapdragon mobile phones = shipping.
Tegra mobile phones = vapourware. (not even any firm rumours)
Benjiro said:
Lag?
And Ganondolf, regarding the lag that you report? To be honest, I have shown several movies to a friend with WM6.5 + TouchFLO backported onto older HTC devices (devices with the same slow CPUs as the Tegra uses). Guess what... beyond a bit of lag in the image viewer, they had no lag.
Take a look at the videos of the HTC HD2 (Snapdragon)... and find the lag there, please...
The lag I was talking about was on the Toshiba TG01, which I have played with. There is no point saying "look at videos of the HTC HD2", as I saw vids of the TG01 which looked like it was lag free; till the HD2 comes out and I have a play, I (we) won't be able to tell if it's lag free or not. As I see it, you are making your argument about lag on a phone that has not been released, which I think is a rubbish argument, as someone could say a Tegra phone could teleport you across the world (there is no proof).
Also, I'm not on the Tegra bandwagon, as I like Snapdragon just as much; I was going by what I had heard on the net. Maybe, like you said, information has been spun to make it look like the Tegra chip is super powerful compared to all the other phone CPUs, which is not true, but till I see a phone with a Tegra chip in it, how would we know?
agitprop said:
Simply put:
As of July, 2009 or Oct 2009 for that matter:
Snapdragon mobile phones = shipping.
Tegra mobile phones = vapourware. (not even any firm rumours)
By far the most important point.
Far more important than the MHz number which may or may not even indicate greater or lesser performance or battery life than a competitor with an entirely different architecture.
There is one piece of info that I haven't been able to find. Which one of the two has better performance when it comes to battery power usage?
Anyone?
Tegra is right on the ball.
Yes, the ARM11 CPU is theoretically 1/3 the speed of the Cortex, but don't forget there's an ARM7 offloading network traffic, 2D acceleration separate from the CPU and GPU, dedicated HD encoding hardware (decoding is common to both) and sound acceleration. Many of the processing bottlenecks in a mobile device are successfully offloaded in the Tegra, ultimately giving the ARM11 fewer tasks to cope with in the first place, and no need for thread balancing, which, fingers crossed, leads to more stable OS performance. Another thing to note is that Nvidia's official specs say ARM11 MPCore, which means that various Tegra chips could have anywhere from 1 to 4 ARM11 cores (the Tegra chipset used in the Microsoft Zune player was a dual-core ARM11).
The main point though, I think, is the power. You don't need a massive CPU in a mobile device; what you need is battery life, and although we haven't received final figures, the Tegra is looking infinitely more impressive than anything else on the market. If my iPhone 3GS is anything to go by, even 2x the battery life would be welcome; this thing dies in no time at all, be it browsing the web, playing video or music, and reviews show Snapdragon phones to be even worse than this. The Nvidia specs regarding battery in earlier posts are mostly accurate but based on a netbook battery. The Zune HD running the Tegra gets 33 hours of audio and 8.5 hours of video, yet uses only a 660 mAh battery; this is half the size of the battery in the iPhone 3GS and HTC Touch HD2, for example.
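Out of curiosity, here is the average power draw implied by those Zune HD figures, assuming a typical 3.7 V Li-ion cell (the voltage is my assumption):

```python
# Average power draw implied by the Zune HD figures above, assuming a typical
# 3.7 V Li-ion cell (the voltage is my assumption, not a published spec).
capacity_wh = 0.660 * 3.7                    # 660 mAh -> ~2.44 Wh
for task, hours in (("audio", 33.0), ("video", 8.5)):
    print(f"{task}: ~{capacity_wh / hours * 1000:.0f} mW average whole-device draw")
# roughly 74 mW for audio and 287 mW for video
```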
The Tegra GPU is a powerful CUDA-based design and will allow for GPGPU acceleration of the only major computationally intensive task that phones are likely to do in the future, which is image processing for augmented reality.
They've provided on-chip support for most modern input/output devices.
Nvidia have covered all the bases; I'm seriously looking forward to Tegra phones.
Yes, but as I've learned (the hard way) from my Touch Pro, all the features in the world mean nothing if they're not used. The Touch Pro was supposed to have video acceleration and double the speed of my old TyTN. Where are those? Nowhere. Why? Some say "there aren't any drivers for the GPU", others say that the TP's processor may be 500 MHz, but its design is worse than the one in my older TyTN...
I don't care. As a customer, user and buyer, I know that my older phone was faster than my new one. If in the near future we have a 1 GHz Snapdragon phone that does everything on its CPU and a Tegra phone that balances CPU-GPU-physics-whatever across different parts of its design, history says that the Snapdragon will be the better choice. You see, WM Solitaire, Word Mobile, RSS readers, Twitter clients and all existing software, at least for WM, is written to run on a single processor. I've yet to see a good program/game that will actually take advantage of any device's GPU - and that won't happen while the market is split, for a developer would need to create his program for a specific device (meaning less profit) or simply forego any acceleration and create something "that runs anywhere". We can thank Microsoft for going the Linux way and advocating device makers doing whatever they want, whichever way they want, without some standard way of using different hardware parts (like, say, DirectX in Windows).
Very interesting information.
Battery life is really important; that's at the moment the only advantage of the Tegra vs. the Snapdragon.
I am really keen to know whether Manila also works as fast with the lower CPU power of the Tegra chip as it does on the Leo.
There must be some driver or software problem, I would say - because there's no PDA out with the Tegra.
Also no announcement... otherwise it could also be a strategy from HTC so that they don't have a problem selling the Leo and the upcoming Android device.
So we must wait...
I think you guys should see PGR on the Zune HD.
Stunning graphics.
For me the processor speed will come 2nd place to functionality. I have recently started to use the remote desktop on my HD, but wish it had a TV out like my Touch Pro.
I was thinking about upgrading to a Leo but that has no TV also.
Discussing advanced graphics for a Snapdragon is not helpful if you are restricted to 4 inches.
Hopefully HTC will put HDMI or at least video out on all future devices. The resolution of the devices is up to it, so why not.

Our Next Phone...

Looking back, when I switch phones it is usually when there is a better device out with a significant improvement over my current device. My first smartphone was the T-Mobile MDA (HTC Wizard), which I bought roughly 5 years ago. The next phone was the T-Mobile Wing (HTC Atlas); with a much smaller form factor and faster CPU, the device was a great improvement.
My next device was my first real HTC-branded phone, the Touch Diamond. The Diamond was a complete overhaul from the other two HTC phones I used. I loved every little part of it. But going from the Diamond to the glamorous HD2 was even more amazing; the screen, the size, everything was perfect.
Now, the HD2 has been out for almost a year and I'm ready to get a new phone, but I am wondering what things I should consider.
I don't think that the Droid X or the Galaxy S smartphones are really all that much better than the HD2, so I am more interested in the Cortex-A9 phones that are slowly trickling into the market.
The CPUs that will have Cortex-A9 dual core tech are as follows:
Nvidia
Tegra 2
1Ghz
Custom High Profile Graphics
(Motorola Olympus, LG Star)
Qualcomm
Snapdragon 3rd Gen
1.2GHz/1.5GHz
Adreno 220
Verizon HTC Phone
Samsung
Orion
1GHz
Mali 400
(Nexus S)
Texas Instruments
OMAP 4
1GHz+
PowerVR SGX 540
(Pandaboard)
Marvell
Armada 628
1.5GHz + Custom 624MHz DSP
Custom High Profile Graphics
ST-Ericsson
U8500
1.2GHz
Mali 400
So basically what should I do? Wait for all of them to come out and then decide, or get whichever one comes first?
I want the best processing power with the greatest graphics, and was thinking of the Tegra 2, but found that OpenGL ES benchmarks show lower values for the Tegra 2 platform than for the SGX 540.
Galaxy Tab Results:
http://www.glbenchmark.com/phonedetails.jsp?D=Samsung GT-P1000 Galaxy Tab&benchmark=glpro11
Folio 100:
http://www.glbenchmark.com/phonedetails.jsp?D=Toshiba Folio 100&benchmark=glpro11
Are these a result of poor drivers, or is Tegra really weaker than the SGX 540 (and thus weaker than the Mali 400)?
Is the Nexus S a better choice than the Motorola Olympus, or should I wait for HTC's addition to the game with a 3rd-gen Snappy? Will the Adreno 220 GPU outpower the Tegra 2 and Mali 400? What do you guys think, and what do you plan on doing?
Well, firstly, better hardware means nothing if the software is the bottleneck. Secondly, we've often seen that the grunt of the CPU contributes more to the performance of programs than the GPU does in Android OS. Thirdly, you're going to have to wait, see, buy and test these platforms to know which ones are superior... but here is what I've discovered during the course of 2010.
SoC's for 2011:
(listed in what I believe is best-to-worst order)
+ ARM Sparrow: Dual-core Cortex A9 @2.00GHz (on 32nm die), unspecified GPU
+ TI OMAP 4440: Dual-core Cortex A9 @1.5GHz, SGX 540 (90M t/s)
+ Apple A5 (iPad2): Dual-core Cortex A9 @0.9GHz, SGX 543MP2 (130M-150M t/s)
+ Qualcomm MSM8660 (Gen IV Snapdragon): Dual-core Cortex A9 @1.5GHz, Adreno 220 (88M t/s)
+ TI OMAP 4430: Dual-core Cortex A9 @1GHz, SGX 540 (90M t/s)
+ ST-Ericson U8500: Dual-core Cortex A9 @1.2GHz, ARM Mali 400 (50-80M t/s)
+ Samsung Orion: Dual-core Cortex A9 @1GHz, ARM Mali 400 (50-80M t/s)
+ Nvidia Tegra 2: Dual-core Cortex A9 @1GHz, nVidia ULP-GeForce (71M t/s)
+ Qualcomm Scorpion (Gen III Snapdragon): Dual-core Cortex A8 @1.2GHz, Adreno 220 (88M t/s)
Notes: The SGX530 is roughly half the speed of the SGX535. The SGX540 is twice as fast as the SGX535. The Adreno 205 (41M tri/sec) is supposedly faster than the SGX535 but slower than the SGX540 (thus it likely sits in the middle). The Adreno 220 is twice the speed of the Adreno 205, but it is slightly slower than the SGX540 (88M vs 90M tri/sec). Samsung claims the ARM Mali 400 to be 5 times faster than its previous GPU (S3C6410 - 4M tri/sec), about on par (80M tri/sec) with the Adreno 220, but a few leaks benchmarked it as only slightly faster than the SGX535 (40M tri/sec). The GPU used in the Nvidia Tegra 2 has been kept quite under wraps (little is known). I estimated the Tegra 2 at 71M t/sec (Tegra 2 Neocore = 27 fps vs 55 fps for the Galaxy S Neocore, x the 62% screen-resolution disadvantage, x the 90 Mt/s of the SGX540 = 71M t/s). And recently some inside rumors via Fudzilla actually confirmed this exact figure, so the GPU inside the Tegra 2 is roughly equivalent to the Mali 400.
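For anyone wanting to retrace that Tegra 2 estimate, here it is step by step; the screen resolutions are my assumptions for the two test devices, and only the fps figures and the 90 Mt/s SGX540 reference come from the note above:

```python
# Retracing the Tegra 2 estimate. The device resolutions (1024x600 vs 800x480)
# are my assumptions; the fps figures and the 90 Mt/s SGX540 reference come
# from the note above.
tegra2_fps, galaxy_s_fps = 27, 55
resolution_ratio = (800 * 480) / (1024 * 600)        # ~0.62, the "62% disadvantage"
normalized_fps = tegra2_fps / resolution_ratio       # Tegra 2 fps scaled to the smaller screen
estimate_mt_s = normalized_fps / galaxy_s_fps * 90   # scaled against the SGX540's 90 Mt/s
print(f"~{estimate_mt_s:.0f} Mt/s")                  # ~71 Mt/s
```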
All of these details are based on official announcements, rumors from trustworthy sources and logical estimations, so discrepancies may exist.
Last thoughts: As you can see there is some diversity in the next-gen chips (soon to be current-gen), where the top tier (OMAP 4440) is roughly 1.5 times more powerful than the low tier (Tegra 2). However, drivers and software will play a lead role in determining which device can squeeze out the most performance. And this factor alone may favour the iPad 2, PlayBook or even MeeGo tablets over the Honeycomb tablets, which are somewhat bottlenecked by the lack of hardware acceleration and post-transcription through the Dalvik VM. I think we've hit the point where we could have some really impressive high-definition entertainment, and even emulate the Dreamcast at decent/full speed.
edit2: Well, Apple's been boasting about 9x the graphical performance over the original iPad. There are 2 articles on AnandTech, one in Geekbench and processor-specific details from ImgTec (which I dug up from 12 months ago). It has been found that it's a modified Cortex-A9, 512MB RAM and the SGX543MP2. Everything points to the SGX543MP2 being significantly faster than the SGX540, and the given number was 133 million polygons per second (theoretical) for the SGX543MP4, which is double the SGX543MP2's performance. The practical figure is always less. ImgTec said the SGX540 is double the grunt of the SGX535; benchmarks show the SGX543MP2 is (on average) five times the grunt of the iPad (SGX535). So going by ImgTec (the designer of the SGX chips), the theoretical value I list above should be 70M t/s... going by Apple's claim it should be 200M t/s... going by benchmarks it should be roughly 130M t/s. ImgTec's value is definitely wrong, since they claimed it's faster than the SGX540, valued at 90M t/s. Apple's claim also seems biased; they take only the best possible conditions and exaggerate them even more. It seems to be somewhere in between, and wouldn't you know it, the average of the two "false" claims is equivalent to the benchmarked value.
edit3: The benchmarks are out for the 4th-gen QSD, which confirms everything prior. It's competing for top place against the 4440 and A5. I've changed the post (only updated chip's name).
If one were to choose between the processor of the A5 and the OMAP4440, they'd be really pressed to choose between more cpu grunt or more gpu grunt.
Just re-edited the post.
Apple's A5 details are added in; it looks to be one of the best chips for the year.
If I had to choose between the OMAP4440 and A5, I probably would be reduced to a head-tail coin flip!
Update:
The benchmark results of the Snapdragon MSM8660 are in.... and it goes further to support the list.
MSM8660 = Dual-core A9 + Adreno 220 + Qualcomm modification (for better/worse).

TEGRA 4 - 1st possible GLBenchmark!!!!!!!! - READ ON

Who has been excited by the Tegra 4 rumours? Last night's Nvidia CES announcement was good, but what we really want are cold, hard BENCHMARKS.
I found an interesting mention of a Tegra T114 SoC, which I'd never heard of, on a Linux kernel site. I got really interested when it stated that the SoC is based on the ARM A15 MP; it must be Tegra 4. I checked the background of the person who posted the kernel patch: he is a senior Nvidia kernel engineer based in Finland.
https://lkml.org/lkml/2012/12/20/99
"This patchset adds initial support for the NVIDIA's new Tegra 114
SoC (T114) based on the ARM Cortex-A15 MP. It has the minimal support
to allow the kernel to boot up into shell console. This can be used as
a basis for adding other device drivers for this SoC. Currently there
are 2 evaluation boards available, "Dalmore" and "Pluto"."
On the off chance, I decided to search www.glbenchmark.com for the 2 board names, Dalmore (a tasty whisky!) and Pluto (planet, Greek god and cartoon dog!). Pluto returned nothing, but Dalmore returned a device called 'Dalmore Dalmore' that was posted on 3rd January 2013. The OP had already deleted the results, but thanks to Google Cache I found them.
RESULTS
GL_VENDOR NVIDIA Corporation
GL_VERSION OpenGL ES 2.0 17.01235
GL_RENDERER NVIDIA Tegra
From the system spec, it runs Android 4.2.1, with a min frequency of 51 MHz and a max of 1836 MHz.
Nvidia DALMORE
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p) : 32.6 fps
iPad 4
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p): 49.6 fps
CONCLUSION
Anandtech has posted that Tegra 4 doesn't use unified shaders, so it's not based on Kepler. I reckon that if Nvidia had a brand new GPU they would have shouted about it at CES. The results I've found indicate that Tegra 4 is between 1 and 3 times faster than Tegra 3.
BUT, this is not 100% guaranteed to be a Tegra 4 system, though the evidence is strong that it is a T4 development board. If this is correct, we have to figure that it is running beta drivers; the Nexus 10 is ~10% faster than the Arndale dev board with the same Exynos 5250 SoC. Even if Tegra 4 gets better drivers, it seems like the SGX 544 MP4 in the A6X is still the faster GPU, with Tegra 4 and Mali T604 being an almost equal 2nd. Nvidia has said that T4 is faster than the A6X, but the devil is in the detail: in CPU benchmarks I can see that being true, but not for graphics.
UPDATE - Just to add to the feeling that this is legit, the GLBenchmark System section lists the "android.os.Build.USER" as buildbrain. Buildbrain, according to an Nvidia job posting, is "a mission-critical, multi-tier distributed computing system that performs mobile builds and automated tests each day, enabling NVIDIA's high performance development teams across the globe to develop and deliver NVIDIA's mobile product line".
http://jobsearch.naukri.com/job-lis...INEER-Nvidia-Corporation--2-to-4-130812500024
I've posted the webcache links to the GLBenchmark pages below; if they disappear from the cache, I've saved a copy of the webpages, which I can upload. Enjoy.
GL BENCHMARK - High Level
http://webcache.googleusercontent.c...p?D=Dalmore+Dalmore+&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - Low Level
http://webcache.googleusercontent.c...e&testgroup=lowlevel&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - GL CONFIG
http://webcache.googleusercontent.c...Dalmore&testgroup=gl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - EGL CONFIG
http://webcache.googleusercontent.c...almore&testgroup=egl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - SYSTEM
http://webcache.googleusercontent.c...ore&testgroup=system&cd=1&hl=en&ct=clnk&gl=uk
OFFSCREEN RESULTS
http://webcache.googleusercontent.c...enchmark.com+dalmore&cd=4&hl=en&ct=clnk&gl=uk
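For anyone who wants their own offline copies before Google's cache expires, here is a minimal sketch using only Python's standard library. The URL list is left as a placeholder because the links above are truncated in this post; paste in the full webcache addresses before running.

```python
# Minimal archiver for the cached GLBenchmark pages linked above.
# The URLs below are placeholders - the links in this post are truncated,
# so fill in the full webcache.googleusercontent.com addresses yourself.
import urllib.request
from pathlib import Path

CACHED_URLS = [
    # "https://webcache.googleusercontent.com/search?q=cache:...",
]

out_dir = Path("glbenchmark_dalmore")
out_dir.mkdir(exist_ok=True)

for i, url in enumerate(CACHED_URLS):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        (out_dir / f"page_{i}.html").write_bytes(resp.read())
    print("saved", url)
```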
http://www.anandtech.com/show/6550/...00-5th-core-is-a15-28nm-hpm-ue-category-3-lte
Is there any GPU that could outperform the iPad 4 before the iPad 5 comes out? The Adreno 320, Mali-T604 and now Tegra 4 aren't near it. Qualcomm won't release anything till Q4 I guess, and Tegra 4 has already been revealed, so the only thing left is the Mali-T658 coming with the Exynos 5450 (doubtful when that would release, and not sure it will be better).
Looks like Apple will hold the crown in future too.
i9100g user said:
Is there any GPU that could outperform the iPad 4 before the iPad 5 comes out? The Adreno 320, Mali-T604 and now Tegra 4 aren't near it. Qualcomm won't release anything till Q4 I guess, and Tegra 4 has already been revealed, so the only thing left is the Mali-T658 coming with the Exynos 5450 (doubtful when that would release, and not sure it will be better).
Looks like Apple will hold the crown in future too.
Click to expand...
Click to collapse
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC. It showed that both the CPU and GPU had a TDP of 4 W, making a theoretical SoC TDP of 8 W. However, when the GPU was stressed by running a game and a CPU benchmark was run in the background, the SoC quickly went up to 8 W, but the CPU was quickly throttled from 1.7 GHz to just 800 MHz as the system tried to bring everything back to 4 W or below. This explains why the Nexus 10 didn't benchmark as well as we had hoped.
Back to the 5450, which should beat the A6X. The trouble is it has double the CPU and GPU cores of the 5250 and is clocked higher. Even on a more advanced 28 nm process, which will lower power consumption, I feel that the system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Well, basically money. For a start, Apple fans will pay more for their devices, so Apple can afford to design a big SoC and big batteries that may not be profitable for other companies. Tegra 4 is listed as an 80 mm² chip, the iPhone 5's A6 is 96 mm² and the A6X is 123 mm², so Apple can pack in more transistors and reap the GPU performance lead. Also, their chosen graphics supplier, Imagination Technologies, has excellent products; PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip design team, and the benefit has been that their Swift core is almost as powerful as an ARM A15 but seems less power hungry, so Apple seems happy running slower CPUs compared to Android. Until Android or WP8 or somebody can achieve Apple's margins, Apple will be able to 'buy' their way to GPU domination, and as an Android fan that makes me sad :crying:
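As a toy illustration of the throttling behaviour described above (a hypothetical sketch only; the 4 W budget and the 1.7 GHz / 800 MHz figures come from the article as quoted here, while everything else, including the idea that CPU power scales linearly with frequency, is a simplifying assumption and not Samsung's actual governor):

```python
# Toy power-budget governor: the GPU gets what it needs, and the CPU is
# stepped down until combined CPU + GPU power fits the budget.
CPU_FREQS_MHZ = [1700, 1400, 1100, 800, 500]   # illustrative frequency steps

def cpu_power_w(freq_mhz):
    # simplifying assumption: CPU power roughly proportional to frequency,
    # calibrated so 1.7 GHz draws the 4 W TDP mentioned above
    return 4.0 * freq_mhz / 1700

def throttled_cpu_freq(gpu_power_w, budget_w=4.0):
    """Highest CPU frequency that keeps CPU + GPU within the power budget."""
    for freq in CPU_FREQS_MHZ:
        if cpu_power_w(freq) + gpu_power_w <= budget_w:
            return freq
    return CPU_FREQS_MHZ[-1]   # floor: never drop below the lowest step

print(throttled_cpu_freq(gpu_power_w=0.0))   # GPU idle        -> 1700
print(throttled_cpu_freq(gpu_power_w=2.0))   # GPU under load  -> 800
```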
32 fps is a no-go... let's hope it's not final
hamdir said:
32 fps is a no-go... let's hope it's not final
Click to expand...
Click to collapse
It needs to improve, but it would be OK for a new Nexus 7.
Still fast enough for me; I don't game a lot on my Nexus 7.
I know I'm talking about phones here... but the iPhone 5 GPU and Adreno 320 are very closely matched.
Sent from my Nexus 4 using Tapatalk 2
italia0101 said:
I know I'm talking about phones here... but the iPhone 5 GPU and Adreno 320 are very closely matched.
Sent from my Nexus 4 using Tapatalk 2
Click to expand...
Click to collapse
From what I remember the iPhone 5 and the new iPad wiped the floor with Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
adityak28 said:
From what I remember the iPhone 5 and the new iPad wiped the floor with Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
Click to expand...
Click to collapse
That isn't true. Check GLBenchmark: in the offscreen test the iPhone scored 91 and the Nexus 4 scored 88... that isn't wiping my floors.
Sent from my Nexus 10 using Tapatalk HD
It's interesting how, even though Nvidia chips aren't the best, we still get the best game graphics because of superior optimisation through Tegra Zone. Not even the A6X is as fully optimised.
Sent from my SAMSUNG-SGH-I727 using xda premium
ian1 said:
It's interesting how, even though Nvidia chips aren't the best, we still get the best game graphics because of superior optimisation through Tegra Zone. Not even the A6X is as fully optimised.
Sent from my SAMSUNG-SGH-I727 using xda premium
Click to expand...
Click to collapse
What sort of 'optimisation' do you mean? Unoptimised games lag, and that's a big letdown. Tegra effects can also be used on other phones with Chainfire3D; I use it, and Tegra games run with the effects and without lag even though I don't have a Tegra device.
With a Tegra device I am mostly restricted to optimised games.
The graphics performance of Nvidia SoCs has always been disappointing, sadly for the world's dominant GPU vendor.
The first, Tegra 2: its GPU was a little better than the SGX540 of the Galaxy S in benchmarks, but it lacked NEON support.
The second, Tegra 3: its GPU is nearly the same as the old Mali-400MP4 in the Galaxy S2 / original Note.
And now it's better, but still nothing special, and it will soon be outperformed (by the Adreno 330 and next-gen Mali).
The strongest PowerVR GPUs are always the best, but sadly they are exclusive to Apple (the SGX543, and maybe the SGX554 as well; only Sony, who has a cross-licensing deal with Apple, has it, in the PS Vita and the PS Vita only).
Tegra optimisation porting no longer works using Chainfire; that is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no. Games that use the T3 SDK for PhysX and other CPU-assisted graphics work cannot be forced to run on other devices, and equally Chainfire3D is now outdated and no longer updated.
Now, about PowerVR: they are only better in a true multi-core configuration, which is used only by Apple and Sony's Vita and eats a large die area, i.e. actual multiple cores each with its own sub-cores/shaders. If Tegra were used in a true multi-core configuration it would destroy them all.
Finally, this is really funny: all this doom and gloom because of an early, discarded development-board benchmark. I don't mean to take away from Turbo's thunder and his find, but truly it's ridiculous the amount of negativity it is collecting before any final-device benchmarks.
The Adreno 220 doubled in performance after the ICS update on the Sensation.
T3 doubled the speed of the T2 GPU with only 50% more shaders, so how on earth do you believe Tegra 4 will manage only 2x the T3 scores with six times the shaders?!
Do you have any idea how miserably the PS3 performed in its early days? Even new desktop GeForces perform well below expectations until the drivers are updated.
Enough with the FUD! This board seems to be full of it nowadays, with so little reasoning...
For goodness' sake, this isn't final hardware; anything could change. Hung2900 knows nothing: what he stated isn't true. Samsung has licensed PowerVR; it isn't just stuck with Apple, it's just that Samsung prefers using ARM's GPU solution. Another thing I dislike is how everyone is comparing a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone against a Tegra 4 that will arrive in a phone. If you check the OP's link, the benchmark was posted on the 3rd of January with different results (18 fps, then 33 fps), so there is a chance it'll rival the iPad 4. I love Tegra because Nvidia is pushing developers to make better games for Android, unlike the 'geeks' *cough* who prefer benchmark results; what's the point of having a powerful GPU if the OEM isn't pushing developers to create enhanced-effects games for their chip?
Hamdir is correct about the GPUs: if Tegra 3 was around 50-80% faster than Tegra 2 with just four more shader cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's 28 nm (at around 80 mm², just a bit bigger than Tegra 3 and smaller than the A6's 90 mm²), along with dual-channel memory versus the single-channel memory on Tegra 2/3.
Turbotab said:
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC. It showed that both the CPU and GPU had a TDP of 4 W, making a theoretical SoC TDP of 8 W. However, when the GPU was stressed by running a game and a CPU benchmark was run in the background, the SoC quickly went up to 8 W, but the CPU was quickly throttled from 1.7 GHz to just 800 MHz as the system tried to bring everything back to 4 W or below. This explains why the Nexus 10 didn't benchmark as well as we had hoped.
Back to the 5450, which should beat the A6X. The trouble is it has double the CPU and GPU cores of the 5250 and is clocked higher. Even on a more advanced 28 nm process, which will lower power consumption, I feel that the system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Well, basically money. For a start, iSheep will pay more for their devices, so Apple can afford to design a big SoC and big batteries that may not be profitable for other companies. Tegra 4 is listed as an 80 mm² chip, the iPhone 5's A6 is 96 mm² and the A6X is 123 mm², so Apple can pack in more transistors and reap the GPU performance lead. Also, their chosen graphics supplier, Imagination Technologies, has excellent products; PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip design team, and the benefit has been that their Swift core is almost as powerful as an ARM A15 but seems less power hungry, so Apple seems happy running slower CPUs compared to Android. Until Android or WP8 or somebody can achieve Apple's margins, Apple will be able to 'buy' their way to GPU domination, and as an Android fan that makes me sad :crying:
Click to expand...
Click to collapse
Well said, mate!
I can understand how you feel; nowadays Android players like Samsung and Nvidia are focusing more on the CPU than the GPU.
If they don't change this strategy soon, they will fail.
The GPU will become the bottleneck and you will not be able to use the CPU to its full potential (at least when gaming).
I have a Galaxy S2 with a 1.2 GHz Exynos 4 and the Mali GPU overclocked to 400 MHz.
In my experience most modern games like MC4 and NFS: Most Wanted don't run at 60 fps at all; that's because the GPU always sits at 100% load while the CPU relaxes at only 50-70% of its total workload.
I know some games aren't optimised for all Android devices as opposed to Apple devices, but still, even high-end Android devices have a slower GPU (than the iPad 4 at least).
AFAIK, the Galaxy S IV is likely to pack a T-604 with some tweaks instead of the mighty T-658, which would itself still be slower than the iPad 4's GPU.
Turbotab said:
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC. It showed that both the CPU and GPU had a TDP of 4 W, making a theoretical SoC TDP of 8 W. However, when the GPU was stressed by running a game and a CPU benchmark was run in the background, the SoC quickly went up to 8 W, but the CPU was quickly throttled from 1.7 GHz to just 800 MHz as the system tried to bring everything back to 4 W or below. This explains why the Nexus 10 didn't benchmark as well as we had hoped.
Back to the 5450, which should beat the A6X. The trouble is it has double the CPU and GPU cores of the 5250 and is clocked higher. Even on a more advanced 28 nm process, which will lower power consumption, I feel that the system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Well, basically money. For a start, iSheep will pay more for their devices, so Apple can afford to design a big SoC and big batteries that may not be profitable for other companies. Tegra 4 is listed as an 80 mm² chip, the iPhone 5's A6 is 96 mm² and the A6X is 123 mm², so Apple can pack in more transistors and reap the GPU performance lead. Also, their chosen graphics supplier, Imagination Technologies, has excellent products; PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip design team, and the benefit has been that their Swift core is almost as powerful as an ARM A15 but seems less power hungry, so Apple seems happy running slower CPUs compared to Android. Until Android or WP8 or somebody can achieve Apple's margins, Apple will be able to 'buy' their way to GPU domination, and as an Android fan that makes me sad :crying:
Click to expand...
Click to collapse
Typical "isheep" reference, unnecessary.
Why does apple have the advantage? Maybe because there semiconductor team is talented and can tie the A6X+PowerVR GPU efficiently. NIVIDA should have focused more on GPU in my opinion as the CPU was already good enough. With these tablets pushing excess of 250+ppi the graphics processor will play a huge role. They put 72 cores in there processor. Excellent. Will the chip ever be optimized to full potential? No. So again they demonstrated a product that sounds good on paper but real world performance might be a different story.
MrPhilo said:
For goodness' sake, this isn't final hardware; anything could change. Hung2900 knows nothing: what he stated isn't true. Samsung has licensed PowerVR; it isn't just stuck with Apple, it's just that Samsung prefers using ARM's GPU solution. Another thing I dislike is how everyone is comparing a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone against a Tegra 4 that will arrive in a phone. If you check the OP's link, the benchmark was posted on the 3rd of January with different results (18 fps, then 33 fps), so there is a chance it'll rival the iPad 4. I love Tegra because Nvidia is pushing developers to make better games for Android, unlike the 'geeks' *cough* who prefer benchmark results; what's the point of having a powerful GPU if the OEM isn't pushing developers to create enhanced-effects games for their chip?
Hamdir is correct about the GPUs: if Tegra 3 was around 50-80% faster than Tegra 2 with just four more shader cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's 28 nm (at around 80 mm², just a bit bigger than Tegra 3 and smaller than the A6's 90 mm²), along with dual-channel memory versus the single-channel memory on Tegra 2/3.
Click to expand...
Click to collapse
Firstly, please keep it civil; don't go around saying that people know nothing, people's posts always speak volumes. Also, calling people geeks, on XDA, is that even an insult? Next you'll be asking what I deadlift:laugh:
My OP was done in the spirit of technical curiosity, and to counter the typical unrealistic expectations of a new product on mainstream sites, e.g. "Nvidia will use Kepler tech" (which was false), "omg, Kepler is like the GTX 680, Tegra 4 will own the world". People forget that we are still talking about a device that can only use a few watts and must be passively cooled, not a 200+ watt, dual-fan GPU, even though both now have to drive similar resolutions, which is mental.
I both agree and disagree with your view on Nvidia's developer relationships. THD games do look nice; I compared Infinity Blade 2 on iOS vs Dead Trigger 2 on YouTube, and Dead Trigger 2 just looked richer, with more particle and physics effects, although Infinity Blade looked sharper at the iPad 4's native resolution, one of the few titles to use the A6X's GPU fully. The downside to this relationship is the further fragmentation of the Android ecosystem, as Chainfire's app showed most of the extra effects can run on non-Tegra devices.
Now, a six-times increase in shader count does not automatically mean that games and benchmarks will scale in linear fashion, as other factors such as TMU/ROP throughput can bottleneck performance (a toy model below illustrates the point). Nvidia's Technical Marketing Manager, when interviewed at CES, said that the overall improvement in games and benchmarks will be around 3 to 4 times T3. Ultimately I hope to see Tegra 4 in a new Nexus 7, and if these benchmarks prove accurate, it wouldn't stop me buying. Overall, including the CPU, it would be a massive upgrade over the current N7, all in the space of a year.
At 50 seconds onwards.
https://www.youtube.com/watch?v=iC7A5AmTPi0
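To illustrate the non-linear scaling point from the post above, here is a toy "slowest stage wins" model. All numbers are invented for illustration; the only claim being made is that frame rate follows whichever resource runs out first, not shader count alone.

```python
# Toy GPU scaling model: relative frame rate is capped by the least-scaled
# stage (shader ALUs, fill rate / TMU-ROP, or memory bandwidth).
def relative_fps(shader_x, fill_x, bandwidth_x):
    """Speed-up versus a baseline GPU, given per-stage scaling factors."""
    return min(shader_x, fill_x, bandwidth_x)

# Hypothetical Tegra 3 -> Tegra 4 style jump: 6x the shaders, but only
# ~3x the fill rate and ~2x the memory bandwidth (made-up figures).
print(relative_fps(shader_x=6.0, fill_x=3.0, bandwidth_x=2.0))   # -> 2.0
# Ease the bandwidth limit (or improve the drivers) and the same shader
# array stretches further:
print(relative_fps(shader_x=6.0, fill_x=3.0, bandwidth_x=4.0))   # -> 3.0
```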
iOSecure said:
Typical "isheep" reference, unnecessary.
Why does apple have the advantage? Maybe because there semiconductor team is talented and can tie the A6X+PowerVR GPU efficiently. NIVIDA should have focused more on GPU in my opinion as the CPU was already good enough. With these tablets pushing excess of 250+ppi the graphics processor will play a huge role. They put 72 cores in there processor. Excellent. Will the chip ever be optimized to full potential? No. So again they demonstrated a product that sounds good on paper but real world performance might be a different story.
Click to expand...
Click to collapse
Sorry Steve, this is an Android forum, or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers & marketing department; many of its users, less so.
hamdir said:
Tegra optimisation porting no longer works using Chainfire; that is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no. Games that use the T3 SDK for PhysX and other CPU-assisted graphics work cannot be forced to run on other devices, and equally Chainfire3D is now outdated and no longer updated.
Click to expand...
Click to collapse
Looks like they haven't updated Chainfire3D for a while; as a result only T3 games don't work, but others do - Riptide GP, Dead Trigger etc. It's not a myth, but it is outdated and only works with ICS and Tegra 2-compatible games. I think I may have been unfortunate too, as some Gameloft games lagged on the Tegra device I had, though root solved it to an extent.
I am not saying one thing is superior to another, just relating my personal experience; I might be wrong, I might not be.
Tbh I think benchmarks don't matter much unless you see some difference in real-world usage, and in my experience I had that problem with Tegra.
But we will have to see if the final version is able to push it above the Mali-T604 and, more importantly, the SGX 544.
Turbotab said:
Sorry Steve, this is an Android forum, or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers & marketing department; many of its users, less so.
Click to expand...
Click to collapse
No, I actually own a Nexus 4 and an iPad mini, so I'm pretty neutral between Google's and Apple's ecosystems, and I'm not wiping any scratches off my devices.

[INFO]Processor 101

New processors come out every day, and you're left wondering: oh my god, which one do I buy?
Well, here's the answer to all your processor-related queries!
Qualcomm Snapdragon​
Qualcomm continues to do what Qualcomm does best – produce a range of high quality chips with everything that handset manufactures need already built in. This time last quarter, we were taking our first look at the upcoming Snapdragon 600 processors which would be replacing the older S4 Pro, another incredibly popular Qualcomm processor.
Qualcomm doesn't use ARM's Cortex A15 design as-is; it licenses the architecture from ARM and implements its own Krait CPU cores, the newest version of which, the Krait 300, has shown up in the new Snapdragon 600 SoC...
Since then, a range of handsets powered by Qualcomm’s newest chips have appeared on the market, the flagship Samsung Galaxy S4 and HTC One being the two most notable models which are both some of the best performing smartphones on the market. Performance wise, the Snapdragon 600 has proven to be a decent enough jump up from the previous generation, performing well in most benchmark tests.
We’ve also started to hear about a few devices featuring the lower end Snapdragon 400 and 200 chips, with a range of entry level processors using various ARM architectures heading to the market in the near future. So far this year high end smartphones have received the biggest performance improvements, but these new chips should give the midrange a much needed boost later in the year.
So whilst the Snapdragon 600 is certainly the most popular high-end chip on the market right now, we've already started to see our first snippets of Qualcomm's next big thing, the Snapdragon 800.
Click to expand...
Click to collapse
There's been lots of official and unofficial data floating around over the past few months regarding this new chip, and from what we can tell, it looks to be one powerful piece of tech. Qualcomm demoed some of the new chip's improved 3D performance earlier in the year, and more recently we've seen a few benchmarks popping up for new devices, which place the Snapdragon 800 at the top of the benchmark charts come its release.
First, there was the Pantech IM-A880 smartphone, which scored an impressive 30133 in the popular AnTuTu benchmark, followed by a rumoured beefed-up version of the Galaxy S4, and most recently the new Xperia Z Ultra, which pulled in the most impressive score yet, a whopping 32173. We've also seen some more official-looking benchmarks from AnandTech and Engadget which confirm AnTuTu scores at or above 30,000, and which also give us a good look at how the chip performs in a range of other tests. The conclusion: it's a bit of a beast.
These notable benchmark scores are no doubt down to the new, higher-clocked Krait 400 CPU cores and the new Adreno 330 GPU, which is supposed to offer around a 50% performance improvement over the already quick Adreno 320. The test results we've seen show that the Snapdragon 800's CPU is on par with the current crop of processors, but the chip really shines when it comes to GPU performance, which has proven to be even quicker than the Tegra 4 and the iPad 4's chip.
We’ve already seen that Qualcomm is taking graphics extra seriously with its latest chip, as the Snapdragon 800 became the first processor to receive OpenGL ES 3 certification and is compliant with all the big graphics APIs.
Quite a few upcoming top of the line handsets are rumored to be utilizing Qualcomm’s latest processor, including the Galaxy S4 LTE-A, Oppo Find 7, and an Xperia Z refresh as well, so the Snapdragon 800 is perhaps the biggest chip to look out for in the coming months
Click to expand...
Click to collapse
Exynos 5 Octa​
Moving away from Qualcomm, there was certainly a lot of hype surrounding Samsung's octa-core monster of a processor. Upon release, the chip mostly lived up to expectations: the Exynos version of the Galaxy S4 topped our performance charts and is currently the fastest handset on the market. The SoC is the first to utilise the new big.LITTLE architecture, with four new Cortex A15 cores to provide top-of-the-line peak performance and four older, low-power Cortex A7s to keep idle and low-load power consumption to a minimum (a rough sketch of the idea follows after this section).
The chip is certainly one of the best when it comes to peak performance, but it has had its share of troubles when it comes to balancing power consumption and performance. If you're in the market for the fastest smartphone currently around, then the Galaxy S4 is the one to pick right now, provided that it's available in your region. It has the fastest CPU currently on the market, and its PowerVR SGX544 tri-core GPU matches that of the latest iPad. But with the Snapdragon 800 just around the corner, there could soon be a new processor sitting on the performance throne.
Looking forward, it’s difficult to see the Exynos retaining its top spot for much longer. Other companies are starting to look beyond the power-hungry Cortex A15 architecture, but Samsung hasn’t yet unveiled any new plans.
Click to expand...
Click to collapse
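As a rough sketch of the big.LITTLE idea mentioned in the Exynos 5 Octa section above (purely illustrative: the real cluster-migration logic lives in the kernel scheduler, the threshold here is invented, and the clock figures are only approximate):

```python
# Toy big.LITTLE cluster picker: light loads stay on the low-power A7
# cluster, heavy loads migrate to the fast A15 cluster.
LITTLE_CLUSTER = {"cores": "4x Cortex-A7", "max_mhz": 1200}   # approximate clock
BIG_CLUSTER    = {"cores": "4x Cortex-A15", "max_mhz": 1600}  # approximate clock

def pick_cluster(cpu_load_percent):
    # invented threshold: anything over 60% load wakes the big cluster
    return BIG_CLUSTER if cpu_load_percent > 60 else LITTLE_CLUSTER

for load in (10, 45, 90):
    print(f"{load}% load -> {pick_cluster(load)['cores']}")
```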
Intel Clover Trail+ and Baytrail​
Speaking of which, perhaps the biggest mover this year has been Intel, and although the company still isn't competing with ARM in terms of the number of design wins, Intel has finally shown off some products which pose a threat to ARM's market dominance.
Although we've been hearing about Clover Trail+ since last year, the chip is now moving into full swing, with a few handsets arriving that run it, and some of the benchmarks we've seen are really quite impressive. Clover Trail+ has managed to find the right balance between performance and power consumption, unlike previous Atom chips, which had been far too slow to keep up with the top-of-the-line ARM-based processors.
Then there's Baytrail. Back at Mobile World Congress earlier in the year, Intel laid out its plans for Clover Trail+, but we've already heard information about the processor's successor. Intel claims that its new Silvermont cores will further improve on both energy efficiency and peak performance. It sounds great on paper, but we always have to take these unveilings with a pinch of salt. What we are most likely looking at with Baytrail is a decent performance improvement, which should keep the processor ahead of the current Cortex A15-powered handsets in the benchmarks, but energy improvements are likely to come in the form of idle power consumption and low-power states, rather than savings at peak performance levels.
Click to expand...
Click to collapse
But Intel isn’t just interested in breaking into the smartphone and tablet markets with its new line-up of processors. The company is still very much focused on producing chips for laptops. One particularly interesting prospect is the confirmed new generation of Android based netbooks and laptops powered by more robust Intel processors, which could give Microsoft a real run for their money.
Intel has clarified that it will also be assigning the additional Pentium and Celeron titles to its upcoming Silvermont architecture as well as using it in the new BayTrail mobile chips. What this potentially means is a further blurring of the line between tablets and laptops, where the same processor technology will be powering a range of Intel based products. I’m expecting the performance rankings to go from Baytrail for phones and tablets, to Celeron for notebooks, and Pentium chips for small laptops, but this naming strategy hasn’t been confirmed yet. It’s also interesting to see where this will stack up with Intel’s newly released Haswell architecture, which is also aimed at providing power efficient solutions to laptops.
Taking all that into consideration, Baytrail has the potential to be a big game changer for Intel, as it could stand out well ahead of Samsung’s top of the line Exynos chips and will certainly rival the upcoming Qualcomm Snapdragon 800 processor. But we’ll be waiting until the end of the year before we can finally see what the chip can do. In the meantime, we’ll look forward to seeing if Clover Trail+ can finally win over some market share.
Click to expand...
Click to collapse
Nvidia Tegra 4 and 4i​
Nvidia, on the other hand, has had a much more subdued second quarter of the year. We already had many of the unveilings for its new Tegra 4 and Tegra 4i designs by the start of the year, and so far, no products have launched which are making use of Nvidia’s latest chips.
But we have seen quite a bit about Nvidia Shield, which will be powered by the new Tegra 4 chip, and it certainly looks to be a decent piece of hardware. There have also been some benchmarks floating around suggesting that the Tegra 4 is going to significantly outpace other Cortex A15-powered chips, but, without a significant boost in clock speeds, I doubt the chip will be much faster in most applications.
Nvidia's real strength obviously lies in its graphics technology, and the Tegra 4 certainly has that in spades. Nvidia, much like Qualcomm, has focused on making its new graphics chip compatible with all the new APIs, like OpenGL ES 3.0 and DirectX 11, which will allow the chip to make use of improved graphical features when gaming. But it's unclear whether that will be enough to win over manufacturers or consumers.
The Tegra 4i has been similarly muted, with no handsets yet confirmed to be using the chip, and we haven't really heard much about its performance either. We already know that the Tegra 4i certainly isn't aiming to compete with the top-of-the-line chips, as it only uses older Cortex A9s in its quad-core setup, but with other processors already offering LTE integration, it's tough to see smartphone manufacturers leaping at Nvidia's chip.
The Tegra 4 is set for release at the end of this quarter, with the Tegra 4i following later in the year. But such a delayed launch may see Nvidia risk missing the boat on this generation of processors as well, which may have something to do with Nvidia’s biggest announcement so far this year – its plan to license its GPU architecture.
This change in direction has the potential to turn Nvidia into the ARM of the mobile GPU market, allowing competing SoC manufacturers, like Samsung and Qualcomm, to use Nvidia's graphics technology in their own SoCs. However, this will place the company in direct competition with the Mali GPUs from ARM and the PowerVR GPUs from Imagination, so Nvidia's Kepler GPUs will have to shine through the competition. But considering the problems the company had persuading handset manufacturers to adopt its Tegra 3 SoCs, this seems like a more flexible and potentially very lucrative backup plan, rather than spending more time and money producing its own chips.
Click to expand...
Click to collapse
MediaTek Quad-cores​
But it's not just the big powerhouse chip manufacturers that have been introducing new tech. MediaTek, known for its cheap, lower-performance processors, has recently announced a new quad-core chip named the MT8125, which is targeted at tablets.
The new processor is built from four in-order ARM Cortex A7 cores clocked at 1.5 GHz, meaning it's not going to be an absolute powerhouse when it comes to processing capabilities. The SoC will also make use of a PowerVR 5XT-series graphics chip, which will give it sufficient grunt when it comes to media applications, with support for full HD 1080p video playback and recording, as well as some power when it comes to games.
MediaTek is also taking a leaf out of Qualcomm's book by designing the SoC as an all-in-one solution. It will come with built-in WiFi, Bluetooth, GPS and FM radio units, and will be available in three versions: built-in HSPA+, 2G, or WiFi-only. This should make the chip an ideal candidate for emerging-market devices, as well as budget products in the higher-end markets.
Despite the quad-core CPU and modern graphics chip, the MT8125 is still aimed at being a power efficient solution for midrange and more budget oriented products. But thanks to improvements in mobile technologies and the falling costs of older components, this chip will still have enough juice to power through the most commonly used applications.
Early last month, MediaTek also announced that it has been working on its own big.LITTLE architecture, similar to that found in the Samsung Exynos 5 Octa. But rather than being an eight core powerhouse, MediaTek’s chip will just be making use of four cores in total.
The chip will be known as the MT8135 and will be slightly more powerful than the budget quad-core MT8125, as it will use two faster Cortex A15 cores. These power-hungry units will be backed up by two low-power Cortex A7 cores, so it's virtually the same configuration as the Exynos 5 Octa but in a 2+2 layout (2 A15s and 2 A7s) rather than 4+4 (4 A15s and 4 A7s).
But in typical MediaTek fashion, the company has opted to down clock the processor in order to make the chip more energy efficient, which is probably a good thing considering that budget devices tend to ship with smaller batteries. The processor will peak at just 1Ghz, which isn’t super slow, but it is nearly half the speed of the A15s found in the Galaxy S4. But performance isn’t everything, and I’m more than happy to see a company pursue energy efficiency over clock speed and number of cores for once, especially if it brings big.LITTLE to some cheaper products.
Click to expand...
Click to collapse
Looking to the future​
ARM Cortex A57​
If you fancy a look even further ahead into the future, then we have also received a little bit of news regarding ARM’s successor to the A15, the all new Cortex A57. This new top of the line chip recently reached the “tape out” stage of development, but it’s still a way off from being released in any mobile products.
[Chart: Cortex A50 series performance. The series is set to offer a significant performance improvement; hopefully the big.LITTLE architecture will help balance out the power consumption.]
ARM has hinted that its new chip can offer up to triple the performance of the current top of the line Cortex-A15 for the same amount of battery consumption. The new Cortex-A57 will also supposedly offer five times the amount of battery life when running at the same speed as its current chips, which sounds ridiculously impressive.
We heard a while back that AMD was working on a Cortex A57/A53 big.LITTLE processor chip as well, which should offer an even better balance of performance and energy efficiency than the current Exynos 5 Octa. But we’ll probably be waiting until sometime in 2014 before we can get our hands on these chips.​
The age of x64​
Speaking of ARM’s next line-up of processors, another important feature to pay attention to will be the inclusion of 64 bit processing technology and the new ARMv8 architecture. ARM’s new Cortex-A50 processor series will take advantage of 64 bit processing in order to improve the performance in more demanding scenarios, reduce power consumption, and take advantage of larger memory addresses for improved performance.
We've already seen a few mobile memory manufacturers talk about producing high-speed 4 GB RAM chips, which sit right at the limit of what a 32-bit address space can map, so making full use of them calls for larger 64-bit memory addresses (a quick back-of-the-envelope check follows after this section). With tablets and smartphones both in pursuit of ever-higher levels of performance, 64-bit-capable processors seem like a logical step.
So there you have it, I think that's pretty much all of the big processor news over the past three months. Is there anything in particular that has caught your eye? Are you holding out for a device with a brand new SoC, or are the current crop of processors already plenty good enough for your mobile needs?
Click to expand...
Click to collapse
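And the back-of-the-envelope check promised in the 64-bit section above (simple arithmetic, nothing vendor-specific):

```python
# How much memory a 32-bit vs 64-bit address space can cover, in GiB.
GiB = 2**30

addressable_32bit = 2**32   # bytes
addressable_64bit = 2**64   # bytes (theoretical ceiling of the address space)

print(addressable_32bit / GiB)   # -> 4.0, i.e. exactly the 4 GB mark
print(addressable_64bit / GiB)   # -> 17179869184.0 GiB (16 EiB)
```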
Reserved
Great thread, Again.:good:
This is better suited to the General forum, but good job anyway.
Good job, mate!
Nicely written. I enjoyed reading that.
Sent from my GT-I9500 using Tapatalk 4 Beta
Well done. Good read :thumbup:
TEAM MiK
MikROMs Since 3/13/11
