The world's third-largest smartphone manufacturer, Huawei, has come a long way in developing its own powerful chipsets, and here is its latest SoC (System on Chip): the Kirin 650. The Kirin 650 makes a new breakthrough in battery life and performance. It is the direct successor to the Kirin 620, a chipset mainly used in budget smartphones, and it is powerful enough to compete with other chipsets in the same range.
Kirin 650 Specifications
The Huawei Kirin 650 chipset integrates an octa-core CPU, with four Cortex-A53 cores clocked at 1.7GHz and four Cortex-A53 cores clocked at 2.0GHz (octa-core A53, 4 × 1.7GHz + 4 × 2.0GHz) in a big.LITTLE arrangement, together with ARM's Mali T830 GPU, and it makes new breakthroughs in prolonging battery life. In addition, the Kirin 650 comes with dual-SIM LTE Cat.7 (300Mbit/s) connectivity, 16nm FinFET Plus process technology, and the i5 co-processor.
Kirin 650: Significantly improved performance
Kirin 650 is based on TSMC's 16nm FinFET Plus process technology. 16nm FinFET Plus is an enhanced version of FinFET technology that provides substantial power reduction together with high-speed performance. "16nm" refers to the size of the transistors used in the chipset: going from 22nm to 16nm, the transistors shrink, improving the chipset's performance. At 16nm the transistor fins are more tightly packed, thinner and taller than at 22nm.
When the distance between transistor fins shrinks, less power is needed to switch them, and more transistors fit on the same die. The chip therefore delivers better performance, improves battery life and becomes more powerful overall.
Compared with the previous-generation Kirin 620, the Kirin 650 improves CPU performance by 60% and GPU performance by 100%. The 16nm FinFET Plus process improves performance by 65% and reduces power consumption by 70% compared with the 28nm process (28nm is used in the Snapdragon 616 processor), and it offers 40% higher speed and 50% lower power consumption compared with 20nm System on Chip (20SoC) technology.
Kirin 650: Faster Network
Kirin 650 supports the three major network generations, 2G/3G/4G, across the world. It supports not only TD-LTE, FDD-LTE, TD-SCDMA, WCDMA and GSM but also CDMA networks. It is equipped with LTE Cat.7, capable of download speeds of up to 300Mbit/s.
In addition, Kirin 650 offers dual-SIM dual-standby, 4G LTE and Voice over LTE (VoLTE) support. VoLTE reduces call delay and improves internet connection speed. One advantage of VoLTE is that consumers experience improved, high-definition call quality over the 4G network (which can carry more data than 2G/3G).
Kirin 650: Security
Kirin 650 integrates the HiSEE security solution, which protects user security, handles fingerprint information, and secures mobile payments and voice (call) encryption. All fingerprint data is saved in encrypted form inside the ARM TrustZone environment; even if the phone is rooted or physically dismantled, the fingerprint data cannot be extracted.
Kirin 650: GPU
In terms of graphics, Kirin 650 uses ARM's Mali T830 GPU; compared with the Kirin 620, GPU performance is improved by 100%. The Kirin 650's graphics are capable of delivering stunning visual effects and a smooth gaming experience.
Kirin 650: Intelligent processor
To improve the speed of the smartphone, the Huawei Kirin 650 comes with the i5 co-processor. The i5 enables a next-level, high-end smartphone experience with smooth streaming, sharp visuals and improved browsing speed. It coordinates and shares resources with the octa-core CPU, which helps improve the speed and smoothness of the phone. Moreover, it consumes little power and handles its tasks efficiently.
Conclusion
When buying a smartphone, you should pay close attention to its System on Chip (SoC). With all this new technology, Huawei has introduced its new smart chipset, the Kirin 650. With 16nm process technology, an octa-core A53 (4 × 1.7GHz + 4 × 2.0GHz) architecture, the Mali T830 GPU, 4G LTE Cat.7 and the i5 co-processor, the Kirin 650 delivers the best performance for the next generation of budget-friendly smartphones.
But Huawei India's customer support team said the Honor 5C's chipset does not support VoLTE. Lol, they don't even know what's inside.
CPU performance from the new TI OMAP 3640 (yes, they're wrong again, it's 3640 for the 1 GHz SoC; 3630 is the 720 MHz one) is surprisingly good on Quadrant, the benchmarking tool that Taylor is using. In fact, as you can see from the Shadow benchmarks in the first article, it is shown outperforming the Galaxy S, which initially led me to believe that it was running Android 2.2 (which, as you may know, can easily triple CPU performance). However, I've been assured that this is not the case, and the 3rd article seems to indicate as much, given that those benchmarks were obtained using a Droid 2 running 2.1.
Now, the OMAP 3600 series is simply a 45 nm version of the 3400 series we see in the original Droid, upclocked accordingly due to the reduced heat and improved efficiency of the smaller feature size.
If you need convincing, see TI’s own documentation: http://focus.ti.com/pdfs/wtbu/omap3_pb_swpt024b.pdf
So essentially the OMAP 3640 is the same CPU as what is contained in the original Droid but clocked up to 1 GHz. Why then is it benchmarking nearly twice as fast clock-for-clock (resulting in a nearly 4x improvement), even when still running 2.1? My guess is that the answer lies in memory bandwidth, and that evidence exists within some of the results from the graphics benchmarks.
We can see from the 3rd article that the Droid 2’s GPU performs almost twice as fast as the one in the original Droid. We know that the GPU in both devices are the same model, a PowerVR SGX 530, except that the Droid 2’s SGX 530 is, as is the rest of the SoC, on the 45 nm feature size. This means that it can be clocked considerably faster. It would be easy to assume that this is reason for the doubled performance, but that’s not necessarily the case. The original Droid’s SGX 530 runs at 110 MHz, substantially less than its standard clock speed of 200 MHz. This downclocking is likely due to the memory bandwidth limitations I discussed in my Hummingbird vs Snapdragon article, where the Droid original was running LPDDR1 memory at a fairly low bandwidth that didn’t allow for the GPU to function at stock speed. If those limitations were removed by adding LPDDR2 memory, the GPU could then be upclocked again (likely to around 200 MHz) to draw even with the new memory bandwidth limit, which is probably just about twice what it was with LPDDR1.
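As a quick sanity check on that reasoning, here is a minimal back-of-the-envelope sketch; the 110 MHz clock and the 2x bandwidth ratio are the estimates from the paragraph above, not measured values:

```python
# If the GPU clock is capped by memory bandwidth, doubling the available
# bandwidth should allow roughly double the sustainable clock.
droid1_gpu_mhz = 110     # SGX 530 clock in the original Droid (estimate above)
bandwidth_ratio = 2.0    # LPDDR2 assumed to deliver ~2x LPDDR1 bandwidth

print(f"Sustainable Droid 2 GPU clock: ~{droid1_gpu_mhz * bandwidth_ratio:.0f} MHz")
# ~220 MHz, in line with the "likely around 200 MHz" estimate above
```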
So what does this have to do with CPU performance? Well, it’s possible that the CPU was also being limited by LPDDR1 memory, and that the 65 nm Snapdragons that are also tied down to LPDDR1 memory share the same problem. The faster LPDDR2 memory could allow for much faster performance.
Lastly, since we know from the second article at the top that the Galaxy S performs so well with its GPU, why is it lacking in CPU performance, only barely edging past the 1 GHz Snapdragon?
It could be that the answer lies in the secret that Samsung is using to achieve those ridiculously fast GPU speeds. Even with LPDDR2 memory, I can’t see any way that the GPU could achieve 90 Mtps; the required memory bandwidth is too high. One possibility is the addition of a dedicated high-speed GPU memory cache, allowing the GPU access to memory tailored to handle its high-bandwidth needs. With this solution to memory bandwidth issues, Samsung may have decided that higher speed memory was unnecessary, and stuck with a slower solution that remains limited in the same manner as the current-gen Snapdragon.
Let's recap: TI probably dealt with the limitations to its GPU by dropping in higher-speed system RAM, thus boosting overall system bandwidth to nearly double GPU and CPU performance together.
Samsung may have dealt with limitations to the GPU by adding dedicated video memory that boosted GPU performance several times, while leaving CPU performance unaffected.
This, I think, is the best explanation to what I’ve seen so far. It’s very possible that I’m entirely wrong and something else is at play here, but that’s what I’ve got.
CPU Performance
Before I go into details on the Cortex-A8, Snapdragon, Hummingbird, and Cortex-A9, I should probably briefly explain how some ARM SoC manufacturers take different paths when developing their own products. ARM is the company that owns licenses for the technology behind all of these SoCs. They offer manufacturers a license to an ARM instruction set that a processor can use, and they also offer a license to a specific CPU architecture.
Most manufacturers will purchase the CPU architecture license, design a SoC around it, and modify it to fit their own needs or goals. T.I. and Samsung are examples of these; the S5PC100 (in the iPhone 3GS) as well as the OMAP3430 (in the Droid) and even the Hummingbird S5PC110 in the Samsung Galaxy S are all SoCs with Cortex-A8 cores that have been tweaked (or “hardened”) for performance gains to be competitive in one way or another. Companies like Qualcomm however will build their own custom processor architecture around a license to an instruction set that they’ve chosen to purchase from ARM. This is what the Snapdragon’s Scorpion processor is, a completely custom implementation that shares some similarities with Cortex-A8 and uses the same ARMv7 instruction set, but breaks away from some of the limitations that the Cortex-A8 may impose.
Qualcomm’s approach is significantly more costly and time-consuming, but has the potential to create a processor that outperforms the competition. Through its custom architecture configuration (which Qualcomm understandably does not go into much detail about), the Scorpion CPU inside the Snapdragon SoC gains an approximately 5% improvement in instructions per clock cycle over an ARM Cortex-A8. Qualcomm also appeals to manufacturers by integrating features such as GPS and cell network support into the SoC, reducing the additional hardware a phone manufacturer has to add. This allows for a more compact phone design, or room for additional features, which is always an attractive option. Upcoming Snapdragon SoCs such as the QSD8672 will allow for dual-core processors (not supported by the Cortex-A8 architecture) to boost processing power, as well as providing further ability to scale performance to meet power needs. Qualcomm claims that we’ll see these chips in the latter half of 2010, and rumor has it that we’ll begin seeing them show up first in Windows Mobile 7 Series phones in the Fall. Before then, we may see a 45 nm version of the QSD8650 dubbed “QSD8650A” released in the Summer, running at 1.3 GHz.
You might think that the Hummingbird doesn’t stand a chance against Qualcomm’s custom-built monster, but Samsung isn’t prepared to throw in the towel. In response to Snapdragon, they hired Intrinsity, a semiconductor company specializing in tweaking processor logic design, to customize the Cortex-A8 in the Hummingbird to perform certain binary functions using significantly fewer instructions than normal. Samsung estimates that 20% of the Hummingbird’s functions are affected, and of those, on average 25-50% fewer instructions are needed to complete each task. Overall, the processor can perform tasks 5-10% more quickly while handling the same 2 instructions per clock cycle as an unmodified ARM Cortex-A8 processor, and Samsung states it outperforms all other processors on the market (a statement seemingly aimed at Qualcomm). Many speculate that the S5PC110 CPU in the Hummingbird will likely be in the iPhone HD, and that its sister chip, the S5PV210, is inside the Apple A4 that powers the iPad. (UPDATE: Indications are that the model # of the SoC in the Apple iPad’s A4 is “S5L8930”, a Samsung part # that is very likely closely related to the S5PV210 and Hummingbird. I report and speculate upon this here.)
Lastly, we really should touch upon Cortex-A9. It is ARM’s next-generation processor architecture that continues to work on top of the tried-and-true ARMv7 instruction set. Cortex-A9 stresses production on the 45 nm scale as well as supporting multiple processing cores for processing power and efficiency. Changes in core architecture also allow a 25% improvement in instructions that can be handled per clock cycle, meaning a 1 GHz Cortex-A9 will perform considerably quicker than a 1 GHz Cortex-A8 (or even Snapdragon) equivalent. Other architecture improvements such as support for out-of-order instruction handling (which, it should be pointed out, the Snapdragon partially supports) will allow the processor to have significant gains in performance per clock cycle by allowing the processor to prioritize calculations based upon the availability of data. T.I. has predicted its Cortex-A9 OMAP4440 to hit the market in late 2010 or early 2011, and promises us that their OMAP4 series will offer dramatic improvements over any Cortex-A8-based designs available today.
GPU performance
There are a couple problems with comparing GPU performance that some recent popular articles have neglected to address. (Yes, that’s you, AndroidAndMe.com, and I won’t even go into a rant about bad data). The drivers running the GPU, the OS platform it’s running on, memory bandwidth limitations as well as the software itself can all play into how well a GPU runs on a device. In short: you could take identical GPUs, place them in different phones, clock them at the same speeds, and see significantly different performance between them.
For example, let’s take a look at the iPhone 3GS. It’s commonly rumored to contain a PowerVR SGX 535, which is capable of processing 28 million triangles per second (Mt/s). There’s a driver file on the phone that contains “SGX535” in the filename, but that shouldn’t be taken as proof as to what it actually contains. In fact, GLBenchmark.com shows the iPhone 3GS putting out approximately 7 Mt/s in its graphics benchmarks. This initially led me to believe that the iPhone 3GS actually contained a PowerVR SGX 520 @ 200 MHz (which incidentally can output 7 Mt/s) or alternatively a PowerVR SGX 530 @ 100 MHz because the SGX 530 has 2 rendering pipelines instead of the 1 in the SGX 520, and tends to perform about twice as well. Now, interestingly enough, Samsung S5PC100 documentation shows the 3D engine as being able to put out 10 Mt/s, which seemed to support my theory that the device does not contain an SGX 535.
However, the GPU model and clock speed aren’t the only limiting factors when it comes to GPU performance. The SGX 535 for example can only put out its 28 Mt/s when used in conjunction with a device that supports the full 4.2 GB per second of memory bandwidth it needs to operate at this speed. Assume that the iPhone 3GS uses single-channel LPDDR1 memory operating at 200 MHz on a 32-bit bus (which is fairly likely). This allows for 1.6 GB/s of memory bandwidth, which is approximately 38% of what the SGX 535 needs to operate at its peak speed. Interestingly enough, 38% of 28 Mt/s equals just over 10 Mt/s… supporting Samsung’s claim (with real-world performance at 7 Mt/s being quite reasonable). While it still isn’t proof that the iPhone 3GS uses an SGX 535, it does demonstrate just how limiting single-channel memory (particularly slower memory like LPDDR1) can be and shows that the GPU in the iPhone 3GS is likely a powerful device that cannot be used to its full potential. The GPU in the Droid likely has the same memory bandwidth issues, and the SGX 530 in the OMAP3430 appears to be down-clocked to stay within those limitations.
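For anyone who wants to check the arithmetic, here's a quick Python sketch; the memory clock, bus width and DDR factor are the assumptions stated above, not confirmed specs:

```python
# Peak bandwidth of single-channel LPDDR1 at 200 MHz on a 32-bit bus,
# and the triangle rate an SGX 535 could sustain within that budget.
clock_hz = 200e6        # memory clock (assumed)
bus_bytes = 4           # 32-bit bus = 4 bytes per transfer
ddr_factor = 2          # LPDDR transfers on both clock edges

bandwidth_gb_s = clock_hz * bus_bytes * ddr_factor / 1e9
print(f"Available bandwidth: {bandwidth_gb_s:.1f} GB/s")          # 1.6 GB/s

sgx535_peak_mt_s = 28.0   # SGX 535 peak rate, which needs 4.2 GB/s
fraction = bandwidth_gb_s / 4.2
print(f"Fraction of required bandwidth: {fraction:.0%}")          # ~38%
print(f"Bandwidth-limited rate: {fraction * sgx535_peak_mt_s:.1f} Mt/s")  # ~10.7 Mt/s
```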
But let’s move on to what’s really important: the graphics processing power of the Hummingbird in the Samsung Galaxy S versus the Snapdragon in the EVO 4G. It’s quickly apparent that Samsung is claiming performance approximately 4x greater than the 22 Mt/s the Snapdragon QSD8650 can manage. It’s been rumored that the Hummingbird contains a PowerVR SGX 540, but at 200 MHz the SGX 540 puts out 28 Mt/s, approximately 1/3 of the 90 Mt/s that Samsung is claiming. Either Samsung has decided to clock an SGX 540 at 600 MHz, which seems rather high given reports that the chip is capable of speeds of “400 MHz+”, or they’ve chosen to include a multi-core PowerVR SGX XT solution. Essentially this would allow 3 PowerVR cores (or 2 up-clocked ones) to hit the 90 Mt/s mark without having to push the GPU past 400 MHz, as the sketch below illustrates.
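Here's a rough sketch of that clock math, assuming triangle throughput scales linearly with clock and core count from the 28 Mt/s @ 200 MHz baseline:

```python
# Clock needed to hit 90 Mt/s with 1, 2 or 3 SGX 540-class cores,
# scaling linearly from 28 Mt/s per core at 200 MHz.
baseline_mt_s, baseline_mhz = 28.0, 200.0
target_mt_s = 90.0

for cores in (1, 2, 3):
    required_mhz = baseline_mhz * target_mt_s / (baseline_mt_s * cores)
    print(f"{cores} core(s): ~{required_mhz:.0f} MHz")
# 1 core:  ~643 MHz (the "rather high" single-core scenario)
# 2 cores: ~321 MHz (two up-clocked cores stay well under 400 MHz)
# 3 cores: ~214 MHz (three cores near the stock clock)
```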
Unfortunately however, this brings us right back to the memory bandwidth limitation argument again, because while the Hummingbird likely uses LPDDR2 memory, it still only appears to have single-channel memory controller support (capping memory bandwidth off at 4.2 GB/s), and the question is raised as to how the PowerVR GPU obtains the large amount of memory bandwidth it needs to draw and texture polygons at those high speeds. If the PowerVR SGX 540 (which, like the SGX 535 performs at 28 Mt/s at 200 MHz) requires 4.2 GB/s of memory bandwidth, drawing 90 Mt/s would require over 12.6 GB/s of memory bandwidth, 3 times what is available. Samsung may be citing purely theoretical numbers or using another solution such as possibly increasing GPU cache sizes. This would allow for higher peak speeds, but it’s questionable if it could achieve sustainable 90 Mt/s performance.
Qualcomm differentiates itself from most of the competition (once again) by using its own graphics processing solution. The company bought AMD’s Imageon mobile-graphics division in 2008, and used AMD’s Imageon Z430 (now rebranded Adreno 200) to power the graphics in the 65 nm Snapdragons. The 45 nm QSD8650A will include an Adreno 205, which will provide some performance enhancements to 2D graphics processing as well as hardware support for Adobe Flash. It is speculated that the dual-core Snapdragons will utilize the significantly more powerful Imageon Z460 (or Adreno 220), which apparently rivals the graphics processing performance of high-end mobile gaming systems such as the Sony PlayStation Portable. Qualcomm is claiming nearly the same performance (80 Mt/s) as the Samsung Hummingbird in its upcoming 45 nm dual-core QSD8672, and while LPDDR2 support and a dual-channel memory controller are likely, it seems pretty apparent that, like Samsung, something else must be at play for them to achieve those claims.
While Samsung and Qualcomm tend to stay relatively quiet about how they achieve their graphics performance, T.I. has come out and specifically stated that its upcoming OMAP4440 SoC supports both LPDDR2 and a dual-channel memory controller paired with a PowerVR SGX 540 chip to provide “up to 2x” the performance of its OMAP3 line. This is a reasonable claim assuming the SGX 540 is clocked at 400 MHz and requires a bandwidth of 8.5 GB/s, which can be achieved using LPDDR2 at 533 MHz in conjunction with the dual-channel controller. This comparatively docile graphics performance may be due to T.I.’s rather straightforward approach to the ARM Cortex-A9 configuration.
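That bandwidth figure checks out under the usual DDR assumptions (two transfers per clock, 32-bit channels); a quick sketch:

```python
# Peak bandwidth of dual-channel LPDDR2 at 533 MHz on 32-bit channels.
clock_hz = 533e6     # memory clock
bus_bytes = 4        # 32-bit channel width
ddr_factor = 2       # two transfers per clock cycle
channels = 2         # dual-channel memory controller

bandwidth_gb_s = clock_hz * bus_bytes * ddr_factor * channels / 1e9
print(f"Peak bandwidth: {bandwidth_gb_s:.1f} GB/s")
# 8.5 GB/s -- matching what an SGX 540 at 400 MHz is said to require
```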
Power Efficiency
Moving onward, it’s also easy to see that the next generation of chipsets on the 45 nm scale will be a significant improvement in terms of performance and power efficiency. The Hummingbird in the Samsung Galaxy S demonstrates this potential, but unfortunately we still lack the power consumption numbers we really need to understand how well it stacks up against the 65 nm Snapdragon in the EVO 4G. It can safely be assumed that the Galaxy S will have better overall battery life than the EVO 4G, given the lower power requirements of the 45 nm chip, the more power-efficient Super AMOLED display, and the fact that both phones sport equal-capacity 1500 mAh batteries. However, it should be noted that the upcoming 45 nm dual-core Snapdragon is claimed to come with a 30% decrease in power needs, which would allow the 1.5 GHz SoC to run at nearly the same power draw as the current 1 GHz Snapdragon. Cortex-A9 also boasts numerous efficiency improvements, claiming power consumption numbers nearly half those of the Cortex-A8, as well as the ability to use multiple cores to scale processing power in accordance with energy limitations.
While it’s almost universally agreed that power efficiency is a priority for these processors, many criticize the amount of processing power these new chips are bringing to mobile devices, and ask why so much performance is necessary. Whether or not mobile applications actually need this much power is not really the concern, however; improved processing and graphics performance with little to no increase in energy needs will allow future phones to actually be much more power-efficient. This is because, ultimately, power efficiency relies in large part on the ability of the phone’s hardware to complete a task quickly and return to an idle state where it consumes very little power. This “burst” processing, while consuming fairly high amounts of power for very short periods of time, tends to be more economical than prolonged, slower processing. So as long as ARM chipset manufacturers can continue to crank up the performance while keeping power requirements low, there’s nothing but gains to be had.
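A toy calculation makes the race-to-idle argument concrete; the power and time figures below are round numbers made up purely for illustration:

```python
# Race-to-idle: a fast chip bursts at high power then idles; a slow chip
# grinds at moderate power for much longer. Compare total energy used
# over the same 10-second window. (All figures are illustrative only.)
IDLE_W = 0.05
WINDOW_S = 10.0

def energy_joules(active_w, active_s):
    """Burst energy plus idle energy for the rest of the window."""
    return active_w * active_s + IDLE_W * (WINDOW_S - active_s)

fast = energy_joules(active_w=2.0, active_s=1.0)   # task done in 1 s
slow = energy_joules(active_w=0.8, active_s=8.0)   # same task takes 8 s
print(f"Fast chip: {fast:.2f} J, slow chip: {slow:.2f} J")  # 2.45 J vs 6.50 J
```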
http://alienbabeltech.com/main/?p=19309
http://alienbabeltech.com/main/?p=17125
It's a good read for noobs like me. Also read the comments, as there is lots of constructive criticism [that actually adds to the information in the article].
Kind of wild to come across people quoting me when I'm just Googling the web for more info.
I'd just like to point out that I was probably wrong on the entire first part about the 3640. I can't post links yet, but Google "Android phones benchmarked; it's official, the Galaxy S is the fastest." for my blog article on why.
And the reason I'm out here poking around for more information is because AnandTech.com (well known for their accurate and detailed articles) just repeatedly described the SoC in the Droid X as an OMAP 3630 instead of the 3640.
EDIT - I've just found a blog on TI's website that calls it a 3630. I guess that's that! I need to find a TI engineer to make friends with for some inside info.
Anyhow, thanks for linking my work!
Make no mistake, the OMAP 3xxx series gets left in the dust by the Hummingbird.
Also, I wouldn't really say that Samsung hired Intrinsity to make the CPU - they worked together. Intrinsity is owned by Apple, the Hummingbird is the same core as the A4, but with a faster graphics processor - the PowerVR SGX 540.
There was a bug in the Galaxy S unit they tested, which the author later confirmed in his own comments.
Looking back, I usually switch phones when a better device comes out with a significant improvement over my current one. My first smartphone was the T-Mobile MDA (HTC Wizard), which I bought roughly 5 years ago. The next was the T-Mobile Wing (HTC Atlas); with a much smaller form factor and a faster CPU, the device was a great improvement.
My next device was my first real HTC-branded phone, the Touch Diamond. The Diamond was a complete overhaul compared with the other two HTC phones I had used, and I loved every little part of it. But going from the Diamond to the glamorous HD2 was even more amazing; the screen, the size, everything was perfect.
Now the HD2 has been out for almost a year and I'm ready to get a new phone, but I am wondering what things I should consider.
I don't think the Droid X or the Galaxy S smartphones are really all that much better than the HD2, so I am more interested in the Cortex-A9 phones that are slowly trickling into the market.
The CPUs that will have Cortex-A9 dual core tech are as follows:
+ Nvidia Tegra 2: 1GHz, custom high-profile graphics (Motorola Olympus, LG Star)
+ Qualcomm Snapdragon 3rd Gen: 1.2GHz/1.5GHz, Adreno 220 (Verizon HTC phone)
+ Samsung Orion: 1GHz, Mali 400 (Nexus S)
+ Texas Instruments OMAP 4: 1GHz+, PowerVR SGX 540 (Pandaboard)
+ Marvell Armada 628: 1.5GHz + custom 624MHz DSP, custom high-profile graphics
+ ST-Ericsson U8500: 1.2GHz, Mali 400
So basically, what should I do? Wait for all of them to come out and then decide, or get whichever one comes first?
I want the best processing power with the greatest graphics, and was thinking of the Tegra 2, but found that OpenGL ES benchmarks show lower values for the Tegra 2 platform than for the SGX 540.
Galaxy Tab Results:
http://www.glbenchmark.com/phonedetails.jsp?D=Samsung GT-P1000 Galaxy Tab&benchmark=glpro11
Folio 100:
http://www.glbenchmark.com/phonedetails.jsp?D=Toshiba Folio 100&benchmark=glpro11
Are these the result of poor drivers, or is Tegra really weaker than the SGX 540 (and thus weaker than the Mali 400)?
Is the Nexus S a better choice than the Motorola Olympus, or should I wait for HTC's entry into the game with a 3rd-gen Snappy? Will the Adreno 220 GPU outpower the Tegra 2 and Mali 400? What do you guys think, and what do you plan on doing?
Well, firstly, better hardware means nothing if the software is the bottleneck. Secondly, we've often seen that CPU grunt contributes more to program performance than the GPU in Android OS. Thirdly, you're going to have to wait, see, buy and test these platforms to know which ones are superior... but here is what I've discovered during the course of 2010.
SoCs for 2011:
(listed in what I believe is the order from best to worst)
+ ARM Sparrow: Dual-core Cortex A9 @2.00GHz (on 32nm die), unspecified GPU
+ TI OMAP 4440: Dual-core Cortex A9 @1.5GHz, SGX 540 (90M t/s)
+ Apple A5 (iPad2): Dual-core Cortex A9 @0.9GHz, SGX 543MP2 (130M-150M t/s)
+ Qualcomm MSM8660 (Gen IV Snapdragon): Dual-core Cortex A9 @1.5GHz, Adreno 220 (88M t/s)
+ TI OMAP 4430: Dual-core Cortex A9 @1GHz, SGX 540 (90M t/s)
+ ST-Ericsson U8500: Dual-core Cortex A9 @1.2GHz, ARM Mali 400 (50-80M t/s)
+ Samsung Orion: Dual-core Cortex A9 @1GHz, ARM Mali 400 (50-80M t/s)
+ Nvidia Tegra 2: Dual-core Cortex A9 @1GHz, nVidia ULP-GeForce (71M t/s)
+ Qualcomm Scorpion (Gen III Snapdragon): Dual-core Scorpion (Cortex-A8-class) @1.2GHz, Adreno 220 (88M t/s)
Notes: The SGX530 is roughly half the speed of the SGX535. The SGX540 is twice as fast as the SGX535. The Adreno 205 (41M tri/sec) is supposedly faster than the SGX535 but slower than the SGX540 (thus likely in the middle). The Adreno 220 is twice the speed of the Adreno 205, but slightly slower than the SGX540 (88M vs 90M tri/sec). Samsung claims the ARM Mali 400 is 5 times faster than its previous GPU (S3C6410 - 4M tri/sec), about on par (80M tri/sec) with the Adreno 220, but a few leaked benchmarks show it only slightly faster than the SGX535 (40M tri/sec). Details of the GPU used in the Nvidia Tegra 2 have been kept quite contained (little is known). I estimated the Tegra 2 at 71M t/sec (Tegra 2 Neocore = 27fps / 55fps = Galaxy S Neocore, x the 62% screen-resolution disadvantage, x 90M t/s of the SGX540 = 71M t/s), and recently some inside rumors via Fudzilla actually confirmed this exact figure, so the GPU inside the Tegra 2 is roughly equivalent to the Mali 400.
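If you want to reproduce that Tegra 2 estimate, the steps look like this in Python (the fps figures are the ones quoted above; the 0.625 resolution factor is my reading of the "62% disadvantage", e.g. 800x480 vs 1024x600):

```python
# Reproduce the Tegra 2 triangle-rate estimate from Neocore fps figures.
tegra2_fps, galaxy_s_fps = 27, 55   # Neocore results as quoted above
resolution_factor = 0.625           # Galaxy S pushes ~62% as many pixels
sgx540_mt_s = 90                    # claimed SGX 540 rate in the Galaxy S

estimate = (tegra2_fps / galaxy_s_fps) / resolution_factor * sgx540_mt_s
print(f"Estimated Tegra 2 throughput: ~{estimate:.0f} Mt/s")   # ~71 Mt/s
```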
All of these details are based on official announcements, rumors from trustworthy sources and logical estimations, so discrepancies may exist.
Last thoughts: As you can see, there is some diversity in the next-gen chips (soon to be current-gen), where the top tier (OMAP 4440) is roughly 1.5 times more powerful than the low tier (Tegra 2). However, drivers and software will play a leading role in determining which device can squeeze out the most performance. This factor alone may favour the iPad 2, PlayBook or even MeeGo tablets over the Honeycomb tablets, which are somewhat bottlenecked by the lack of hardware acceleration and by running through the Dalvik VM. I think we've hit the point where we could have some really impressive high-definition entertainment, and even emulate the Dreamcast at decent/full speed.
edit2: Well, Apple has been boasting of 9x the graphical performance of the original iPad. There are two articles on AnandTech, one on Geekbench and some processor-specific details from ImgTec (which I dug up from 12 months ago). It has been found that it's a modified Cortex A9 with 512MB RAM and the SGX543MP2. Everything points to the SGX543MP2 being significantly faster than the SGX540; the given number was 133 million polygons per second (theoretical) for the SGX543MP4, which is double SGX543MP2 performance. The practical figure is always less. ImgTec said the SGX540 has double the grunt of the SGX535; benchmarks show the SGX543MP2 is (on average) five times the grunt of the iPad (SGX535). So going by ImgTec (the designer of the SGX chips), the theoretical value I list above should be 70M t/s... going by Apple's claim it should be 200M t/s... going by benchmarks it should be roughly 130M t/s. ImgTec's value is definitely wrong, since they claimed it's faster than the SGX540, valued at 90M t/s. Apple's claim also seems biased; they take only the best possible conditions and exaggerate them even more. It seems to be somewhere in between, and wouldn't you know it, the average of the two "false" claims is equivalent to the benchmarked value.
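Just to spell out that last bit of arithmetic (a trivial sketch):

```python
# Three SGX543MP2 estimates discussed above (millions of triangles/sec).
imgtec_claim = 70      # from ImgTec's "double the SGX535" statement
apple_claim = 200      # from Apple's "9x the iPad" boast
benchmarked = 130      # roughly what the benchmarks show

print((imgtec_claim + apple_claim) / 2)   # 135.0 -- right about the benchmarked ~130
```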
edit3: The benchmarks are out for the 4th-gen QSD, and they confirm everything above. It's competing for top place against the 4440 and the A5. I've changed the post (only updated the chip's name).
If one were to choose between the processor of the A5 and the OMAP4440, they'd be hard-pressed to choose between more CPU grunt and more GPU grunt.
Just re-edited the post.
Apple's A5 details are added in; it looks to be one of the best chips of the year.
If I had to choose between the OMAP4440 and A5, I probably would be reduced to a head-tail coin flip!
Update:
The benchmark results of the Snapdragon MSM8660 are in... and they go further to support the list.
MSM8660 = dual-core A9 + Adreno 220 + Qualcomm modifications (for better/worse).
1. Adreno
The Adreno series was made by ATI and used to be called the ATI Imageon series; this GPU line first appeared around 2002-2004. In 2008, AMD sold Imageon to one of the leading processor manufacturers, Qualcomm, and ATI/AMD now only supports the architecture and its development. The Adreno series is now found in all Qualcomm SoCs (System on Chip).
2. PowerVR
The PowerVR series was among the first video logic designs to enliven the VGA market, but with the dominance of NVIDIA and ATI it now plays only in the mobile-gadget GPU world. PowerVR is not produced in finished form by its designer; it is only an architecture design, whose license is sold to many leading processor manufacturers such as NEC, Intel, Freescale, Texas Instruments and others.
PowerVR is now in its sixth series; the second was used in 1990s game consoles, the Dreamcast and the Sega Saturn. The PowerVR SGX Series 5 is the series most often found in smartphones; the SGX 5 GPUs are the elite of the smartphone world, something like BMW among cars.
3. Mali
The Mali series is a GPU built on the ARM architecture; though its name is still rarely heard, its power should not be underestimated. Mali GPUs are found everywhere from HDTVs and gaming consoles (PS3) to smartphones. For smartphones, the series used is the Mali 400MP4 (MP indicates the number of cores used). This GPU is part of the Exynos SoC with a dual-core 1.2GHz Cortex-A9 CPU in Samsung's Galaxy S II. The Mali 400MP4 is reportedly able to render at levels almost equivalent to the PS3 and Xbox 360.
4. GeForce ULP
The GeForce ULP (Ultra Low Power) series is the GPU part of the Tegra 2 SoC manufactured by NVIDIA. GeForce ULP uses 4 pixel shaders + 4 vertex shaders, for a total of 8 cores.
Determining performance cannot be separated from the SoC the GPU is used in; it is very difficult to pick a point of comparison because each GPU depends heavily on the performance and support of its SoC. For instance, an OMAP 4-series SoC with the SGX540 GPU versus a Tegra 2 with its 8-core ULP GeForce: hello, who would win? Judging by core count alone, the 8-core GeForce ULP looks like the favourite, but once you account for the SoC's capabilities, the OMAP 4 is able to bulldoze the Tegra 2, not only in benchmark results, frame rates and JavaScript rendering, but also in battery efficiency.
This is not surprising, because the OMAP 4 has a few secret weapons, such as support for dual-channel LPDDR2 memory up to 1GB, where the Tegra 2 is only capable of using a single channel. Again, the SoC's capabilities greatly determine the outcome. Likewise, would a Snapdragon with its Scorpion core be outperformed by the Tegra 2? Not really, mainly in multimedia, which is where Snapdragon and Adreno are indeed optimized.
I think Adreno is still the best one.
Adreno or Mali :fingers-crossed:
Guys, I would like to go with Mali.
If we're going to talk about actual numbers for actual GPUs that you can buy then this is what's available:
Adreno 320 (Snapdragon 600), Geforce ULP (Tegra3), PowerVR SGX 544MP3 (Exynos 5 quad), Mali T604 (Exynos 5 dual).
So far, from what we know of 3DMark, GLBenchmark and some other tests, the approximate order of performance from best to worst is:
PowerVR, Adreno, Mali, Geforce.
This WILL change because of newer versions coming out (Tegra4 for example). For now though, I'd consider Adreno and PowerVR to be ahead, PowerVR for sheer performance and Adreno for a good balance between power, performance and die size.
I'll go for Mali.
I think the JXD S7800B is the best mobile gaming device by far; it is a quad-core, high-definition handheld gaming device. It carries the RK3188 high-performance quad-core processor built on 28 nm process technology, which saves 60% power compared with 45 nm, with a peak frequency of 1.6 GHz, 33% faster than 1.2 GHz, for superior performance.
Huawei is one of the world's leading telecommunications equipment manufacturers. The company is quite innovative and makes equipment such as smartphones, tablets, smartwatches, modems, etc. Now, Huawei has developed a new System on Chip (SoC) to improve the speed and performance of its smartphones.
What is Android Smartphone SoC?
A System on Chip, or SoC, is an integrated circuit that combines all major components into a single chip. The SoC is an integral part of a smartphone, housing the CPU, GPU, memory technology, etc. Because all of these components are integrated on one small chip, the performance of the smartphone depends heavily on the SoC.
Different manufacturers develop SoCs with different feature sets. Now let's have a look at two popular new SoCs: the Kirin 650 and the Snapdragon 616.
Kirin 650 Specifications
The Huawei Kirin 650 chipset integrates an octa-core CPU, with four Cortex-A53 cores clocked at 1.7GHz and four clocked at 2.0GHz (octa-core A53, 4 × 1.7GHz + 4 × 2.0GHz) in a big.LITTLE architecture, ARM's Mali T830 GPU, dual-SIM LTE Cat.7 (300Mbit/s) connectivity, 16nm FinFET Plus technology, and the i5 co-processor.
Snapdragon 616 Specifications
Currently, the Snapdragon 616 is one of the most popular SoCs on the market, used mainly in low- to mid-range phones. It comes with an octa-core Cortex-A53 CPU (4 × 1.7GHz + 4 × 1.2GHz), the Qualcomm Adreno 405 GPU, 4G LTE Cat.4 and 28nm LP technology.
Kirin 650 Vs Snapdragon 616
Core count and clock speed
Both the Kirin 650 and the Snapdragon 616 come with eight cores divided into two clusters. The first cluster of the Kirin 650's Cortex-A53 cores is clocked at 2.0GHz and the second at 1.7GHz. In the Snapdragon 616, the first Cortex-A53 cluster is clocked at 1.7GHz and the remaining four cores (the second cluster) at 1.2GHz. From this data it is clear that the Kirin 650 has higher clock speeds than the Snapdragon 616. The clock speed indicates how many processing cycles the CPU runs per second. That means Kirin 650 is faster and smarter than Snapdragon 616.
Process Technology
Compared with the Snapdragon 616's 28nm LP technology, the Kirin 650's 16nm FinFET Plus technology is two generations ahead, with greatly improved performance and reduced power consumption. "16nm" refers to the size of the transistors used in the chipset. Shrinking the transistors improves performance and reduces leakage power, and smaller transistors mean more of them fit in the same area. Going from 28nm to 16nm (at 16nm the transistor fins are more tightly packed, thinner and taller than at 28nm), the reduced transistor size improves both battery life and performance.
16nm FinFET Plus is an enhanced version of FinFET technology. The 16nm FinFET Plus process improves performance by 65% and reduces power consumption by 70% compared with the 28nm process (used in the Snapdragon 616).
28nm LP (Low Power) is a process variant first available on 28nm technology, mainly used for low-standby-power applications.
Faster network
Kirin 650 supports TD-LTE, FDD-LTE, TD-SCDMA, WCDMA, GSM and CDMA networks (and also offers dual-SIM dual-standby and VoLTE). It is equipped with LTE Cat.7, capable of download speeds of up to 300Mbit/s. The Snapdragon 616 supports LTE FDD, LTE TDD, WCDMA, GSM/EDGE, CDMA and TD-SCDMA, and is equipped with Cat.4.
Long Term Evolution (LTE) is a standard for high-speed wireless data communication. LTE categories (Cat.) describe the download and upload capability of the modem. There are 11 categories (Cat.0 to Cat.10); the Kirin 650 uses Cat.7 and the Snapdragon 616 uses Cat.4. Cat.7 is capable of downloading at 300Mbit/s and uploading at 150Mbit/s, whereas Cat.4 can download at only 150Mbit/s and upload at 50Mbit/s. Thus the Kirin 650 achieves the higher maximum data rates.
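To put those category limits in concrete terms, here is a small sketch comparing best-case transfer times for a 1 GB file at the peak rates above (real-world throughput will be lower):

```python
# Best-case time to move a 1 GB file at LTE Cat.4 vs Cat.7 peak rates.
file_megabits = 1024 * 8   # 1 GB expressed in megabits

rates = {
    "Cat.4": {"down": 150, "up": 50},    # Mbit/s
    "Cat.7": {"down": 300, "up": 150},   # Mbit/s
}

for cat, r in rates.items():
    print(f"{cat}: download ~{file_megabits / r['down']:.0f} s, "
          f"upload ~{file_megabits / r['up']:.0f} s")
# Cat.4: download ~55 s, upload ~164 s
# Cat.7: download ~27 s, upload ~55 s
```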
Which SoC is best for the Smartphone?
Most silicon chipsets on the market use 28nm or 20nm technology, but the Huawei Kirin 650 uses 16nm FinFET Plus technology, which is two generations ahead. Both SoCs have octa-core, dual-cluster Cortex-A53 CPUs, but the Kirin 650 still has higher clock speeds than the Snapdragon 616: 2.0GHz for the first cluster and 1.7GHz for the second, where the Snapdragon 616 comes in at 1.7GHz and 1.2GHz. And compared with Cat.4, Cat.7 offers greater download and upload capability; the Kirin 650 uses Cat.7 LTE while the Snapdragon 616 uses Cat.4. Overall, the Huawei Kirin 650 is the faster, higher-performance chipset.
ninu elza said:
means Kirin 650 is faster and smarter than Snapdragon 616.
How about a comparison between the Snapdragon 650/652 and the Kirin 650? How do they stack up against each other?
In theory the Kirin should perform better, as it has the newer 16nm SoC, with an i5 co-processor: a really low-power unit that assists the main processor by helping it perform certain complex calculations and by operating in a low-power always-sensing mode.
Would love to see a comparison.
Anyway, I hope Huawei opens up its sources for developers so we could have amazing customizations on Kirin chips too. :good: