[INFO] Processor 101 - Galaxy S 4 General

New processors come out every day, and it's easy to wonder which one you should actually buy.
Well, here's the answer to all your processor-related queries!
Qualcomm Snapdragon​
Qualcomm continues to do what Qualcomm does best – produce a range of high-quality chips with everything that handset manufacturers need already built in. This time last quarter, we were taking our first look at the upcoming Snapdragon 600 processors, which would be replacing the older S4 Pro, another incredibly popular Qualcomm chip.
Qualcomm doesn’t use ARM’s Cortex A15 design directly; it holds an architecture license from ARM, which it uses to implement its own Krait CPU cores. The newest version, the Krait 300, has shown up in the new Snapdragon 600 SoC.
Since then, a range of handsets powered by Qualcomm’s newest chips have appeared on the market, the flagship Samsung Galaxy S4 and HTC One being the two most notable models; both are among the best-performing smartphones available. Performance-wise, the Snapdragon 600 has proven to be a decent jump up from the previous generation, performing well in most benchmark tests.
We’ve also started to hear about a few devices featuring the lower end Snapdragon 400 and 200 chips, with a range of entry level processors using various ARM architectures heading to the market in the near future. So far this year high end smartphones have received the biggest performance improvements, but these new chips should give the midrange a much needed boost later in the year.
So whilst the Snapdragon 600 is certainly the most popular high-end chip on the market right now, we’ve already seen our first glimpses of Qualcomm’s next big thing, the Snapdragon 800.
There’s been lots of official and unofficial data floating around over the past few months regarding this new chip, and from what we can tell, it looks to be one powerful piece of tech. Qualcomm demoed some of the new chip’s improved 3D performance earlier in the year, and more recently we’ve seen a few benchmarks popping up for new devices, which place the Snapdragon 800 at the top of the benchmark charts ahead of its release.
First, there was the Pantech IM-A880 smartphone, which scored an impressive 30133 in the popular Antutu benchmark, followed by the rumoured beefed-up version of the Galaxy S4, and most recently the new Xperia Z Ultra, which pulled in the most impressive score yet, a whopping 32173. We’ve also seen some more official-looking benchmarks from AnandTech and Engadget, which confirm the Antutu scores at or around 30,000 and also give us a good look at how the chip performs in a range of other tests. The conclusion — it’s a bit of a beast.
These notable benchmark scores are no doubt down to the new higher-clocked Krait 400 CPU cores and the new Adreno 330 GPU, which is supposed to offer around a 50% performance improvement over the already quick Adreno 320. The test results we’ve seen show that the Snapdragon 800’s CPU holds its own against the current crop of processors, but the chip really shines when it comes to GPU performance, which has proven to be even quicker than that of the Tegra 4 and the iPad 4.
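Those benchmark deltas are easy to sanity-check with a few lines of arithmetic. A quick sketch, using the Antutu scores quoted above; note that the 50% Adreno figure is Qualcomm's claim rather than a measured result:

```python
def pct_gain(new, old):
    """Percentage improvement of `new` over `old`."""
    return (new - old) / old * 100

# Antutu scores quoted in the article (both Snapdragon 800 devices)
pantech_ima880 = 30133
xperia_z_ultra = 32173

print(f"Xperia Z Ultra vs Pantech IM-A880: +{pct_gain(xperia_z_ultra, pantech_ima880):.1f}%")

# If the Adreno 330 really is ~50% faster than the Adreno 320, a GPU
# subtest scoring X on a Snapdragon 600 device would land near 1.5 * X here.
```

Nothing fancy, but it shows the two Snapdragon 800 scores sit within about 7% of each other, i.e. well within typical device-to-device benchmark variance.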
We’ve already seen that Qualcomm is taking graphics extra seriously with its latest chip, as the Snapdragon 800 became the first processor to receive OpenGL ES 3 certification and is compliant with all the big graphics APIs.
Quite a few upcoming top-of-the-line handsets are rumored to be utilizing Qualcomm’s latest processor, including the Galaxy S4 LTE-A, Oppo Find 7, and an Xperia Z refresh as well, so the Snapdragon 800 is perhaps the biggest chip to look out for in the coming months.
Exynos 5 Octa​
Moving away from Qualcomm, there was certainly a lot of hype surrounding Samsung’s octa-core monster of a processor. Upon release, the chip mostly lived up to expectations — the Exynos version of the Galaxy S4 topped our performance charts and is currently the fastest handset on the market. The SoC is the first to utilize the new big.LITTLE architecture, with four new Cortex A15 cores to provide top-of-the-line peak performance, and four older low-power Cortex A7s to keep idle and low-performance power consumption to a minimum.
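The scheduling idea behind big.LITTLE can be sketched in a few lines. This is purely an illustrative toy, not Samsung's actual governor; the threshold value and the cluster-migration policy are invented, only the 4+4 core layout comes from the Exynos 5 Octa:

```python
# Toy cluster-migration sketch for a big.LITTLE 4+4 layout.
# The 0.80 threshold and the policy itself are invented for illustration.
BIG_CLUSTER = ["A15"] * 4     # fast, power-hungry cores
LITTLE_CLUSTER = ["A7"] * 4   # slow, power-efficient cores

MIGRATE_UP_THRESHOLD = 0.80   # hypothetical CPU-load fraction

def pick_cluster(load: float) -> list:
    """Route work to the big cluster only under heavy load."""
    return BIG_CLUSTER if load >= MIGRATE_UP_THRESHOLD else LITTLE_CLUSTER

print(pick_cluster(0.95))  # heavy load  -> A15 cluster
print(pick_cluster(0.20))  # light load  -> A7 cluster
```

The key design point is that only one cluster is active for a given task at a time, so light workloads never pay the A15's power cost.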
The chip is certainly one of the best when it comes to peak performance, but it has had its share of troubles when it comes to balancing power consumption and performance. If you’re in the market for the fastest smartphone currently around, then the Galaxy S4 is the one to pick right now, provided that it’s available in your region. It has the fastest CPU currently on the market, and its PowerVR SGX544 tri-core GPU matches that of the latest iPad. But with the Snapdragon 800 just around the corner, there could soon be a new processor sitting on the performance throne.
Looking forward, it’s difficult to see the Exynos retaining its top spot for much longer. Other companies are starting to look beyond the power-hungry Cortex A15 architecture, but Samsung hasn’t yet unveiled any new plans.
Intel Clover Trail+ and Baytrail​
Speaking of which, perhaps the biggest mover this year has been Intel, and although the company still isn’t competing with ARM in terms of the number of design wins, Intel has finally shown off some products which will pose a threat to ARM’s market dominance.
Although we’ve been hearing about Clover Trail+ since last year, the chip is now moving into full swing, with a few handsets arriving which run the chip, and some of the benchmarks we’ve seen are really quite impressive. Clover Trail+ has managed to find the right balance between performance and power consumption, unlike previous Atom chips, which have been far too slow to keep up with the top-of-the-line ARM-based processors.
Then there’s Baytrail. Back at Mobile World Congress earlier in the year, Intel laid out its plans for Clover Trail+, but we’ve already heard information about the processor’s successor. Intel claims that its new Silvermont cores will further improve on both energy efficiency and peak performance. It sounds great on paper, but we always have to take these unveilings with a pinch of salt. What we are most likely looking at with Baytrail is a decent performance improvement, which should keep the processor ahead of the current Cortex A15 powered handsets in the benchmarks, but energy improvements are likely to come in the form of idle power consumption and low power states, rather than saving energy at peak performance levels.
But Intel isn’t just interested in breaking into the smartphone and tablet markets with its new line-up of processors. The company is still very much focused on producing chips for laptops. One particularly interesting prospect is the confirmed new generation of Android-based netbooks and laptops powered by more robust Intel processors, which could give Microsoft a real run for its money.
Intel has clarified that it will also be assigning the additional Pentium and Celeron titles to its upcoming Silvermont architecture, as well as using it in the new Baytrail mobile chips. What this potentially means is a further blurring of the line between tablets and laptops, where the same processor technology will be powering a range of Intel-based products. I’m expecting the performance rankings to go from Baytrail for phones and tablets, to Celeron for notebooks, and Pentium chips for small laptops, but this naming strategy hasn’t been confirmed yet. It’s also interesting to consider how this will stack up against Intel’s newly released Haswell architecture, which is also aimed at providing power-efficient solutions for laptops.
Taking all that into consideration, Baytrail has the potential to be a big game changer for Intel, as it could stand out well ahead of Samsung’s top of the line Exynos chips and will certainly rival the upcoming Qualcomm Snapdragon 800 processor. But we’ll be waiting until the end of the year before we can finally see what the chip can do. In the meantime, we’ll look forward to seeing if Clover Trail+ can finally win over some market share.
Nvidia Tegra 4 and 4i​
Nvidia, on the other hand, has had a much more subdued second quarter of the year. Many of the unveilings for its new Tegra 4 and Tegra 4i designs had already happened by the start of the year, and so far, no products have launched which make use of Nvidia’s latest chips.
But we have seen quite a bit about Nvidia Shield, which will be powered by the new Tegra 4 chip, and it certainly looks to be a decent piece of hardware. There have also been some benchmarks floating around suggesting that the Tegra 4 is going to significantly outpace other Cortex A15 powered chips, but, without a significant boost in clock speeds, I doubt that the chip will be much faster in most applications.
Nvidia’s real strength obviously lies in its graphics technology, and the Tegra 4 certainly has that in spades. Nvidia, much like Qualcomm, has focused on making its new graphics chip compatible with all the new APIs, like OpenGL ES 3.0 and DirectX 11, which will allow the chip to make use of improved graphical features when gaming. But it’s unclear as to whether that will be enough to win over manufacturers or consumers.
The Tegra 4i has been similarly muted, with no handsets yet confirmed to be using the chip, and we haven’t really heard much about performance either. We already know that the Tegra 4i certainly isn’t aiming to compete with top-of-the-line chips, as its quad-core setup uses only the older Cortex A9, but with other processors already offering LTE integration, it’s tough to see smartphone manufacturers leaping at Nvidia’s chip.
The Tegra 4 is set for release at the end of this quarter, with the Tegra 4i following later in the year. But such a delayed launch may see Nvidia risk missing the boat on this generation of processors as well, which may have something to do with Nvidia’s biggest announcement so far this year – its plan to license its GPU architecture.
This change in direction has the potential to turn Nvidia into the ARM of the mobile GPU market, allowing competing SoC manufacturers, like Samsung and Qualcomm, to use Nvidia’s graphics technology in their own SoCs. However, this will place the company in direct competition with the Mali GPUs from ARM and PowerVR GPUs from Imagination, so Nvidia’s Kepler GPUs will have to shine through the competition. But considering the problems that the company had persuading handset manufacturers to adopt its Tegra 3 SoCs, licensing looks like a more flexible and potentially very lucrative backup plan compared with spending more time and money producing its own chips.
MediaTek Quad-cores​
But it’s not just the big powerhouse chip manufacturers that have been introducing new tech. MediaTek, known for its cheap lower-performance processors, has recently announced a new quad-core chip named the MT8125, which is targeted for use in tablets.
The new processor is built from four in-order ARM Cortex A7 cores clocked at 1.5 GHz, meaning that it’s not going to be an absolute powerhouse when it comes to processing capabilities. The SoC will also make use of a PowerVR Series5XT graphics chip, which will give it sufficient grunt for media applications as well, with support for full HD 1080p video playback and recording, plus some power when it comes to games.
MediaTek is also taking a leaf out of Qualcomm’s book by designing the SoC to be an all-in-one solution. It will come with built-in WiFi, Bluetooth, GPS and FM radio units, and will also be available in three variants: built-in HSPA+, 2G, or WiFi-only. This should make the chip an ideal candidate for emerging market devices, as well as budget products in the higher-end markets.
Despite the quad-core CPU and modern graphics chip, the MT8125 is still aimed at being a power efficient solution for midrange and more budget oriented products. But thanks to improvements in mobile technologies and the falling costs of older components, this chip will still have enough juice to power through the most commonly used applications.
Early last month, MediaTek also announced that it has been working on its own big.LITTLE architecture, similar to that found in the Samsung Exynos 5 Octa. But rather than being an eight core powerhouse, MediaTek’s chip will just be making use of four cores in total.
The chip will be known as the MT8135 and will be slightly more powerful than the budget quad-core MT8125, as it will be using two faster Cortex A15 cores. These power-hungry units will be backed up by two low-power Cortex A7 cores, so it’s virtually the same configuration as the Exynos 5 Octa but in a 2-by-2 layout (2 A15s and 2 A7s) rather than 4-by-4 (4 A15s and 4 A7s).
But in typical MediaTek fashion, the company has opted to downclock the processor in order to make the chip more energy efficient, which is probably a good thing considering that budget devices tend to ship with smaller batteries. The processor will peak at just 1 GHz, which isn’t super slow, but it is nearly half the speed of the A15s found in the Galaxy S4. But performance isn’t everything, and I’m more than happy to see a company pursue energy efficiency over clock speed and core count for once, especially if it brings big.LITTLE to some cheaper products.
Looking to the future​
ARM Cortex A57​
If you fancy a look even further ahead into the future, then we have also received a little bit of news regarding ARM’s successor to the A15, the all-new Cortex A57. This new top-of-the-line chip recently reached the “tape out” stage of development, but it’s still a way off from being released in any mobile products.
The Cortex A50 series is set to offer a significant performance improvement. Hopefully the big.LITTLE architecture will help balance out the power consumption.
ARM has hinted that its new chip can offer up to triple the performance of the current top-of-the-line Cortex-A15 for the same amount of battery consumption. The new Cortex-A57 will also supposedly offer five times the battery life when running at the same speed as current chips, which sounds ridiculously impressive.
We heard a while back that AMD was working on a Cortex A57/A53 big.LITTLE processor chip as well, which should offer an even better balance of performance and energy efficiency than the current Exynos 5 Octa. But we’ll probably be waiting until sometime in 2014 before we can get our hands on these chips.​
The age of x64​
Speaking of ARM’s next line-up of processors, another important feature to pay attention to will be the inclusion of 64-bit processing technology and the new ARMv8 architecture. ARM’s new Cortex-A50 processor series will take advantage of 64-bit processing to improve performance in more demanding scenarios, reduce power consumption, and address much larger amounts of memory.
We’ve already seen a few mobile memory manufacturers talk about production of high-speed 4GB RAM chips, which can only be fully utilized with 64-bit memory addressing. With tablets and smartphones both in pursuit of ever higher levels of performance, 64-bit processors seem like a logical step.
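The 4GB figure isn't arbitrary; it falls straight out of address width, since a 32-bit pointer can only distinguish 2^32 byte addresses. A quick check:

```python
def max_addressable_bytes(address_bits):
    """Maximum bytes a flat byte-addressed space of this width can reach."""
    return 2 ** address_bits

GIB = 1024 ** 3

# 32-bit addressing tops out at exactly 4 GiB -- hence the RAM ceiling
print(max_addressable_bytes(32) / GIB)   # 4.0

# 64-bit addressing lifts the ceiling far beyond any plausible mobile RAM
print(max_addressable_bytes(64) / GIB)   # ~1.7e10 GiB
```

In practice parts of that 32-bit space are reserved for memory-mapped I/O, so usable RAM under 32-bit addressing is typically even less than 4 GiB.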
So there you have it, I think that’s pretty much all of the big processor news over the past 3 months. Is there anything in particular that has caught your eye? Are you holding out for a device with a brand new SoC, or is the current crop of processors already good enough for your mobile needs?

Reserved

Great thread, Again.:good:

This is better suited for the general General forum. But good job anyway.

Good job, mate!

Nicely written. I enjoyed reading that.
Sent from my GT-I9500 using Tapatalk 4 Beta

Well done. Good read :thumbup:
TEAM MiK
MikROMs Since 3/13/11

Related

A good OMAP 3640 vs Snapdragon vs Hummingbird article

CPU performance from the new TI OMAP 3640 (yes, they’re wrong again, it’s 3640 for the 1 GHz SoC, 3630 is the 720 MHz one) is surprisingly good on Quadrant, the benchmarking tool that Taylor is using. In fact, as you can see from the Shadow benchmarks in the first article, it is shown outperforming the Galaxy S, which initially led me to believe that it was running Android 2.2 (which you may know can easily triple CPU performance). However, I’ve been assured that this is not the case, and the 3rd article seems to indicate as such, given that those benchmarks were obtained using a Droid 2 running 2.1.
Now, the OMAP 3600 series is simply a 45 nm version of the 3400 series we see in the original Droid, upclocked accordingly due to the reduced heat and improved efficiency of the smaller feature size.
If you need convincing, see TI’s own documentation: http://focus.ti.com/pdfs/wtbu/omap3_pb_swpt024b.pdf
So essentially the OMAP 3640 is the same CPU as what is contained in the original Droid but clocked up to 1 GHz. Why then is it benchmarking nearly twice as fast clock-for-clock (resulting in a nearly 4x improvement), even when still running 2.1? My guess is that the answer lies in memory bandwidth, and that evidence exists within some of the results from the graphics benchmarks.
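The "nearly twice as fast clock-for-clock, nearly 4x overall" framing is internally consistent if you pencil in a clock for the original Droid. A rough sketch, where the 550 MHz figure for the original Droid's OMAP3430 is my assumption and not stated in the quote:

```python
# Assumed clocks: OMAP3430 in the original Droid at 550 MHz (my assumption),
# OMAP 3640 at 1 GHz (from the article).
droid_clock_ghz = 0.55
omap3640_clock_ghz = 1.0

# "benchmarking nearly twice as fast clock-for-clock"
per_clock_gain = 2.0

overall_speedup = (omap3640_clock_ghz / droid_clock_ghz) * per_clock_gain
print(f"Overall speedup: ~{overall_speedup:.1f}x")  # ~3.6x, i.e. "nearly 4x"
```

So the ~4x figure decomposes into roughly 1.8x from clock speed and roughly 2x from whatever is lifting per-clock throughput, which is what points the author at memory bandwidth.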
We can see from the 3rd article that the Droid 2’s GPU performs almost twice as fast as the one in the original Droid. We know that the GPU in both devices is the same model, a PowerVR SGX 530, except that the Droid 2’s SGX 530 is, like the rest of the SoC, on the 45 nm feature size. This means that it can be clocked considerably faster. It would be easy to assume that this is the reason for the doubled performance, but that’s not necessarily the case. The original Droid’s SGX 530 runs at 110 MHz, substantially less than its standard clock speed of 200 MHz. This downclocking is likely due to the memory bandwidth limitations I discussed in my Hummingbird vs Snapdragon article, where the original Droid was running LPDDR1 memory at a fairly low bandwidth that didn’t allow the GPU to function at stock speed. If those limitations were removed by adding LPDDR2 memory, the GPU could then be upclocked again (likely to around 200 MHz) to draw even with the new memory bandwidth limit, which is probably just about twice what it was with LPDDR1.
So what does this have to do with CPU performance? Well, it’s possible that the CPU was also being limited by LPDDR1 memory, and that the 65 nm Snapdragons that are also tied down to LPDDR1 memory share the same problem. The faster LPDDR2 memory could allow for much faster performance.
Lastly, since we know from the second article at the top that the Galaxy S performs so well with its GPU, why is it lacking in CPU performance, only barely edging past the 1 GHz Snapdragon?
It could be that the answer lies in the secret that Samsung is using to achieve those ridiculously fast GPU speeds. Even with LPDDR2 memory, I can’t see any way that the GPU could achieve 90 Mtps; the required memory bandwidth is too high. One possibility is the addition of a dedicated high-speed GPU memory cache, allowing the GPU access to memory tailored to handle its high-bandwidth needs. With this solution to memory bandwidth issues, Samsung may have decided that higher speed memory was unnecessary, and stuck with a slower solution that remains limited in the same manner as the current-gen Snapdragon.
Let’s recap: TI probably dealt with the limitations to its GPU by dropping in higher speed system RAM, thus boosting overall system bandwidth to nearly double GPU and CPU performance together.
Samsung may have dealt with limitations to the GPU by adding dedicated video memory that boosted GPU performance several times, but leaving CPU performance unaffected.
This, I think, is the best explanation to what I’ve seen so far. It’s very possible that I’m entirely wrong and something else is at play here, but that’s what I’ve got.
CPU Performance
Before I go into details on the Cortex-A8, Snapdragon, Hummingbird, and Cortex-A9, I should probably briefly explain how some ARM SoC manufacturers take different paths when developing their own products. ARM is the company that owns the technology behind all of these SoCs and licenses it out. They offer manufacturers a license to an ARM instruction set that a processor can use, and they also offer a license to a specific CPU architecture.
Most manufacturers will purchase the CPU architecture license, design a SoC around it, and modify it to fit their own needs or goals. T.I. and Samsung are examples of these; the S5PC100 (in the iPhone 3GS) as well as the OMAP3430 (in the Droid) and even the Hummingbird S5PC110 in the Samsung Galaxy S are all SoCs with Cortex-A8 cores that have been tweaked (or “hardened”) for performance gains to be competitive in one way or another. Companies like Qualcomm however will build their own custom processor architecture around a license to an instruction set that they’ve chosen to purchase from ARM. This is what the Snapdragon’s Scorpion processor is, a completely custom implementation that shares some similarities with Cortex-A8 and uses the same ARMv7 instruction set, but breaks away from some of the limitations that the Cortex-A8 may impose.
Qualcomm’s approach is significantly more costly and time consuming, but has the potential to create a processor that outperforms the competition. Through its own custom architecture configuration, (which Qualcomm understandably does not go into much detail regarding), the Scorpion CPU inside the Snapdragon SoC gains an approximate 5% improvement in instructions per clock cycle over an ARM Cortex-A8. Qualcomm appeals to manufacturers as well by integrating features such as GPS and cell network support into the SoC to reduce the need of a cell phone manufacturer having to add additional hardware onto the phone. This allows for a more compact phone design, or room for additional features, which is always an attractive option. Upcoming Snapdragon SoCs such as the QSD8672 will allow for dual-core processors (not supported by Cortex-A8 architecture) to boost processing power as well as providing further ability to scale performance appropriately to meet power needs. Qualcomm claims that we’ll see these chips in the latter half of 2010, and rumor has it that we’ll begin seeing them show up first in Windows Mobile 7 Series phones in the Fall. Before then, we may see a 45 nm version of the QSD8650 dubbed “QSD8650A” released in the Summer, running at 1.3 GHz.
You might think that the Hummingbird doesn’t stand a chance against Qualcomm’s custom-built monster, but Samsung isn’t prepared to throw in the towel. In response to Snapdragon, they hired Intrinsity, a semiconductor company specializing in tweaking processor logic design, to customize the Cortex-A8 in the Hummingbird to perform certain binary functions using significantly fewer instructions than normal. Samsung estimates that 20% of the Hummingbird’s functions are affected, and of those, on average 25-50% fewer instructions are needed to complete each task. Overall, the processor can perform tasks 5-10% more quickly while handling the same 2 instructions per clock cycle as an unmodified ARM Cortex-A8 processor, and Samsung states it outperforms all other processors on the market (a statement seemingly aimed at Qualcomm). Many speculate that it’s likely that the S5PC110 CPU in the Hummingbird will be in the iPhone HD, and that its sister chip, the S5PV210, is inside the Apple A4 that powers the iPad. (UPDATE: Indications are that the model # of the SoC in the Apple iPad’s A4 is “S5L8930”, a Samsung part # that is very likely closely related to the S5PV210 and Hummingbird. I report and speculate upon this here.)
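Samsung's 5-10% overall figure squares with the 20% / 25-50% numbers via Amdahl's-law-style arithmetic, since only the affected fraction of work speeds up:

```python
def overall_speedup(affected_fraction, instruction_reduction):
    """Amdahl-style: only `affected_fraction` of the work gets faster,
    and on that fraction the instruction count shrinks by `instruction_reduction`."""
    remaining_work = (1 - affected_fraction) + affected_fraction * (1 - instruction_reduction)
    return 1 / remaining_work

# 20% of functions affected; 25-50% fewer instructions on those
low = overall_speedup(0.20, 0.25)
high = overall_speedup(0.20, 0.50)
print(f"{low - 1:.1%} to {high - 1:.1%} faster overall")  # ~5.3% to ~11.1%
```

That range brackets Samsung's claimed 5-10% nicely, which suggests the marketing numbers were derived exactly this way.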
Lastly, we really should touch upon Cortex-A9. It is ARM’s next-generation processor architecture that continues to work on top of the tried-and-true ARMv7 instruction set. Cortex-A9 stresses production on the 45 nm scale as well as supporting multiple processing cores for processing power and efficiency. Changes in core architecture also allow a 25% improvement in instructions that can be handled per clock cycle, meaning a 1 GHz Cortex-A9 will perform considerably quicker than a 1 GHz Cortex-A8 (or even Snapdragon) equivalent. Other architecture improvements such as support for out-of-order instruction handling (which, it should be pointed out, the Snapdragon partially supports) will allow the processor to have significant gains in performance per clock cycle by allowing the processor to prioritize calculations based upon the availability of data. T.I. has predicted its Cortex-A9 OMAP4440 to hit the market in late 2010 or early 2011, and promises us that their OMAP4 series will offer dramatic improvements over any Cortex-A8-based designs available today.
GPU performance
There are a couple of problems with comparing GPU performance that some recent popular articles have neglected to address. (Yes, that’s you, AndroidAndMe.com, and I won’t even go into a rant about bad data). The drivers running the GPU, the OS platform it’s running on, memory bandwidth limitations, as well as the software itself can all play into how well a GPU runs on a device. In short: you could take identical GPUs, place them in different phones, clock them at the same speeds, and see significantly different performance between them.
For example, let’s take a look at the iPhone 3GS. It’s commonly rumored to contain a PowerVR SGX 535, which is capable of processing 28 million triangles per second (Mt/s). There’s a driver file on the phone that contains “SGX535” in the filename, but that shouldn’t be taken as proof as to what it actually contains. In fact, GLBenchmark.com shows the iPhone 3GS putting out approximately 7 Mt/s in its graphics benchmarks. This initially led me to believe that the iPhone 3GS actually contained a PowerVR SGX 520 @ 200 MHz (which incidentally can output 7 Mt/s) or alternatively a PowerVR SGX 530 @ 100 MHz because the SGX 530 has 2 rendering pipelines instead of the 1 in the SGX 520, and tends to perform about twice as well. Now, interestingly enough, Samsung S5PC100 documentation shows the 3D engine as being able to put out 10 Mt/s, which seemed to support my theory that the device does not contain an SGX 535.
However, the GPU model and clock speed aren’t the only limiting factors when it comes to GPU performance. The SGX 535 for example can only put out its 28 Mt/s when used in conjunction with a device that supports the full 4.2 GB per second of memory bandwidth it needs to operate at this speed. Assume that the iPhone 3GS uses single-channel LPDDR1 memory operating at 200 MHz on a 32-bit bus (which is fairly likely). This allows for 1.6 GB/s of memory bandwidth, which is approximately 38% of what the SGX 535 needs to operate at its peak speed. Interestingly enough, 38% of 28 Mt/s equals just over 10 Mt/s… supporting Samsung’s claim (with real-world performance at 7 Mt/s being quite reasonable). While it still isn’t proof that the iPhone 3GS uses an SGX 535, it does demonstrate just how limiting single-channel memory (particularly slower memory like LPDDR1) can be and shows that the GPU in the iPhone 3GS is likely a powerful device that cannot be used to its full potential. The GPU in the Droid likely has the same memory bandwidth issues, and the SGX 530 in the OMAP3430 appears to be down-clocked to stay within those limitations.
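The bandwidth arithmetic in that paragraph checks out; here it is as a small sketch, using the figures from the text (single-channel LPDDR1 at 200 MHz on a 32-bit bus, a 4.2 GB/s requirement, and 28 Mt/s peak for the SGX 535):

```python
def ddr_bandwidth_gbs(clock_mhz, bus_bits, channels=1, transfers_per_clock=2):
    """Peak memory bandwidth in GB/s; DDR transfers data twice per clock."""
    return clock_mhz * 1e6 * (bus_bits / 8) * channels * transfers_per_clock / 1e9

bw = ddr_bandwidth_gbs(200, 32)          # single-channel LPDDR1 @ 200 MHz
fraction = bw / 4.2                      # share of the SGX 535's 4.2 GB/s need
print(f"{bw:.1f} GB/s = {fraction:.0%} of required bandwidth")  # 1.6 GB/s, ~38%
print(f"Implied throughput: {fraction * 28:.1f} Mt/s")          # ~10.7 Mt/s
```

The implied ~10.7 Mt/s matches Samsung's 10 Mt/s documentation figure, and leaves room for the ~7 Mt/s real-world GLBenchmark result.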
But let’s move on to what’s really important; the graphics processing power of the Hummingbird in the Samsung Galaxy S versus the Snapdragon in the EVO 4G. It’s quickly apparent that Samsung is claiming performance approximately 4x greater than the 22 Mt/s the Snapdragon QSD8650 can manage. It’s been rumored that the Hummingbird contains a PowerVR SGX 540, but at 200 MHz the SGX 540 puts out 28 Mt/s, approximately 1/3 of the 90 Mt/s that Samsung is claiming. Either Samsung has decided to clock an SGX 540 at 600 MHz, which seems rather high given reports that the chip is capable of speeds of “400 MHz+”, or they’ve chosen to include a multi-core PowerVR SGX XT solution. Essentially this would allow 3 PowerVR cores (or 2 up-clocked ones) to hit the 90 Mt/s mark without having to push the GPU past 400 MHz.
Unfortunately however, this brings us right back to the memory bandwidth limitation argument again, because while the Hummingbird likely uses LPDDR2 memory, it still only appears to have single-channel memory controller support (capping memory bandwidth off at 4.2 GB/s), and the question is raised as to how the PowerVR GPU obtains the large amount of memory bandwidth it needs to draw and texture polygons at those high speeds. If the PowerVR SGX 540 (which, like the SGX 535 performs at 28 Mt/s at 200 MHz) requires 4.2 GB/s of memory bandwidth, drawing 90 Mt/s would require over 12.6 GB/s of memory bandwidth, 3 times what is available. Samsung may be citing purely theoretical numbers or using another solution such as possibly increasing GPU cache sizes. This would allow for higher peak speeds, but it’s questionable if it could achieve sustainable 90 Mt/s performance.
Qualcomm differentiates itself from most of the competition (once again) by using its own graphics processing solution. The company bought AMD’s Imageon mobile-graphics division in 2008, and used AMD’s Imageon Z430 (now rebranded Adreno 200) to power the graphics in the 65 nm Snapdragons. The 45 nm QSD8650A will include an Adreno 205, which will provide some performance enhancements to 2D graphics processing as well as hardware support for Adobe Flash. It is speculated that the dual-core Snapdragons will utilize the significantly more powerful Imageon Z460 (or Adreno 220), which apparently rivals the graphics processing performance of high-end mobile gaming systems such as the Sony PlayStation Portable. Qualcomm is claiming nearly the same performance (80 Mt/s) as the Samsung Hummingbird in its upcoming 45 nm dual-core QSD8672, and while LPDDR2 support and a dual-channel memory controller are likely, it seems pretty apparent that, like Samsung, something else must be at play for them to achieve those claims.
While Samsung and Qualcomm tend to stay relatively quiet about how they achieve their graphics performance, T.I. has come out and specifically stated that its upcoming OMAP4440 SoC supports both LPDDR2 and a dual-channel memory controller paired with a PowerVR SGX 540 chip to provide “up to 2x” the performance of its OMAP3 line. This is a reasonable claim assuming the SGX 540 is clocked to 400 MHz and requires a bandwidth of 8.5 GB/s which can be achieved using LPDDR2 at 533 MHz in conjunction with the dual-channel controller. This comparatively docile graphics performance may be due to T.I’s rather straightforward approach to the ARM Cortex-A9 configuration.
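T.I.'s claim is the one set of numbers here that is fully self-consistent, and you can verify it with the same kind of bandwidth arithmetic (dual-channel LPDDR2 at 533 MHz, 32-bit bus per channel, against the stated 8.5 GB/s requirement for a 400 MHz SGX 540):

```python
def ddr_bandwidth_gbs(clock_mhz, bus_bits, channels=1, transfers_per_clock=2):
    """Peak memory bandwidth in GB/s; DDR transfers data twice per clock."""
    return clock_mhz * 1e6 * (bus_bits / 8) * channels * transfers_per_clock / 1e9

# Dual-channel LPDDR2 @ 533 MHz on a 32-bit bus per channel
bw = ddr_bandwidth_gbs(533, 32, channels=2)
print(f"{bw:.2f} GB/s")  # ~8.53 GB/s, matching the stated 8.5 GB/s requirement
```

Doubling the channels is what closes the gap: the same memory at single-channel would deliver only ~4.3 GB/s, about half of what the 400 MHz GPU needs.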
Power Efficiency
Moving onward, it’s also easily noticeable that the next generation chipsets on the 45 nm scale are going to be a significant improvement in terms of performance and power efficiency. The Hummingbird in the Samsung Galaxy S demonstrates this potential, but unfortunately we still lack the power consumption numbers we really need to understand how well it stacks up against the 65 nm Snapdragon in the EVO 4G. It can be safely assumed that the Galaxy S will have overall better battery life than the EVO 4G given the lower power requirements of the 45 nm chip, the more power-efficient Super AMOLED display, as well as the fact that both phones sport equal-capacity 1500 mAh batteries. However it should be noted that the upcoming 45 nm dual-core Snapdragon is claimed to be coming with a 30% decrease in power needs, which would allow the 1.5 GHz SoC to run at nearly the same power draw as the current 1 GHz Snapdragon. Cortex-A9 also boasts numerous improvements in efficiency, claiming power consumption numbers nearly half that of the Cortex-A8, as well as the ability to use multiple-core technology to scale processing power in accordance with energy limitations.
While it’s almost universally agreed that power efficiency is a priority for these processors, many criticize the amount of processing power these new chips bring to mobile devices and ask why so much performance is necessary. Whether or not mobile applications actually need this much power is not really the concern, however; improved processing and graphics performance with little to no increase in energy needs will allow future phones to be much more power-efficient overall. This is because power efficiency ultimately relies in large part on the hardware’s ability to complete a task quickly and return to an idle state where it consumes very little power. This “burst” processing, while consuming fairly high amounts of power for very short periods of time, tends to be more economical than prolonged, slower processing. So as long as ARM chipset manufacturers can continue to crank up performance while keeping power requirements low, there’s nothing but gains to be had.
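The "burst then idle" argument comes down to simple energy arithmetic: energy = power x time, summed over the active burst and the idle remainder. The numbers below are invented purely to illustrate the shape of the comparison; they are not measurements of any real chip.

```python
# Toy "race to idle" comparison: a fast chip bursts at higher power but
# finishes sooner and spends the rest of the window at low idle power.

def energy_joules(active_w: float, active_s: float,
                  idle_w: float, total_s: float) -> float:
    """Energy over a fixed window: an active burst, then idle for the rest."""
    return active_w * active_s + idle_w * (total_s - active_s)

WINDOW = 10.0   # seconds we observe the device
IDLE_W = 0.05   # assumed idle draw

fast = energy_joules(active_w=2.0, active_s=1.0, idle_w=IDLE_W, total_s=WINDOW)
slow = energy_joules(active_w=0.8, active_s=4.0, idle_w=IDLE_W, total_s=WINDOW)

print(f"fast burst: {fast:.2f} J, slow sustained: {slow:.2f} J")
```

Even though the fast chip draws 2.5x the power while active, it consumes less total energy over the window because it returns to idle sooner.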
http://alienbabeltech.com/main/?p=19309
http://alienbabeltech.com/main/?p=17125
It's a good read for noobs like me; also read the comments, as there's lots of constructive criticism [that actually adds to the information in the article].
Kind of wild to come across people quoting me when I'm just Googling the web for more info.
I'd just like to point out that I was probably wrong on the entire first part about the 3640. I can't post links yet, but Google "Android phones benchmarked; it's official, the Galaxy S is the fastest." for my blog article on why.
And the reason I'm out here poking around for more information is because AnandTech.com (well known for its accurate and detailed articles) just repeatedly described the SoC in the Droid X as an OMAP 3630 instead of the 3640.
EDIT - I've just found a blog on TI's website that calls it a 3630. I guess that's that! I need to find a TI engineer to make friends with for some inside info.
Anyhow, thanks for linking my work!
Make no mistake, OMAP 3xxx series get left in the dust by the Hummingbird.
Also, I wouldn't really say that Samsung hired Intrinsity to make the CPU - they worked together. Intrinsity is owned by Apple; the Hummingbird is the same core as the A4's, but with a faster graphics processor - the PowerVR SGX 540.
There was a bug in the Galaxy S unit they tested, which the author later confirmed in his own comments.

[Q] Is Texas Instruments falling behind?

After the Galaxy Nexus and the RAZR, I don't see any company announcing that they are going to use TI's chips.
That's true: the OMAP 4430/4460 (1.0/1.5 GHz respectively) with the SGX540 are behind, especially behind the Qualcomm S4 and the quad-core Exynos. TI was also supposed to release the OMAP 4470 (1.8 GHz) with the SGX544, but I haven't seen it used in any device.
I'm not sure here, but I think OMAPs don't feature wireless radio technology, so phone manufacturers have to add additional chips, which raises the cost.
TI announced OMAP 5 a long time ago, but I don't know what the progress is. OMAP 5 is supposed to have a dual-core A15 ("up to 2 GHz") plus a dual-core M4 and an SGX544-MPx.
Well, TI is going to release the OMAP 5 soon. One of the pros of TI's processors is that they are easily customizable, which is why the OMAP 4460 was used in the Galaxy Nexus. So who knows? OMAP 5 may appear in the next series of Nexus phones.
Yeah, I don't know; this year has been dominated by Qualcomm (I hope that's right) and Exynos (in the international GS3), but that is true, TI seems unusually absent this year.
They are making the OMAP 5. 28nm tech has loads of problems, especially high defect rates at the factory; Qualcomm is actually switching factories because of this. TI has stable chips at relatively lower cost. When it's ready you'll see it everywhere.
Texas Instruments withdraws from smartphones
Texas Instruments is withdrawing from manufacturing systems-on-chip for smartphones and tablets and will give up on its OMAP lineup.
The company’s OMAP chips are less and less popular among mobile manufacturers – most of them bet on Qualcomm, while Samsung and Apple are developing their own solutions (Exynos, A6). The major disadvantage of the OMAP chipset is the lack of an on-board 3G/4G modem.
That forces manufacturers who rely on OMAP chipsets to use additional radio chips, which increases battery consumption and production costs. It's easy to see why smartphone makers prefer Qualcomm’s complete solutions over this more expensive approach.
TI says its focus will shift to “a broader market including industrial clients like carmakers”, though it did not announce specifics, and investors were left wondering.
Anyway, TI will continue to support its current clients, but will significantly reduce efforts on developing new OMAP chipsets.
The news might come as a shock to some, as the TI OMAP 5 was expected to be the first chipset with dual Cortex-A15 CPUs, and now its fate is uncertain. Nonetheless, OMAP's presence was barely felt on the market, so the company's exit won't create too much of a disturbance.
http://www.gsmarena.com/texas_instruments_backtracks_from_smartphones_goodbye_omap-news-4861.php
http://www.reuters.com/article/2012/09/25/texasinstruments-wireless-idUSL1E8KP5FN20120925?irpc=932
They had great power management on their chips. Too bad.

[SAMSUNG] to unveil [8-CORE] ARM chip

Eight cores, in a mobile processor? Balderdash! But according to EETimes, that's just what Samsung's planning on unveiling in February at the International Solid-State Circuits Conference (that sounds so exciting).
Now before you get too excited, this isn't - technically speaking - an eight-core processor. It's a dual quad-core, which is to say, a two-cluster chip. The design is based on a reference architecture from ARM itself, dubbed "big.LITTLE", and is designed to combine the light-load battery life of a high-efficiency quad-core 28nm Cortex-A7 with a high-performance Cortex-A15 cluster for heavy lifting. The exact specifications, for our nerdier readers: one quad-core Cortex-A7 cluster clocked at 1.2GHz for everyday tasks, and one quad-core Cortex-A15 cluster clocked at 1.8GHz with 2MB of L2 cache for processor-intensive tasks like video games.
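The big/little split described above boils down to routing light loads to the efficient cluster and heavy loads to the fast one. Here is a minimal sketch of that idea; the `Cluster` class, the load threshold, and the example loads are all invented for illustration - real big.LITTLE software (cluster migration, later global task scheduling) is far more involved.

```python
# Sketch of big.LITTLE cluster selection: efficient cores for light
# loads, fast cores for heavy loads. All numbers are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Cluster:
    name: str
    max_ghz: float

LITTLE = Cluster("Cortex-A7 quad", 1.2)
BIG = Cluster("Cortex-A15 quad", 1.8)

def pick_cluster(load: float, threshold: float = 0.6) -> Cluster:
    """Choose a cluster from a normalised load (0.0 idle .. 1.0 saturated)."""
    return BIG if load > threshold else LITTLE

for load in (0.1, 0.4, 0.9):   # e.g. idle screen, music playback, gaming
    c = pick_cluster(load)
    print(f"load {load:.1f} -> {c.name} @ {c.max_ghz} GHz")
```

The battery-life win comes from the same observation as the threshold here: most of a phone's time is spent well below the switch point, on the cluster that sips power.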
ARM itself has said the big.LITTLE project is delivering benefits beyond those expected when the architecture was initially announced, and Samsung's chip should be the first on the market based on the concept. So yes, this will be a new Exynos of some sort.
Should you expect this chip in the Galaxy S IV (or whatever Samsung's going to call it - because that's far from a given)? It's possible, but not necessarily likely. The gap between a chip's announcement and mass-production readiness can be lengthy. With the first batch of Exynos 5 Dual devices just now hitting the market in the form of the new Samsung Chromebook and Nexus 10, this eight-core beast may not be ready in time for the next "next big thing." Samsung could very well be targeting this chip at Chromebooks and Windows RT / Android tablets before taking a dive into smaller form factors, too.
Either way, it's exciting business - I can't say I ever tire of technology getting faster.
To be honest, lately I have started to lose interest in Samsung due to the whole Exynos issue and the lack of support for developers, but if this is true then I'd feel comfortable making my next device a Samsung (only with this chip, of course). Let's hope we see this chip come to more devices if it is in fact released. We will have to wait and see what Samsung brings us in 2013 to decide whether our loyalty to Samsung is actually worth it.
courtesy of android police

TEGRA 4 - 1st possible GLBenchmark!!!!!!!! - READ ON

Who has been excited by the Tegra 4 rumours? Last night's Nvidia CES announcement was good, but what we really want are cold, hard BENCHMARKS.
I found an interesting mention of a Tegra T114 SoC, which I'd never heard of, on a Linux kernel site. I got really interested when it stated that the SoC is based on the ARM A15 MP - it must be Tegra 4. I checked the background of the person who posted the kernel patch: he is a senior Nvidia kernel engineer based in Finland.
https://lkml.org/lkml/2012/12/20/99
"This patchset adds initial support for the NVIDIA's new Tegra 114
SoC (T114) based on the ARM Cortex-A15 MP. It has the minimal support
to allow the kernel to boot up into shell console. This can be used as
a basis for adding other device drivers for this SoC. Currently there
are 2 evaluation boards available, "Dalmore" and "Pluto"."
On the off chance, I decided to search www.glbenchmark.com for the two board names: Dalmore (a tasty whisky!) and Pluto (planet, Greek god and cartoon dog!). Pluto returned nothing, but Dalmore returned a device called 'Dalmore Dalmore', posted on 3rd January 2013. The OP had already deleted the results, but thanks to Google Cache I found them.
RESULTS
GL_VENDOR NVIDIA Corporation
GL_VERSION OpenGL ES 2.0 17.01235
GL_RENDERER NVIDIA Tegra
From the system spec, it runs Android 4.2.1, with a minimum frequency of 51 MHz and a maximum of 1836 MHz.
Nvidia DALMORE
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p) : 32.6 fps
iPad 4
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p): 49.6 fps
CONCLUSION
AnandTech has posted that Tegra 4 doesn't use unified shaders, so it's not based on Kepler. I reckon that if Nvidia had a brand new GPU they would have shouted about it at CES; the results I've found indicate that Tegra 4 is roughly 1 to 3 times faster than Tegra 3.
BUT, this is not 100% guaranteed to be a Tegra 4 system, though the evidence strongly suggests it is a T4 development board. If so, we have to figure that it is running beta drivers; the Nexus 10 is ~10% faster than the Arndale dev board with the same Exynos 5250 SoC. Even if Tegra 4 gets better drivers, it seems the SGX 544MP4 in the A6X is still the faster GPU, with Tegra 4 and Mali-T604 almost equal in second place. Nvidia has said that T4 is faster than the A6X, but the devil is in the detail: in CPU benchmarks I can see that being true, but not for graphics.
UPDATE - Just to add to the feeling that this is legit, the GLBenchmark System section lists "android.os.Build.USER" as buildbrain. According to an Nvidia job posting, "Buildbrain is a mission-critical, multi-tier distributed computing system that performs mobile builds and automated tests each day, enabling NVIDIA's high performance development teams across the globe to develop and deliver NVIDIA's mobile product line".
http://jobsearch.naukri.com/job-lis...INEER-Nvidia-Corporation--2-to-4-130812500024
I've posted the web-cache links to the GLBenchmark pages below; if they disappear from cache, I've saved a copy of the pages, which I can upload. Enjoy.
GL BENCHMARK - High Level
http://webcache.googleusercontent.c...p?D=Dalmore+Dalmore+&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - Low Level
http://webcache.googleusercontent.c...e&testgroup=lowlevel&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - GL CONFIG
http://webcache.googleusercontent.c...Dalmore&testgroup=gl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - EGL CONFIG
http://webcache.googleusercontent.c...almore&testgroup=egl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - SYSTEM
http://webcache.googleusercontent.c...ore&testgroup=system&cd=1&hl=en&ct=clnk&gl=uk
OFFSCREEN RESULTS
http://webcache.googleusercontent.c...enchmark.com+dalmore&cd=4&hl=en&ct=clnk&gl=uk
http://www.anandtech.com/show/6550/...00-5th-core-is-a15-28nm-hpm-ue-category-3-lte
Is there any GPU that could outperform the iPad 4 before the iPad 5 comes out? The Adreno 320, Mali-T604 and now Tegra 4 aren't near it. Qualcomm won't release anything till Q4, I guess, and Tegra 4 has already been announced, so the only thing left is the Mali-T658 coming with the Exynos 5450 (doubtful when that would release, and I'm not sure it will be better).
Looks like Apple will hold the crown in future too.
i9100g user said:
Is there any Gpu that could outperform iPad4 before iPad5 comes out? ...
There was a great article on AnandTech that tested the power consumption of the Nexus 10's Exynos 5250 SoC. It showed that the CPU and GPU each had a TDP of 4W, making a theoretical SoC TDP of 8W. However, when they stressed the GPU by running a game and ran a CPU benchmark in the background, the SoC quickly went up to 8W, and the CPU was throttled from 1.7 GHz to just 800 MHz as the system tried to keep everything at 4W or below. This explains why the Nexus 10 didn't benchmark as well as we had hoped.
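The throttling behaviour described above can be sketched as a power-capped frequency governor: when the GPU eats most of the shared budget, the CPU steps down until the total fits. Every constant here (the budget, the frequency steps, the quadratic power model) is an invented assumption for illustration, not measured Exynos 5250 data.

```python
# Rough model of a shared SoC power budget throttling the CPU when the
# GPU is loaded. All numbers are illustrative assumptions.

SOC_BUDGET_W = 4.0
CPU_FREQS_GHZ = [1.7, 1.4, 1.2, 1.0, 0.8]  # available CPU steps, fastest first

def cpu_power_w(freq_ghz: float) -> float:
    # crude assumption: ~2.3 W at 1.7 GHz, scaling with frequency squared
    return 2.3 * (freq_ghz / 1.7) ** 2

def throttle(gpu_power_w: float) -> float:
    """Highest CPU frequency whose power still fits under the budget."""
    for f in CPU_FREQS_GHZ:
        if gpu_power_w + cpu_power_w(f) <= SOC_BUDGET_W:
            return f
    return CPU_FREQS_GHZ[-1]  # floor: can't step below the lowest frequency

print(throttle(gpu_power_w=1.0))  # light GPU load: CPU stays at 1.7 GHz
print(throttle(gpu_power_w=3.4))  # GPU near its own TDP: CPU drops to 0.8 GHz
```

The same mechanism is why doubling cores and clocks on paper (as with the 5450) can disappoint in practice: the budget, not the peak specs, decides sustained performance.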
Back to the 5450, which should beat the A6X. Trouble is, it has double the CPU and GPU cores of the 5250 and is clocked higher. Even on a more advanced 28nm process, which will lower power consumption, I feel that the system will often be throttled because of power and perhaps heat concerns, so it looks amazing on paper but may disappoint in reality - and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Basically, money. For a start, Apple fans will pay more for their devices, so Apple can afford to design a big SoC and big batteries that might not be profitable for other companies. Tegra 4 is listed as an 80mm2 chip; the iPhone 5's A6 is 96mm2 and the A6X is 123mm2. Apple can pack in more transistors and reap the GPU performance lead. Also, their chosen graphics supplier, Imagination Technologies, has excellent products, and PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip-design team, and the benefit has been the Swift core, which is almost as powerful as an ARM A15 but seems less power-hungry; in any case, Apple seems happy running slower CPUs compared to Android. Until Android or WP8 vendors can achieve Apple's margins, Apple will be able to 'buy' their way to GPU domination. As an Android fan it makes me sad :crying:
32fps is no go...lets hope it's not final
hamdir said:
32fps is no go...lets hope it's not final
It needs to, but it will be OK for a new Nexus 7
Still fast enough for me; I don't game a lot on my Nexus 7.
I know I'm talking about phones here ... but the iPhone 5 GPU and Adreno 320 are very closely matched.
italia0101 said:
I know I'm taking about phones here ... But the iPhone 5 GPU and adreno 320 are very closely matched
From what I remember, the iPhone 5 and the new iPad wiped the floor with the Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
adityak28 said:
From what I remember the iPhone 5 and the new iPad wiped the floor with Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
That isn't true - check GLBenchmark: in the offscreen test the iPhone scored 91 and the Nexus 4 scored 88. That isn't wiping my floors.
It's interesting how, even though Nvidia chips aren't the best, we still get the best game graphics because of superior optimization through Tegra Zone. Not even the A6X is as fully optimized.
ian1 said:
Its interesting how even though nvidia chips arent the best we still get the best game graphics because of superior optimization through tegra zone. Not even the a6x is as fully optimized.
What sort of 'optimisation' do you mean? Unoptimised games lag, which is a big letdown, and Tegra effects can also be used on other phones with Chainfire3D. I use it, and Tegra games work without lag, with effects, and I don't have a Tegra device.
With a Tegra device I am mostly restricted to optimised games.
The graphics performance of NVIDIA SoCs has always disappointed - sadly, for the world's dominant GPU vendor.
In the first, Tegra 2, the GPU was a little better in benchmarks than the SGX540 of the Galaxy S, but it lacked NEON support.
In the second, Tegra 3, the GPU is nearly the same as the old Mali-400MP4 in the Galaxy S2/original Note.
And now it's better, but still nothing special, and it will soon be outperformed (by the Adreno 330 and next-gen Mali).
The strongest PowerVR GPUs are always the best, but sadly they are exclusive to Apple (the SGX543, and maybe the SGX554 as well; only Sony, which has a cross-licensing deal with Apple, has it - in the PS Vita and the PS Vita only).
Tegra optimization porting no longer works using Chainfire; this is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no: games that use the T3 SDK for PhysX and other CPU graphics work cannot be forced to run on other devices. Equally, Chainfire is now outdated and no longer updated.
Now, about PowerVR: they are only better in a true multi-core configuration, which is only used by Apple and Sony's Vita, eating large die area - actual multiple cores, each with its own sub-cores/shaders. If Tegra were used in a real multi-core configuration it would destroy all.
Finally, this is really funny: all this doom and gloom because of an early, discarded development-board benchmark. I don't mean to take away from Turbo's thunder and his find, but truly it's ridiculous the amount of negativity it is collecting before any final-device benchmarks.
The Adreno 220 doubled in performance after the ICS update on the Sensation.
T3 doubled the speed of the T2 GPU with only 50% more shaders, so how on earth do you believe T4 will score only 2x the T3 with 6x the shaders!!
Do you have any idea how miserably the PS3 performed in its early days? Even new desktop GeForces perform much worse than expected until the drivers are updated.
Enough with the FUD! This board seems full of it nowadays, with so little reasoning...
For goodness' sake, this isn't final hardware; anything could change. Hung2900 knows nothing: what he stated isn't true. Samsung has licensed PowerVR - it isn't just stuck to Apple; Samsung simply prefers ARM's GPU solution. Another thing I dislike is how everyone compares a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone to a Tegra 4 that will. If you check the OP link, the benchmark was posted on the 3rd of January with different results (18fps, then 33fps), so there is a chance it'll rival the iPad 4. I love Tegra because Nvidia is pushing developers to make better games for Android, unlike the 'geeks' *cough* who prefer benchmark results. What's the point of having a powerful GPU if the OEM isn't pushing developers to create enhanced-effects games for their chip?
Hamdir is correct about the GPUs: if Tegra 3 was around 50-80% faster than Tegra 2 with just four more cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's 28nm (at around 80mm2, a bit bigger than Tegra 3 and smaller than the A6's 90mm2), along with dual-channel memory versus single-channel on Tegra 2/3.
Turbotab said:
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC ...
Well said mate!
I understand how you feel; nowadays Android players like Samsung and Nvidia are focusing more on the CPU than the GPU.
If they don't stop soon and keep using this strategy, they will fail.
The GPU will become the bottleneck and you will not be able to use the CPU to its full potential (at least when gaming).
I have a Galaxy S2 with a 1.2GHz Exynos 4 and the Mali GPU overclocked to 400MHz.
In my experience, most modern games like MC4 and NFS:MW don't run at 60FPS at all; that's because the GPU is always at 100% load while the CPU relaxes at 50-70% of its total workload.
I know some games aren't optimized for all Android devices as opposed to Apple devices, but still, even high-end Android devices have slower GPUs (than the iPad 4 at least).
AFAIK, the Galaxy S IV is likely to pack a T-604 with some tweaks instead of the mighty T-658, which is still slower than the iPad 4's GPU.
Turbotab said:
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC ...
Typical "isheep" reference, unnecessary.
Why does apple have the advantage? Maybe because there semiconductor team is talented and can tie the A6X+PowerVR GPU efficiently. NIVIDA should have focused more on GPU in my opinion as the CPU was already good enough. With these tablets pushing excess of 250+ppi the graphics processor will play a huge role. They put 72 cores in there processor. Excellent. Will the chip ever be optimized to full potential? No. So again they demonstrated a product that sounds good on paper but real world performance might be a different story.
MrPhilo said:
For goodness sake, this isn't final hardware, anything could change. ...
Firstly, please keep it civil; don't go around saying that people know nothing - people's posts always speak volumes. Also, calling people geeks? On XDA, is that even an insult? Next you'll be asking what I deadlift :laugh:
My OP was done in the spirit of technical curiosity, and to counter the typical unrealistic expectations of a new product on mainstream sites, e.g. "Nvidia will use Kepler tech" (which was false), "OMG, Kepler is like the GTX 680, Tegra 4 will own the world". People forget that we are still talking about a device that can only use a few watts and must be passively cooled, not a 200+ watt, dual-fan GPU, even though both now have to power similar resolutions, which is mental.
I both agree and disagree with your view on Nvidia's developer relationships. THD games do look nice: I compared Infinity Blade 2 on iOS vs Dead Trigger 2 on YouTube, and Dead Trigger 2 just looked richer, with more particle and physics effects, although Infinity Blade looked sharper at the iPad 4's native resolution - one of the few titles to use the A6X's GPU fully. The downside to this relationship is the further fragmentation of the Android ecosystem; as Chainfire's app showed, most of the extra effects can run on non-Tegra devices.
Now, a 6x increase in shaders does not automatically mean that games and benchmarks will scale linearly, as other factors such as TMU/ROP throughput can bottleneck performance. Nvidia's technical marketing manager, when interviewed at CES, said that the overall improvement in games and benchmarks will be around 3 to 4 times T3. Ultimately I hope to see Tegra 4 in a new Nexus 7, and if these benchmarks prove accurate, it wouldn't stop me buying. Overall, including the CPU, it would be a massive upgrade over the current N7, all in the space of a year.
At 50 seconds onwards.
https://www.youtube.com/watch?v=iC7A5AmTPi0
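The non-linear scaling argument above - more shaders only help for the shader-bound part of a frame - can be put into a toy Amdahl-style model. The workload split and the unit ratios below are invented assumptions, loosely shaped like a Tegra 3 to Tegra 4 comparison, not vendor data.

```python
# Toy model: part of a frame scales with shader throughput, the rest
# with fill-rate (TMU/ROP); overall speedup is the harmonic blend.

def speedup(shader_ratio: float, fillrate_ratio: float,
            shader_bound_fraction: float = 0.6) -> float:
    """Amdahl-style blend of shader-bound and fill-rate-bound work."""
    fill_fraction = 1.0 - shader_bound_fraction
    return 1.0 / (shader_bound_fraction / shader_ratio +
                  fill_fraction / fillrate_ratio)

# 6x the shader units but, say, only 2x the fill-rate:
print(f"overall speedup ~{speedup(6.0, 2.0):.1f}x")
```

With these invented numbers the blend lands at roughly 3.3x - in the same ballpark as the "3 to 4 times T3" figure quoted above, which is the point: the slowest-scaling unit drags the headline shader count down.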
iOSecure said:
Typical "isheep" reference, unnecessary. ...
Sorry Steve, this is an Android forum - or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers and marketing department; many of its users, less so.
hamdir said:
tegra optimization porting no longer works using chainfire, this is now a myth ...
It looks like they haven't updated Chainfire3D for a while; as a result only T3 games don't work, but others do - Riptide GP, Dead Trigger, etc. It's not a myth, but it is outdated and only works with ICS and Tegra 2-compatible games. I may have been unfortunate too, but some Gameloft games lagged on the Tegra device I had, though root solved it to an extent.
I am not saying one thing is superior to another, just relating my personal experience; I might be wrong, I may not be.
TBH I think benchmarks don't matter much unless you see a difference in real-world usage, and I had that problem with Tegra in my experience.
But we will have to see if the final version is able to push it above the Mali-T604 and, more importantly, the SGX544.
Turbotab said:
Sorry Steve, this is an Android forum, or where you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers & marketing department, many of its users less so.
No, I actually own a Nexus 4 and an iPad mini, so I'm pretty neutral between Google's and Apple's ecosystems, and I'm not wiping any scratches off my devices.

Why do Qualcomm still focus on increasing core count instead of IPC improvements?

Hey guys.
Was recently reading up on Apple's new A9 processor and was slightly surprised to see it still uses just two cores. However, the single-core performance of these bad boys is monstrous! And in my opinion, far more important than the multi-core scores when compared to eight-core processors from Qualcomm and Samsung. It's no secret that iOS generally does a lot of things much quicker (including generally better graphical performance) and also generally uses less power to do so than an Android device. I generally don't like Apple's products anymore, and this won't change anything, but they do things so darn well that it's difficult not to give credit where credit is due. The fact that the iPhone 6S has a battery of about 1715 mAh is insane when you think that Android phones with comparable battery life generally have batteries around 2500-2800 mAh. That's a big difference. Is it largely down to iOS vs Android rather than the actual SoCs themselves? Or are Apple just amazing chip designers?
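One way to make the capacity gap concrete: battery energy is capacity times voltage, so mAh figures can be turned into watt-hours. The ~3.8 V nominal pack voltage and the 2750 mAh "typical Android" figure below are assumptions chosen to sit inside the 2500-2800 mAh range mentioned above.

```python
# Convert battery capacity (mAh) to energy (Wh), assuming a nominal
# Li-ion pack voltage. The voltage and the Android figure are assumptions.

def watt_hours(capacity_mah: float, nominal_v: float = 3.8) -> float:
    return capacity_mah / 1000.0 * nominal_v

iphone_6s = watt_hours(1715)   # capacity quoted above
android = watt_hours(2750)     # assumed mid-point of the 2500-2800 mAh range

print(f"iPhone 6S: {iphone_6s:.1f} Wh, typical Android flagship: {android:.1f} Wh")
print(f"ratio: {android / iphone_6s:.2f}x")
```

So on these assumptions the Android pack stores roughly 1.6x the energy, which is what makes "comparable battery life" from the smaller battery such a striking efficiency result.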
It reminds me of the AMD vs Intel debate, where AMD goes for eight-core CPUs with super-high clock speeds that tick all the boxes on paper but in reality are beaten by even dual-core offerings from Intel at lower or comparable clock speeds. Or when an 8MP camera on an iPhone beats a 21MP camera on an Android phone. Time and time again, manufacturers are more interested in technical specs than real-life performance.
Any ideas?
Apple doesn't have to care about specs to succeed, and they had Jim Keller (designer of the Athlon 64 and the Apple Cyclone, now back at AMD designing Zen). Their IPC is impressive.
