How to buy a phone? - Android General

How do you buy your phone? Do you just go through the specs? Have you ever felt like your specs (a 10-core processor, 2 GHz, 4 GB RAM or so) still weren't fast enough? That's because you've been going about the buying process all wrong. Yeah!
When buying, you don't look only at the headline specs.
You look at the SoC, and then compare SoCs and GPUs on a dedicated comparison site. Take comparing the Helio X20, a deca-core processor, with the Snapdragon 820, a quad-core processor: at a glance you might think the MTK will kill the SD, but on deeper comparison you find that the Helio uses a 32-bit data bus (so its 64-bit architecture is a bit of a joke, since it is never fully used), while the Snapdragon is a full 64-bit processor with a 64-bit data bus.
As far as I know, the data bus matters more than almost anything else, because it is what carries instructions and processed data to your other modules. A data bus twice as wide transfers data at twice the rate and, other things being equal, is roughly 2x faster.
Now check the process node: the X20 is still on 20-28 nm class tech, while the SD820 is a 14 nm part. A smaller number means far less heat; with that said, you can raise the frequency on the SD by a huge margin without frying your components. The same goes for the GPU: cores aren't everything.
So, the RAM: is bigger better? Nah, check its bus too; 64-bit RAM is twice the speed of 32-bit at the same clock. Also check the frequency of the RAM (trust me, you would not want RAM that operates at the speed of an SD card).
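To put rough numbers on the bus-width point (a minimal sketch of peak figures only; real throughput also depends on the memory controller, channels, and timings):

```python
def peak_gb_per_s(bus_width_bits: int, mega_transfers_per_s: float) -> float:
    """Peak transfer rate in GB/s: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * mega_transfers_per_s / 1000

# Same transfer rate, twice the bus width -> twice the peak rate.
print(peak_gb_per_s(32, 800))  # 32-bit bus @ 800 MT/s -> 3.2 GB/s
print(peak_gb_per_s(64, 800))  # 64-bit bus @ 800 MT/s -> 6.4 GB/s
```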
Camera: more MP is better? No.
First off, check its minimum and maximum ISO; a wider range is better. Check the sensor module's name: the Sony IMX series is a killer, and within the IMX series a bigger number is generally better. Check the camera the same way you check the SoC, e.g. whether it has OIS. For aperture, a lower value is better (technically higher, since it is a denominator); that is, f/1.9 is better than f/2.0.
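To make the f-number point concrete (a small sketch; the 4.2 mm focal length is just an assumed, typical phone value):

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Effective aperture diameter = focal length / f-number."""
    return focal_length_mm / f_number

# Assuming a ~4.2 mm phone lens: the lower f-number opens wider and admits more light.
print(aperture_diameter_mm(4.2, 1.9))  # ~2.21 mm
print(aperture_diameter_mm(4.2, 2.0))  # ~2.10 mm
```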
About the display: never go below about 380 PPI, or you'll start to make out individual pixels. As for battery life, a bigger mAh rating doesn't mean it will last longer: a 5.5" 1080p phone with 3000 mAh can last less than a 5" 720p phone with 2200 mAh (under ideal conditions).
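Pixel density is easy to check yourself (a quick sketch):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1080, 5.5)))  # ~401 PPI
print(round(ppi(1280, 720, 5.0)))   # ~294 PPI
```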
And that isn't everything: the OS matters too, and so do the Bluetooth and GPS modules; LP (low-power) modules are best for power saving.
Check website: http://system-on-a-chip.specout.com
Sorry if my English skills aren't up to the mark

Related

Which Processor is faster & better

"Intel Bulverde 520 MHz"
The one in the Universal
OR
"Qualcomm MSM7201A 528 Mhz"
in the new HTC HD unit
I feel they are the same. Am I right?
Qualcomm is much better.
It's similar to the difference between a 2.5 GHz Pentium 4 and a 2.5 GHz Core 2 Solo.
I don't think a Core 2 Solo and a Pentium 4 with HT differ that much.
l2tp said:
I don't think a Core 2 Solo and a Pentium 4 with HT differ that much.
Google up "Instructions per second" and you'll understand.
The P4's NetBurst architecture is one of the worst examples of that in history; a failure by engineering standards.
The PXA270 processor in the Universal is actually a 624 MHz part running underclocked. The HTC X7500 uses the same CPU running at 624 MHz. It is clearly the better CPU.
genetik_freak said:
The PXA270 processor in the Universal is actually a 624 MHz part running underclocked. The HTC X7500 uses the same CPU running at 624 MHz. It is clearly the better CPU.
Very, very wrong.
I wouldn't say the two Intel processors are exactly the same, with one just being underclocked via software. Notice how Intel puts out multiple Pentiums of a given generation at different speeds? Would you venture to say that all those chips are the same too?
Also, clock speed is a poor metric when comparing chips from different companies. PDADB.net says that the Intel chip uses the ARMv5TE instruction set and the Qualcomm chip uses ARMv6. The Intel is a generation behind.
On comparing clock rates, Wikipedia says:
Main article: Megahertz myth
The clock rate of a computer is only useful for providing comparisons between computer chips in the same processor family. An IBM PC with an Intel 486 CPU running at 50 MHz will be about twice as fast as one with the same CPU, memory and display running at 25 MHz, while the same will not be true for MIPS R4000 running at the same clock rate as the two are different processors with different functionality. Furthermore, there are many other factors to consider when comparing the speeds of entire computers, like the clock rate of the computer's front side bus (FSB), the clock rate of the RAM, the width in bits of the CPU's bus and the amount of Level 1, Level 2 and Level 3 cache.
Clock rates should not be used when comparing different computers or different processor families. Rather, some software benchmark should be used. Clock rates can be very misleading since the amount of work different computer chips can do in one cycle varies. For example, RISC CPUs tend to have simpler instructions than CISC CPUs (but higher clock rates), and superscalar processors can execute more than one instruction per cycle (on average), yet it is not uncommon for them to do "less" in a clock cycle. In addition, subscalar CPUs or use of parallelism can also affect the quality of the computer regardless of clock rate.
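In other words, useful throughput is roughly clock rate times average instructions per cycle. A toy comparison (the IPC figures are invented purely for illustration):

```python
def mips(clock_mhz: float, avg_ipc: float) -> float:
    """Rough throughput in millions of instructions/sec: clock (MHz) x IPC."""
    return clock_mhz * avg_ipc

# A deep-pipelined, high-clock design can still lose to a slower, wider one.
print(mips(2500, 0.6))  # 1500.0 "MIPS"
print(mips(1800, 1.2))  # 2160.0 "MIPS"
```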
Sonus, you are correct about the MHz comparison. However, the PXA270 in the Universal can be safely "overclocked" to 624 MHz because the chip is designed to max out at that speed.
I would still like to see some benchmark tests between the 624 MHz PXA270 and the 528 MHz Qualcomm MSM7201A.
Generations aside, I can't see the Qualcomm chip outperforming the Intel chip by much, if at all. Also, it should be noted that the PXA270 can be scaled; I'm not sure if that is true for the MSM7201A.
The other catch phrase is "performance per watt". I bet the MSM7201A has a huge advantage over the PXA27x there, mainly due to the newer manufacturing process.
That may be true, wuzy, but the fact that the PXA270 is almost 5 years old and still being used in new devices should tell you plenty about its capabilities and performance.
Not really... It does, however, tell a lot about the stinginess of device manufacturers.
As for the overclocking: not every Universal can run at 624 MHz without crashing, because the CPUs go through a selection process after manufacturing, and there is simply no reason to use the best ones in a device that doesn't need them running at full speed.
The crashes are usually the result of the type of program used to overclock, and also the ROM. For the most part, people have found 624 MHz to be pretty stable, myself included. Some have even pushed it beyond that speed, but that's another story...
Also take this into consideration:
The Universal has been on the market since 2005, almost 4 years now. By industry standards it should be obsolete. Why is it not, then? Simply put, it is quite inexpensive compared to newer devices with similar features, sometimes fewer. When it comes to performance vs. price vs. features, you just cannot beat the value of the Universal and its blistering fast 520/624 MHz PXA270 CPU! The PXA270's performance is rivaled only by its bigger brother, the 800 MHz PXA320, which has already made its way into some newer devices.
genetik_freak said:
That may be true, wuzy, but the fact that the PXA270 is almost 5 years old and still being used in new devices should tell you plenty about its capabilities and performance.
Try out a Diamond/Touch Pro with Opera 9.5 the next time you see one and notice the speed difference.
On the MSM7201A it's a lot smoother than on our PXA27x.
The lack of drivers for the MSM7200 on a lot of devices released last year tainted our perception of the new-generation chips, I think.
Touch HD vs. ASUS Galaxy7 at the end of the year... hmmm.
I think you're missing the point wuzy.
I know there are newer devices out now that can deliver slightly better performance than the Universal in some areas, but considering how old our device is, that is to be expected. All I'm saying is that, given the Universal's age compared to what's out there now, it has held up well. Furthermore, with all the new cooked ROMs popping up, you can expect the Uni to live even longer!
Take a look at H.264 decoding and other genuinely high-performance tasks, and the PXA270 loses so badly to the PXA320 that it is not even funny anymore...
Why does the Uni keep up with most software? Because most programs are written for the old ARMv4 instruction set, thus wasting a lot of CPU cycles on newer processors that have already moved on. Apart from that, the average application simply does not need that much CPU power to begin with.
The Uni held out well in a market that is very slow to adopt new technologies to begin with. The Axim X50v had a dedicated graphics chip at the end of 2004; how many applications make use of that today? Only some games (ports, emulators) and media players. For those alone, though, the Axim has held out better than the Uni, as it is still one of the best-performing Pocket PCs on the market.
Our little one will be around for quite a while, but it is far, far from what today's devices can offer, and it shows if you run anything beyond mail and office apps on it.
Which Processor is faster & better
I gather from your input above that the "Qualcomm MSM7201A 528 MHz" has higher performance, clock rate, instructions per second, and performance per watt than the "Intel Bulverde 520 MHz", by about 2:1. Am I right?
Another Question:
What is the highest speed Processor available for the PDA industry today?
Best Regards.
IMHO the ARM Cortex processors are very far up the ladder when it comes to performance and energy consumption. The Pandora's makers claim 10 hours of runtime for their device. Together with its media chip, this little bugger is capable of decoding 720p HD video streams (take a look at the Archos 5).
I am not sure the MSM7201A chipset's CPU alone reaches twice the Uni's performance, but you will see a huge difference in apps that support and need the latest CPU architecture (media players & games). If (one way or another) the 3D capabilities can be put to use, you will probably see more than a 2:1 performance boost.
The sad truth is the Universal is one of the slowest VGA devices around, especially considering the lack of a graphics accelerator (which was even present in prototypes).
Too bad the dedicated 3D chip didn't make it into the final design. But it's still better than having a 3D accelerator without drivers! I have a Sharp EM-ONE here with a GoForce 5500 that could theoretically accelerate many video formats. The sad truth is that, because there are no drivers, no media player can make use of the chip. Even worse: because the graphics chip still controls the display, video is even slower, since the optimized XScale drivers can't be used. It's as if Sharp and NVIDIA wanted to punish users twice. So, as bad as it is, the Uni is not the worst device out there!
x86
I wonder why there are no x86 CPUs in mobile devices yet. Maybe because of the high power consumption? An x86 CPU running at 528 MHz would be more powerful than an ARM CPU, and the device could run an x86 OS like XP Embedded, with more features and capabilities...
x86-based systems are still too power-hungry and too complicated to be used in such a small device (sounds weird when talking about the HTC Universal, doesn't it?).

A good OMAP 3640 vs. Snapdragon vs. Hummingbird article

CPU performance from the new TI OMAP 3640 (yes, they're wrong again; it's the 3640 for the 1 GHz SoC, the 3630 is the 720 MHz one) is surprisingly good in Quadrant, the benchmarking tool that Taylor is using. In fact, as you can see from the Shadow benchmarks in the first article, it is shown outperforming the Galaxy S, which initially led me to believe that it was running Android 2.2 (which, you may know, can easily triple CPU performance). However, I've been assured that this is not the case, and the 3rd article seems to indicate as much, given that those benchmarks were obtained using a Droid 2 running 2.1.
Now, the OMAP 3600 series is simply a 45 nm version of the 3400 series we see in the original Droid, upclocked accordingly due to the reduced heat and improved efficiency of the smaller feature size.
If you need convincing, see TI’s own documentation: http://focus.ti.com/pdfs/wtbu/omap3_pb_swpt024b.pdf
So essentially the OMAP 3640 is the same CPU as what is contained in the original Droid but clocked up to 1 GHz. Why then is it benchmarking nearly twice as fast clock-for-clock (resulting in a nearly 4x improvement), even when still running 2.1? My guess is that the answer lies in memory bandwidth, and that evidence exists within some of the results from the graphics benchmarks.
We can see from the 3rd article that the Droid 2’s GPU performs almost twice as fast as the one in the original Droid. We know that the GPU in both devices are the same model, a PowerVR SGX 530, except that the Droid 2’s SGX 530 is, as is the rest of the SoC, on the 45 nm feature size. This means that it can be clocked considerably faster. It would be easy to assume that this is reason for the doubled performance, but that’s not necessarily the case. The original Droid’s SGX 530 runs at 110 MHz, substantially less than its standard clock speed of 200 MHz. This downclocking is likely due to the memory bandwidth limitations I discussed in my Hummingbird vs Snapdragon article, where the Droid original was running LPDDR1 memory at a fairly low bandwidth that didn’t allow for the GPU to function at stock speed. If those limitations were removed by adding LPDDR2 memory, the GPU could then be upclocked again (likely to around 200 MHz) to draw even with the new memory bandwidth limit, which is probably just about twice what it was with LPDDR1.
So what does this have to do with CPU performance? Well, it’s possible that the CPU was also being limited by LPDDR1 memory, and that the 65 nm Snapdragons that are also tied down to LPDDR1 memory share the same problem. The faster LPDDR2 memory could allow for much faster performance.
Lastly, since we know from the second article at the top that the Galaxy S performs so well with its GPU, why is it lacking in CPU performance, only barely edging past the 1 GHz Snapdragon?
It could be that the answer lies in the secret that Samsung is using to achieve those ridiculously fast GPU speeds. Even with LPDDR2 memory, I can’t see any way that the GPU could achieve 90 Mtps; the required memory bandwidth is too high. One possibility is the addition of a dedicated high-speed GPU memory cache, allowing the GPU access to memory tailored to handle its high-bandwidth needs. With this solution to memory bandwidth issues, Samsung may have decided that higher speed memory was unnecessary, and stuck with a slower solution that remains limited in the same manner as the current-gen Snapdragon.
Let's recap: TI probably dealt with the limitations of its GPU by dropping in higher-speed system RAM, thus boosting overall system bandwidth to nearly double GPU and CPU performance together.
Samsung may have dealt with limitations to the GPU by adding dedicated video memory that boosted GPU performance several times, but leaving CPU performance unaffected.
This, I think, is the best explanation to what I’ve seen so far. It’s very possible that I’m entirely wrong and something else is at play here, but that’s what I’ve got.
CPU Performance
Before I go into details on the Cortex-A8, Snapdragon, Hummingbird, and Cortex-A9, I should probably briefly explain how some ARM SoC manufacturers take different paths when developing their own products. ARM is the company that owns licenses for the technology behind all of these SoCs. They offer manufacturers a license to an ARM instruction set that a processor can use, and they also offer a license to a specific CPU architecture.
Most manufacturers will purchase the CPU architecture license, design a SoC around it, and modify it to fit their own needs or goals. T.I. and Samsung are examples of these; the S5PC100 (in the iPhone 3GS) as well as the OMAP3430 (in the Droid) and even the Hummingbird S5PC110 in the Samsung Galaxy S are all SoCs with Cortex-A8 cores that have been tweaked (or “hardened”) for performance gains to be competitive in one way or another. Companies like Qualcomm however will build their own custom processor architecture around a license to an instruction set that they’ve chosen to purchase from ARM. This is what the Snapdragon’s Scorpion processor is, a completely custom implementation that shares some similarities with Cortex-A8 and uses the same ARMv7 instruction set, but breaks away from some of the limitations that the Cortex-A8 may impose.
Qualcomm’s approach is significantly more costly and time consuming, but has the potential to create a processor that outperforms the competition. Through its own custom architecture configuration, (which Qualcomm understandably does not go into much detail regarding), the Scorpion CPU inside the Snapdragon SoC gains an approximate 5% improvement in instructions per clock cycle over an ARM Cortex-A8. Qualcomm appeals to manufacturers as well by integrating features such as GPS and cell network support into the SoC to reduce the need of a cell phone manufacturer having to add additional hardware onto the phone. This allows for a more compact phone design, or room for additional features, which is always an attractive option. Upcoming Snapdragon SoCs such as the QSD8672 will allow for dual-core processors (not supported by Cortex-A8 architecture) to boost processing power as well as providing further ability to scale performance appropriately to meet power needs. Qualcomm claims that we’ll see these chips in the latter half of 2010, and rumor has it that we’ll begin seeing them show up first in Windows Mobile 7 Series phones in the Fall. Before then, we may see a 45 nm version of the QSD8650 dubbed “QSD8650A” released in the Summer, running at 1.3 GHz.
You might think that the Hummingbird doesn't stand a chance against Qualcomm's custom-built monster, but Samsung isn't prepared to throw in the towel. In response to Snapdragon, they hired Intrinsity, a semiconductor company specializing in tweaking processor logic design, to customize the Cortex-A8 in the Hummingbird to perform certain binary functions using significantly fewer instructions than normal. Samsung estimates that 20% of the Hummingbird's functions are affected, and of those, on average 25-50% fewer instructions are needed to complete each task. Overall, the processor can perform tasks 5-10% more quickly while handling the same 2 instructions per clock cycle as an unmodified ARM Cortex-A8 processor, and Samsung states it outperforms all other processors on the market (a statement seemingly aimed at Qualcomm). Many speculate that it's likely that the S5PC110 CPU in the Hummingbird will be in the iPhone HD, and that its sister chip, the S5PV210, is inside the Apple A4 that powers the iPad. (UPDATE: Indications are that the model # of the SoC in the Apple iPad's A4 is "S5L8930", a Samsung part # that is very likely closely related to the S5PV210 and Hummingbird. I report and speculate upon this here.)
Lastly, we really should touch upon Cortex-A9. It is ARM’s next-generation processor architecture that continues to work on top of the tried-and-true ARMv7 instruction set. Cortex-A9 stresses production on the 45 nm scale as well as supporting multiple processing cores for processing power and efficiency. Changes in core architecture also allow a 25% improvement in instructions that can be handled per clock cycle, meaning a 1 GHz Cortex-A9 will perform considerably quicker than a 1 GHz Cortex-A8 (or even Snapdragon) equivalent. Other architecture improvements such as support for out-of-order instruction handling (which, it should be pointed out, the Snapdragon partially supports) will allow the processor to have significant gains in performance per clock cycle by allowing the processor to prioritize calculations based upon the availability of data. T.I. has predicted its Cortex-A9 OMAP4440 to hit the market in late 2010 or early 2011, and promises us that their OMAP4 series will offer dramatic improvements over any Cortex-A8-based designs available today.
GPU performance
There are a couple problems with comparing GPU performance that some recent popular articles have neglected to address. (Yes, that’s you, AndroidAndMe.com, and I won’t even go into a rant about bad data). The drivers running the GPU, the OS platform it’s running on, memory bandwidth limitations as well as the software itself can all play into how well a GPU runs on a device. In short: you could take identical GPUs, place them in different phones, clock them at the same speeds, and see significantly different performance between them.
For example, let’s take a look at the iPhone 3GS. It’s commonly rumored to contain a PowerVR SGX 535, which is capable of processing 28 million triangles per second (Mt/s). There’s a driver file on the phone that contains “SGX535” in the filename, but that shouldn’t be taken as proof as to what it actually contains. In fact, GLBenchmark.com shows the iPhone 3GS putting out approximately 7 Mt/s in its graphics benchmarks. This initially led me to believe that the iPhone 3GS actually contained a PowerVR SGX 520 @ 200 MHz (which incidentally can output 7 Mt/s) or alternatively a PowerVR SGX 530 @ 100 MHz because the SGX 530 has 2 rendering pipelines instead of the 1 in the SGX 520, and tends to perform about twice as well. Now, interestingly enough, Samsung S5PC100 documentation shows the 3D engine as being able to put out 10 Mt/s, which seemed to support my theory that the device does not contain an SGX 535.
However, the GPU model and clock speed aren’t the only limiting factors when it comes to GPU performance. The SGX 535 for example can only put out its 28 Mt/s when used in conjunction with a device that supports the full 4.2 GB per second of memory bandwidth it needs to operate at this speed. Assume that the iPhone 3GS uses single-channel LPDDR1 memory operating at 200 MHz on a 32-bit bus (which is fairly likely). This allows for 1.6 GB/s of memory bandwidth, which is approximately 38% of what the SGX 535 needs to operate at its peak speed. Interestingly enough, 38% of 28 Mt/s equals just over 10 Mt/s… supporting Samsung’s claim (with real-world performance at 7 Mt/s being quite reasonable). While it still isn’t proof that the iPhone 3GS uses an SGX 535, it does demonstrate just how limiting single-channel memory (particularly slower memory like LPDDR1) can be and shows that the GPU in the iPhone 3GS is likely a powerful device that cannot be used to its full potential. The GPU in the Droid likely has the same memory bandwidth issues, and the SGX 530 in the OMAP3430 appears to be down-clocked to stay within those limitations.
But let’s move on to what’s really important; the graphics processing power of the Hummingbird in the Samsung Galaxy S versus the Snapdragon in the EVO 4G. It’s quickly apparent that Samsung is claiming performance approximately 4x greater than the 22 Mt/s the Snapdragon QSD8650’s can manage. It’s been rumored that the Hummingbird contains a PowerVR SGX 540, but at 200 MHz the SGX 540 puts out 28 Mt/s, approximately 1/3 of the 90 Mt/s that Samsung is claiming. Either Samsung has decided to clock an SGX 540 at 600 MHz, which seems rather high given reports that the chip is capable of speeds of “400 MHz+” or they’ve chosen to include a multi-core PowerVR SGX XT solution. Essentially this would allow 3 PowerVR cores (or 2 up-clocked ones) to hit the 90 Mt/s mark without having to push the GPU past 400 MHz.
Unfortunately however, this brings us right back to the memory bandwidth limitation argument again, because while the Hummingbird likely uses LPDDR2 memory, it still only appears to have single-channel memory controller support (capping memory bandwidth off at 4.2 GB/s), and the question is raised as to how the PowerVR GPU obtains the large amount of memory bandwidth it needs to draw and texture polygons at those high speeds. If the PowerVR SGX 540 (which, like the SGX 535 performs at 28 Mt/s at 200 MHz) requires 4.2 GB/s of memory bandwidth, drawing 90 Mt/s would require over 12.6 GB/s of memory bandwidth, 3 times what is available. Samsung may be citing purely theoretical numbers or using another solution such as possibly increasing GPU cache sizes. This would allow for higher peak speeds, but it’s questionable if it could achieve sustainable 90 Mt/s performance.
Qualcomm differentiates itself from most of the competition (once again) by using its own graphics processing solution. The company bought AMD’s Imageon mobile-graphics division in 2008, and used AMD’s Imageon Z430 (now rebranded Adreno 200) to power the graphics in the 65 nm Snapdragons. The 45 nm QSD8650A will include an Adreno 205, which will provide some performance enhancements to 2D graphics processing as well as hardware support for Adobe Flash. It is speculated that the dual-core Snapdragons will utilize the significantly more powerful Imageon Z460 (or Adreno 220), which apparently rivals the graphics processing performance of high-end mobile gaming systems such as the Sony PlayStation Portable. Qualcomm is claiming nearly the same performance (80 Mt/s) as the Samsung Hummingbird in its upcoming 45 nm dual-core QSD8672, and while LPDDR2 support and a dual-channel memory controller are likely, it seems pretty apparent that, like Samsung, something else must be at play for them to achieve those claims.
While Samsung and Qualcomm tend to stay relatively quiet about how they achieve their graphics performance, T.I. has come out and specifically stated that its upcoming OMAP4440 SoC supports both LPDDR2 and a dual-channel memory controller paired with a PowerVR SGX 540 chip to provide “up to 2x” the performance of its OMAP3 line. This is a reasonable claim assuming the SGX 540 is clocked to 400 MHz and requires a bandwidth of 8.5 GB/s which can be achieved using LPDDR2 at 533 MHz in conjunction with the dual-channel controller. This comparatively docile graphics performance may be due to T.I’s rather straightforward approach to the ARM Cortex-A9 configuration.
Power Efficiency
Moving onward, it's also easily noticeable that the next-generation chipsets on the 45 nm scale are going to be a significant improvement in terms of performance and power efficiency. The Hummingbird in the Samsung Galaxy S demonstrates this potential, but unfortunately we still lack the power consumption numbers we really need to understand how well it stacks up against the 65 nm Snapdragon in the EVO 4G. It can be safely assumed that the Galaxy S will have better overall battery life than the EVO 4G given the lower power requirements of the 45 nm chip, the more power-efficient Super AMOLED display, and the fact that both phones sport equal-capacity 1500 mAh batteries. However, it should be noted that the upcoming 45 nm dual-core Snapdragon is claimed to be coming with a 30% decrease in power needs, which would allow the 1.5 GHz SoC to run at nearly the same power draw as the current 1 GHz Snapdragon. Cortex-A9 also boasts numerous improvements in efficiency, claiming power consumption numbers nearly half those of the Cortex-A8, as well as the ability to use multiple-core technology to scale processing power in accordance with energy limitations.
While it’s almost universally agreed that power efficiency is a priority for these processors, many criticize the amount of processing power these new chips are bringing to mobile devices, and ask why so much performance is necessary. Whether or not mobile applications actually need this much power is not really the concern however; improved processing and graphics performance with little to no additional increase in energy needs will allow future phones to actually be much more efficient in terms of power. This is because ultimately, power efficiency relies in a big part on the ability of the hardware in the phone to complete a task quickly and return to an idle state where it consumes very little power. This “burst” processing, while consuming fairly high amounts of power for very short periods of time, tends to be more economical than prolonged, slower processing. So as long as ARM chipset manufacturers can continue to crank up the performance while keeping power requirements low, there’s nothing but gains to be had.
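The bandwidth arithmetic in the quoted article is easy to re-run (a sketch that just replays the article's own assumed numbers; nothing here is measured):

```python
def capped_mtps(peak_mtps: float, needed_gbps: float, available_gbps: float) -> float:
    """Scale a GPU's peak triangle rate by the share of required bandwidth actually available."""
    return peak_mtps * min(1.0, available_gbps / needed_gbps)

# SGX 535: 28 Mt/s peak needing 4.2 GB/s; 32-bit LPDDR1 @ 200 MHz gives ~1.6 GB/s.
print(capped_mtps(28, 4.2, 1.6))  # ~10.7 Mt/s, close to Samsung's 10 Mt/s figure
print(4.2 * 90 / 28)              # ~13.5 GB/s needed for 90 Mt/s on the same scaling
print(533 * 2 * 4 * 2 / 1000)     # dual-channel LPDDR2 @ 533 MHz -> ~8.5 GB/s (the OMAP4440 claim)
```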
http://alienbabeltech.com/main/?p=19309
http://alienbabeltech.com/main/?p=17125
It's a good read for noobs like me. Also read the comments, as there's lots of constructive criticism [that actually adds to the information in the article].
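On the quoted article's "burst processing" point, a toy energy comparison (all numbers invented for illustration) shows why racing to idle can win despite the higher peak draw:

```python
def energy_joules(avg_watts: float, seconds: float) -> float:
    """Energy = average power x time."""
    return avg_watts * seconds

# Finish in 1 s at 2 W then idle at 0.05 W for 9 s, vs. grinding at 0.8 W for 10 s.
race_to_idle = energy_joules(2.0, 1.0) + energy_joules(0.05, 9.0)  # 2.45 J
slow_and_steady = energy_joules(0.8, 10.0)                         # 8.0 J
print(race_to_idle, slow_and_steady)
```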
Kind of wild to come across people quoting me when I'm just Googling the web for more info.
I'd just like to point out that I was probably wrong on the entire first part about the 3640. I can't post links yet, but Google "Android phones benchmarked; it's official, the Galaxy S is the fastest." for my blog article on why.
And the reason I'm out here poking around for more information is that AnandTech.com (well known for its accurate and detailed articles) just repeatedly described the SoC in the Droid X as an OMAP 3630 instead of a 3640.
EDIT - I've just found a blog on TI's website that calls it a 3630. I guess that's that! I need to find a TI engineer to make friends with for some inside info.
Anyhow, thanks for linking my work!
Make no mistake, the OMAP 3xxx series gets left in the dust by the Hummingbird.
Also, I wouldn't really say that Samsung hired Intrinsity to make the CPU; they worked together. Intrinsity is owned by Apple, and the Hummingbird is the same core as the A4, but with a faster graphics processor, the PowerVR SGX 540.
There was a bug in the Galaxy S unit they tested, which the author later confirmed in his own comments.

Exynos 5 Octa and Snapdragon 800

Does anyone else think that the new-generation Exynos SoC will support 802.11ac and LTE-A? Or play back 1080p video at 60 fps and 2K at 30 fps? These features were never really discussed for the chipset itself.
The Snapdragon 800 has been confirmed to support all of the above. It sounds as if the Snapdragon 800 series will be the superior chipset, while the Exynos Octa will likely provide better power efficiency in some regard. It would be pretty disappointing if the Galaxy S IV got stuck with a Snapdragon 600 processor, given the date it's likely to ship. It might make me consider the Note this time around.
I really hope all these rumors are fake; Samsung should use Exynos in their flagship Galaxy S line! If not the Octa, maybe the Exynos 5 quad-core at 1.8-2.0 GHz!
All the Snapdragon 600 happens to be is a mid-tier SoC that improves on the S4 Pro's GPU and performance. Real A15 architectures should blow this chipset out of the water. People seem to think that what they see now is good, but when the Snapdragon 800 and other A15-based chips make their debut, this will feel dated very quickly.
megagodx said:
All the Snapdragon 600 happens to be is a mid-tier SoC that improves on the S4 Pro's GPU and performance. Real A15 architectures should blow this chipset out of the water. People seem to think that what they see now is good, but when the Snapdragon 800 and other A15-based chips make their debut, this will feel dated very quickly.
Clock-for-clock, the A15 is just ~15% faster than Krait; I don't think there's that much difference between the two.
They are both really solid performers, and the battle is all about the maximum clock versus the power required.
The SD800 will also feature Quick Charge 2.0, which is supposed to charge your battery 75% faster than SoCs without that function; the SD600 doesn't feature it either. I'm pretty sure you've seen the initial Tegra 4 benchmarks (based on a real A15 architecture): they wipe the floor with the HTC One's SD600. At a claimed 75% performance increase over the Snapdragon S4 Pro (last year's best mobile SoC), the SD800 should bring comparable or better results than the T4. It's going to be kind of a disappointment if the S IV ends up with an SD600 and no Exynos 5 Quad/Octa, at least.

Adreno 418 GPU turn-off

Why did they go with the Adreno 418 GPU when the Nexus 6, which came out a year ago, already has the Adreno 420?
Will you really be able to tell the difference? I doubt it. It's just a numbers game, really.
Sent from my GT-I9300 using Tapatalk
They had no choice. They could choose the 805, the 808, or the 810. If they chose the 805, everyone would complain that it's a processor from 2014. If they chose the 810, everyone would complain that it will overheat and get crappy battery life. The 808 is the best choice for the least number of complaints. Yeah, it has a slightly slower GPU than the 805, but the CPU is much faster than the 805, and even faster than the 810 in demanding situations because the 810 will completely turn off its BIG cores if it gets too warm, whereas the 808 doesn't get hot enough that it needs to turn off the BIG cores and switch to little.
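As a toy model of the throttling behavior described above (the threshold is entirely made up; real thermal governors are vendor- and driver-specific):

```python
def active_clusters(temp_c: float, big_park_temp_c: float = 75.0) -> str:
    """Toy big.LITTLE thermal policy: park the big cluster past a temperature threshold."""
    return "little only" if temp_c >= big_park_temp_c else "big + little"

for temp in (45, 70, 82):
    print(temp, "C ->", active_clusters(temp))
```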
Geordie Affy said:
Will you really be able to tell the difference? I doubt it. It's just a numbers game, really.
Sent from my GT-I9300 using Tapatalk
Cool story. If I use that logic my old lg g2 should be enough.
Sent from my LG-D800
gtg465x said:
They had no choice. They could choose the 805, the 808, or the 810. If they chose the 805, everyone would complain that it's a processor from 2014. If they chose the 810, everyone would complain that it will overheat and get crappy battery life. The 808 is the best choice for the least number of complaints. Yeah, it has a slightly slower GPU than the 805, but the CPU is much faster than the 805, and even faster than the 810 in demanding situations because the 810 will completely turn off its BIG cores if it gets too warm, whereas the 808 doesn't get hot enough that it needs to turn off the BIG cores and switch to little.
Yeah, but it sucks that the whole Android ecosystem has to depend on Qualcomm. Imagine if next year they screw up again... It seems like Samsung's CPUs rock this year, and Apple's too.
Sent from my LG-D800
ambervals6 said:
Cool story. If I use that logic my old lg g2 should be enough.
Sent from my LG-D800
My point exactly, lol. Whatever phone you buy, it will be an upgrade in some way... all this numbers game is becoming a tad OTT.
Sent from my GT-I9300 using Tapatalk
ambervals6 said:
Yeah, but it sucks that the whole Android ecosystem has to depend on Qualcomm. Imagine if next year they screw up again... It seems like Samsung's CPUs rock this year, and Apple's too.
Sent from my LG-D800
Yep, it does suck. And it is a shame: Qualcomm could have made one great SoC instead of two meh ones. If they were smart, they would have put the Adreno 430 GPU in the 808 and marketed it as their flagship phone SoC, and marketed the 810 as a tablet-only SoC, because tablets can dissipate the heat better. But none of that is Motorola's fault. I think Motorola chose wisely among the not-so-great choices they had.
Sent from my Nexus 6 using XDA Forums Pro.
gtg465x said:
Yep, it does suck. And it is a shame: Qualcomm could have made one great SoC instead of two meh ones. If they were smart, they would have put the Adreno 430 GPU in the 808 and marketed it as their flagship phone SoC, and marketed the 810 as a tablet-only SoC, because tablets can dissipate the heat better. But none of that is Motorola's fault. I think Motorola chose wisely among the not-so-great choices they had.
Sent from my Nexus 6 using XDA Forums Pro.
Well, as long as there is no serious competition out there, Qualcomm will continue not to give a single **** and, unfortunately, upgrades will come in lame increments.
Sent from my LG-D800
I think it's funny how all the new 810 SoCs have the cores downclocked to 1.8 GHz.
Sent from my HTC6525LVW using Tapatalk
ambervals6 said:
Well, as long as there is no serious competition out there, Qualcomm will continue not to give a single **** and, unfortunately, upgrades will come in lame increments.
Sent from my LG-D800
Competition is coming. Qualcomm should be worried. http://www.androidpolice.com/2015/0...qualcomm-begun-a-long-slow-fall-from-the-top/
gtg465x said:
They had no choice. They could choose the 805, the 808, or the 810. If they chose the 805, everyone would complain that it's a processor from 2014. If they chose the 810, everyone would complain that it will overheat and get crappy battery life. The 808 is the best choice for the least number of complaints. Yeah, it has a slightly slower GPU than the 805, but the CPU is much faster than the 805, and even faster than the 810 in demanding situations because the 810 will completely turn off its BIG cores if it gets too warm, whereas the 808 doesn't get hot enough that it needs to turn off the BIG cores and switch to little.
This isn't the full story, and it's a little misleading. Here are the technical details:
The 418 is as good as, if not better than, the 420, for the following reasons:
1. The 418 has the same "system specs" as the 420, minus the down-throttling.
2. The 418 was fabbed on a smaller process (20 nm) vs. the 420 (28 nm). This means greater power savings and less heat.
3. The 418/420 is to the 430 what the NVIDIA GTX 960 is to the GTX 980, but you won't get the 430 unless you get the 810.
Source: https://en.wikipedia.org/wiki/Adreno#Variants
640k said:
This isn't the full story, and it's a little misleading. Here are the technical details:
The 418 is as good as, if not better than, the 420, for the following reasons:
1. The 418 has the same "system specs" as the 420, minus the down-throttling.
2. The 418 was fabbed on a smaller process (20 nm) vs. the 420 (28 nm). This means greater power savings and less heat.
3. The 418/420 is to the 430 what the NVIDIA GTX 960 is to the GTX 980, but you won't get the 430 unless you get the 810.
Source: https://en.wikipedia.org/wiki/Adreno#Variants
I'm not sure I trust that Wikipedia article. There are no references cited for the 418 information. Looking at Anandtech, the Adreno 418 is slower in EVERY graphics benchmark than the Adreno 420, even though it has the advantage of being paired with a faster CPU.
Here's a quote from Anandtech: "In GFXBench, we can see that the Adreno 418 GPU is a definite step up from the Adreno 330 in the Snapdragon 801, but not quite at the level of the Snapdragon 805's Adreno 420."
Look at the benchmarks for yourself here. The Nexus 6 and Note 4 (SD 805 / Adreno 420) both beat the LG G4 (SD 808 / Adreno 418) in every single graphics and gaming test performed. http://www.anandtech.com/show/9379/the-lg-g4-review/7
So I think it's safe to say the 420 is a little better than the 418. I don't think they would have named it the 418 if it was just a die shrunk 420. Usually a die shrink allows for faster clock speeds, and if a die shrink was the only difference, you would expect the 418 to match the performance of the 420, or even surpass it because the clock speed could go higher. That isn't the case, so I think there are some architectural differences as well that aren't shown in the Wiki article. I think Qualcomm naming it the 418 instead of the 422 even though it's newer is a pretty good indication that Qualcomm knows it isn't as good as the 420.
gtg465x said:
I'm not sure I trust that Wikipedia article. There are no references cited for the 418 information. Looking at Anandtech, the Adreno 418 is slower in EVERY graphics benchmark than the Adreno 420, even though it has the advantage of being paired with a faster CPU.
Here's a quote from Anandtech: "In GFXBench, we can see that the Adreno 418 GPU is a definite step up from the Adreno 330 in the Snapdragon 801, but not quite at the level of the Snapdragon 805's Adreno 420."
Look at the benchmarks for yourself here. The Nexus 6 and Note 4 (SD 805 / Adreno 420) both beat the LG G4 (SD 808 / Adreno 418) in every single graphics and gaming test performed. http://www.anandtech.com/show/9379/the-lg-g4-review/7
So I think it's safe to say the 420 is a little better than the 418. I don't think they would have named it the 418 if it was just a die shrunk 420. Usually a die shrink allows for faster clock speeds, and if a die shrink was the only difference, you would expect the 418 to match the performance of the 420, or even surpass it because the clock speed could go higher. That isn't the case, so I think there are some architectural differences as well that aren't shown in the Wiki article. I think Qualcomm naming it the 418 instead of the 422 even though it's newer is a pretty good indication that Qualcomm knows it isn't as good as the 420.
Did anyone else notice how high the 2014 Moto X placed in those benchmarks? Motorola must really optimize the kernel.
Sent from my HTC6525LVW using Tapatalk
Positive spin time!
The 808's GPU handles games fine and consumes less power than the 7420's GPU (S6 & Note 5). I would rather have a GPU that handles games as-is than one that drains more battery; I prefer a more power-economical GPU in a portable device. There is a reason you see a lot of complaints about S6 battery life and not about other phones: most of it correlates with use of GPU-heavy apps.
rushless said:
Positive spin time!
The 808's GPU handles games fine and consumes less power than the 7420's GPU (S6 & Note 5). I would rather have a GPU that handles games as-is than one that drains more battery; I prefer a more power-economical GPU in a portable device. There is a reason you see a lot of complaints about S6 battery life and not about other phones: most of it correlates with use of GPU-heavy apps.
Lmao this guy
Sent from my A0001
What was comedic, besides my own awareness that it's spin? It's true that games perform fine on the 808 and that the 7420's GPU consumes more power. As for bigger, fancier games that need even more power: not very practical on a portable device, so it's kind of moot with a small battery.
Sent from my SM-N910V using Tapatalk
If it's a concern, you should wait for the Nexus to drop with its rumored Snapdragon 820 and next-gen Adreno.
As for this year's Qualcomm products sucking: remember that market pressure forced them to release chips with generic ARM cores because their in-house 64-bit designs weren't ready. The 820 ditches the octa-core big.LITTLE arrangement for a quad-core Qualcomm design. Lots to look forward to.
And I think the 808 is probably the best chip they could have picked for the X this year.
ambervals6 said:
Cool story. If I use that logic my old lg g2 should be enough.
Sent from my LG-D800
It is, but you're all too spoiled to see it.
SchmidtA99 said:
I think it's funny how all the new 810 soc have the cores down clocked to 1.8ghz.
Sent from my HTC6525LVW using Tapatalk
I think the 810 is way better than the 808: Adreno 430 vs. 418, and the 430 is WAY better. If the 810 gets too hot, you can always turn off the two high-performance cores, but you can never have an Adreno 430 in the 808.
gtg465x said:
I'm not sure I trust that Wikipedia article. There are no references cited for the 418 information. Looking at Anandtech, the Adreno 418 is slower in EVERY graphics benchmark than the Adreno 420, even though it has the advantage of being paired with a faster CPU.
Here's a quote from Anandtech: "In GFXBench, we can see that the Adreno 418 GPU is a definite step up from the Adreno 330 in the Snapdragon 801, but not quite at the level of the Snapdragon 805's Adreno 420."
Look at the benchmarks for yourself here. The Nexus 6 and Note 4 (SD 805 / Adreno 420) both beat the LG G4 (SD 808 / Adreno 418) in every single graphics and gaming test performed. http://www.anandtech.com/show/9379/the-lg-g4-review/7
So I think it's safe to say the 420 is a little better than the 418. I don't think they would have named it the 418 if it was just a die shrunk 420. Usually a die shrink allows for faster clock speeds, and if a die shrink was the only difference, you would expect the 418 to match the performance of the 420, or even surpass it because the clock speed could go higher. That isn't the case, so I think there are some architectural differences as well that aren't shown in the Wiki article. I think Qualcomm naming it the 418 instead of the 422 even though it's newer is a pretty good indication that Qualcomm knows it isn't as good as the 420.
It's slower because Qualcomm halved the memory bus from 128-bit to 64-bit. The S810/A430 has the same bandwidth as the S805 because they doubled the speed of the RAM: 128-bit LPDDR3-800 (1600 MT/s effective) is equal to 64-bit LPDDR4-1600 (3200 MT/s effective) at 25.6 GB/s.
Unfortunately, Qualcomm limited the S808 to 64-bit LPDDR3-933 (1866 MT/s effective): 14.9 GB/s.
The 418 and 420 are the same GPU, architecturally. The 418 could probably be slightly faster in non-bandwidth-limited scenarios (low-resolution 3D).
Memory bandwidth dropped from 25.6 GB/s to 14.9 GB/s; that's roughly a 40% cut, and it shows up directly in bandwidth-heavy graphics workloads. Hence, it's a 418.
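Those figures check out (a quick sketch of the peak-bandwidth arithmetic, using the bus widths and transfer rates stated above):

```python
def peak_bandwidth_gbs(bus_bits: int, effective_mtps: int) -> float:
    """Peak memory bandwidth: bytes per transfer x effective transfers/sec (MT/s)."""
    return (bus_bits / 8) * effective_mtps / 1000

print(peak_bandwidth_gbs(128, 1600))  # S805: 128-bit LPDDR3-800  -> 25.6 GB/s
print(peak_bandwidth_gbs(64, 3200))   # S810: 64-bit  LPDDR4-1600 -> 25.6 GB/s
print(peak_bandwidth_gbs(64, 1866))   # S808: 64-bit  LPDDR3-933  -> ~14.9 GB/s
```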

Are the Pixel benchmarks true

Hey guys, I just pre-ordered the Pixel and am a little worried about the benchmarks that have been released. Do you think they're accurate? In some of the articles I've read, the clock speeds they claim it's running at are those of the 820, not the 821. I mean, the 6P scored higher in benchmarks than the Pixel. How can that be right with the newest processor?
Look at the hands-on videos; you won't be worried about performance after that. It looks like Google has done a lot of optimization. Benchmarks don't tell the whole story.
Well, seeing as the 821 is to the 820 what the 801 was to the 800... i.e., it's the same damned chip, I'm not really sure why you would expect a dramatic performance change.
The 821's spec sheet shows a slightly higher peak CPU frequency than the 820's, but that doesn't mean everyone who uses it is obligated to run it at the highest frequency.
So here is a little bit of information about CPU manufacturing:
Every CPU core is a little bit different. Some are stable at lower voltages and higher frequencies than others. The CPU specification indicates a MINIMUM frequency that the part MUST be stable at while operating within the designed power envelope. In other words, another CPU may be able to operate at the higher frequency, but it won't do so within the designed power envelope -- it will require overvolting.
The CPUs are separated according to their levels of stability. Call that "binning". One of these CPUs that bins poorly might be called a Snapdragon 820, and one that bins well will be called a Snapdragon 821. Within each model name, there are further levels of distinction used to set the baseline voltages, in order to minimize the voltage the chips are fed and so reduce power consumption as much as possible.
So you can think of an underclocked Snapdragon 821 as a SUPER DUPER AWESOME binned Snapdragon 820, operating at a lower voltage and therefore consuming less power.
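A toy illustration of that binning idea (thresholds and frequencies invented; these are not Qualcomm's actual criteria):

```python
import random

def bin_die(max_stable_ghz: float) -> str:
    """Toy binning rule: dies stable at higher clocks earn the better label."""
    return "label as 821" if max_stable_ghz >= 2.34 else "label as 820"

random.seed(1)
for die in (round(random.uniform(2.0, 2.6), 2) for _ in range(6)):
    print(die, "GHz ->", bin_die(die))
```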
Don't worry about benchmarks! What matters is which SoC you have, how well its heat is dissipated, and, most importantly, how the software is done (kernel, drivers, Android, binaries, etc.).
There can be many devices with the same SoC and better scores that, in the end, still lag more.
For instance, my previous Z5 Compact (running Sony's near-AOSP Android) had a much better SoC than my current Nexus 5X, yet IMO it lagged more.
There's no way you can choose a device based on benchmarks; you must try both devices yourself (ideally with your apps) and see the difference.
To give another example: a 2013 Nexus 5 is extremely fast on KitKat (with ART) and even on Marshmallow (but not on Lollipop).
However, it still throttles much more than a 5X because of its frequency, process node, and many other things.
doitright said:
Well, seeing as the 821 is to the 820 what the 801 was to the 800... i.e., it's the same damned chip, I'm not really sure why you would expect a dramatic performance change.
The CPUs are separated according to their levels of stability. Call that "binning". One of these CPUs that bins poorly might be called a Snapdragon 820, and one that bins well will be called a Snapdragon 821. Within each model name, there are further levels of distinction used to set the baseline voltages, in order to minimize the voltage the chips are fed and so reduce power consumption as much as possible.
So you can think of an underclocked Snapdragon 821 as a SUPER DUPER AWESOME binned Snapdragon 820, operating at a lower voltage and therefore consuming less power.
There actually are some differences between the 821 and the 820; it's not exactly the same chip. A pretty great breakdown is here: https://www.gizmotimes.com/comparison/snapdragon-821-vs-snapdragon-820/16403
But essentially: slightly better power savings, improved camera performance, and a VR SDK.
Thanks for all the replies, guys. I was just confused as to why a chip that Qualcomm says should have a 10% performance increase over the 820 is benchmarking lower than most 820s.
Good info, thanks guys!
We know nothing yet; time will tell, obviously. The videos in the early previews look great, but we'll see how these perform under heavy load.
jbrooks58 said:
Hey guys, I just pre-ordered the Pixel and am a little worried about the benchmarks that have been released. Do you think they're accurate? In some of the articles I've read, the clock speeds they claim it's running at are those of the 820, not the 821. I mean, the 6P scored higher in benchmarks than the Pixel. How can that be right with the newest processor?
I'd like to see you actually find something that claims the 6P is anywhere near the Pixel in performance benchmarks. The reality is that the Pixel is more than 2x faster across the board.
As for comparing it with the 820: there are two things you can buy with that "1" -- more speed, or less power. They seem to be opting for the latter.
All the benchmarks I could find pit it against either Apple or Samsuck. Samsuck is well known for building TO the benchmarks (sometimes even *cheating*), which makes their scores unnaturally high, and comparing against Apple is just silly, since there is no baseline between them due to architectural differences and the complete lack of a common software stack. In other words, in a comparison between the Pixel and anything made by Apple, you could get a smaller number despite *actually* being considerably faster. The numbers don't equate across platforms.
jbrooks58 said:
Thanks for all the replies, guys. I was just confused as to why a chip that Qualcomm says should have a 10% performance increase over the 820 is benchmarking lower than most 820s.
That 10% is an interesting figure.
The SD820 has clock rates of 2.15 GHz on 2 cores, and 1.59 GHz on the other 2 cores.
Multiply by 1.1 (add 10%) and you get 2.365 and 1.749 GHz.
The SD821 has clock rates of 2.34 GHz on 2 cores and 2.19 GHz on the other 2 cores.
On those first two cores, that is actually just shy of a 10% higher clock; on the other two, it is considerably more than 10%. Note that a system's performance does NOT scale linearly with CPU frequency.
The other thing to note is that the Pixel's specs show it operating at 2x2.15 + 2x1.6 GHz, just like the SD820.
So what we can read from that is that the Pixel's CPUs are **underclocked**. That allows it to use less battery power and run cooler while still running *really really fast*. If you want more, unlock it and clock it up to 821 spec; I think you'll find this phone is an "overclocker's" dream, even if it isn't really overclocking.
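Running that arithmetic as a quick check of the clock deltas quoted above:

```python
sd820 = (2.15, 1.59)  # GHz: fast pair, slow pair
sd821 = (2.34, 2.19)
for old, new in zip(sd820, sd821):
    print(f"{old} -> {new} GHz: +{100 * (new / old - 1):.1f}%")
# +8.8% and +37.7%: just under 10% on the fast pair, far more on the slow pair
```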
That 10% figure comes directly from Qualcomm's publications on performance for the 821 vs 820.
craig0r said:
There actually are some differences between the 821 and the 820; it's not exactly the same chip. A pretty great breakdown is here: https://www.gizmotimes.com/comparison/snapdragon-821-vs-snapdragon-820/16403
But essentially: slightly better power savings, improved camera performance, and a VR SDK.
Good read, thanks.
