TI OMAP 3630 vs Qualcomm Snapdragon - Android Software/Hacking General [Developers Only]

Having a Droid, I have dug into the details of TI's OMAP platform, and I heard that the Xtreme will have the 3630. I am very impressed with the whole omapzoom.org effort and the platform itself. I am not at all familiar with the Qualcomm offerings. Anyone up for discussing the differences between these two platforms, including the advantages and disadvantages of each?
Cheers, jdeclue

TI's OMAP is based on the ARM Cortex-A8, which is supposed to be more efficient and faster. Qualcomm has a long history of poor graphics support. The GPU is horrible. Well, it's not exactly that bad, but the drivers provided for Qualcomm's GPU are often inadequate and hence underperforming. Sure, the Snapdragon is a whole lot better than the previous Qualcomm SoCs (i.e. the MSM72xx series), but I feel the reason for that is the higher clock speed; the 1 GHz clock tends to help with performance. Personally, though, I would prefer the TI OMAP, simply because it has a Cortex-A8 core, which performs better, and a much better GPU.
So in a nutshell:
The Qualcomm Snapdragon's clock speed may be higher than the OMAP's, but performance-wise they are comparatively close.
However, the OMAP's GPU is better than the Snapdragon's.
With the ongoing trend toward increasingly graphically intensive apps, I think the way to go is TI's OMAP. (Or until Qualcomm releases the 1.5 GHz dual-core Snapdragon, then I would consider it.)
There. That's my take.

Related

MSM7200 528 MHz vs. XScale 800 MHz?

Is there much of a difference, besides clock speed, between the MSM7200 in the Touch Pro 2 and the XScale running at 800 MHz in the Omnia II in terms of performance? Any help would be appreciated.
If you ask me it (should) make quite a difference. The MSM7200 is quite notorious for its respectable clock speed but slow performance.
I used to have a Diamond (528 MHz) and then got an Omnia (Marvell, 624 MHz), which was already quite a difference. I guess the 800 MHz will make even more of a difference.
Do note that there is a big difference in resolution between the Diamond and the Omnia, so that will also give some speed increase. The Omnia II has a Samsung 800 MHz chip (as far as I know) and I don't know what kind of performance that will give.
Both CPUs are ARMv6 (afaik), so from that perspective you could say the 800 MHz one is faster than the 528 MHz one.
See this:
Samsung chip: http://pdadb.net/index.php?m=cpu&id=a6410&c=samsung_s3c6410
Qualcomm chip: http://pdadb.net/index.php?m=cpu&id=a7200a&c=qualcomm_msm7200a
My old Diamond was much slower than my current iPAQ 211. The iPAQ has a 624 MHz Marvell and is much faster and more responsive than the Diamond. It can also play videos back much better. 800 MHz would just increase the performance gap.
The Omnia II is ARM11, which is slightly faster than the iPhone 3G (both get blown out of the water by the 3GS), and should support OpenGL ES 2.0.
Here's Samsung's data sheet on it: http://www.samsung.com/global/syste.../2008/5/30/785500s3c6410_datasheet_200804.pdf
The MSM7200, I BELIEVE (don't quote me), would be faster than the S3C6410 if it had proper drivers... however, that's not the case.
Numbers are an indicator and nothing more. They give you a clue, but clues can be very misleading. If they were enough on their own, why would you need benchmarking?
http://en.wikipedia.org/wiki/Megahertz_myth
(Yeah, I know wiki is sometimes full of BS, but it certainly backs up what I learned at uni and during my assembler cracking/virus-writing days.)
The ONLY way to compare CPUs is to run the same application and then run the SAME task in that application. Once you have done so, ALL you can say is "for performing task X in application Y, processor ZXY running operating system ABC is faster on the BLAHBLAH platform" and nothing more. It does not mean it's faster at everything; indeed, you cannot say it's faster than ANYTHING else until you have tested it.
At the end of the day, the processor's performance is affected by drivers, processor design, and the operating system and its installation.
And software... if there are no apps that incorporate the acceleration, then it's wasted.
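To make the benchmarking point above concrete, here's a minimal, purely illustrative sketch (not tied to any real benchmark suite; the workload is a made-up stand-in) of what a fair comparison looks like: run the SAME task, in the same way, on each device, and only compare those timings.

```python
import time

def run_task():
    """The SAME workload on every device: here, a stand-in CPU-bound loop."""
    total = 0
    for i in range(1_000_000):
        total += i * i
    return total

def benchmark(task, repeats=5):
    """Time the task several times and report the best run in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    # This only tells you how fast THIS task runs on THIS device and OS build;
    # it says nothing about other workloads, drivers, or platforms.
    print(f"best of 5: {benchmark(run_task):.3f} s")
```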
What about Qualcomm's 1 GHz Snapdragon CPU?
How fast is that compared to the current 528 MHz? haha
I'm waiting for the Acer S200 with the 1 GHz CPU.
netnerd said:
What about Qualcomm's 1 GHz Snapdragon CPU?
How fast is that compared to the current 528 MHz? haha
I'm waiting for the Acer S200 with the 1 GHz CPU.
Please re-read, and if you still don't understand, I'll try and explain again.
Clearly, Marvell is better, even though its clock speed is lower than what Qualcomm offers!
It also has better decoders, which helps with things like video playback and multitasking while a video is playing!
I'm waiting for devices with Marvell's PXA168-series CPU.
Manufacturers use the Qualcomm chip because it's cheaper and provides HSDPA and a GPS module on the same chip, while the others don't, so separate chips must be used. Those dedicated parts can be better, though, as with gpsOne vs. SiRFstar III.

Best SoC! Tegra 2 vs. OMAP 4 vs. Snapdragon 2

Hey guys, there was a great thread before about Tegra vs. Snapdragon. With the recent release of new chips such as the Tegra 2 and the Qualcomm QSD8672 (Snapdragon 2), I wanted to see which chipset is more powerful and offers the best battery life: the OMAP 4440, the Qualcomm QSD8672, or the Tegra 2.
I'm not excited about the Snapdragon 2 because it's just based on the current A8-class architecture, only clocked higher and with a better GPU.
What interests me are the OMAP4 and the Tegra 2, because both are based on the next-generation A9 CPU. Although both are announced as dual-core, the OMAP4 will also be released in a single-core variant, which early phones might adopt, unlike the Tegra 2, which is assured to be dual-core.
LG's Optimus range will have the Tegra 2!!! I'm so excited to see this! Next week Tuesday is the big reveal.
Can someone tell me which SoC has the best GPU and which renders more triangles per second.
What is better, dual core or single core??
SupremeBeaver said:
LG's Optimus range will have the Tegra 2!!!
The Optimus 2X does, but the Optimus 3D uses the OMAP 4430 SoC.
CARLITOZ18 said:
What is better, dual core or single core??
It honestly all depends on how you're looking at it.
Battery: dual core, most likely, since its total TDP is about as much as a single core's TDP.
Performance: this varies a lot with the system. If the system is multi-core ready, then multitasking will be better. If the app is multi-core ready, the dual core will win, as long as the single core isn't clocked, say, 2.5x higher. Remember something: double the cores does not mean double the performance.
Speed: again, this varies with the system and the app.
Really, I feel that multiple cores and threads are the way to go. Clocking the CPUs higher is only going to push power usage up. Multi-core systems can be more efficient, if the system makes use of them.
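One way to see why double the cores does not mean double the performance is Amdahl's law. Below is a small illustrative sketch; the 60% "multi-core ready" fraction is a made-up assumption, not a measurement of any real app.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup when only part of the workload can use extra cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

if __name__ == "__main__":
    # Hypothetical app where 60% of the work scales across cores:
    # two cores give ~1.43x, not 2x, so a single core clocked ~1.5x
    # higher could already match it (ignoring memory and power limits).
    for cores in (1, 2, 4):
        print(f"{cores} core(s): {amdahl_speedup(0.6, cores):.2f}x")
```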
I'd say the OMAP4 from Texas Instruments is the top dog as of this moment in the SoC category, especially considering Tegra 2's Achilles heel in the high profile 720P/1080P dept.

A good OMAP 3640 vs. Snapdragon vs. Hummingbird article

CPU performance from the new TI OMAP 3640 (yes, they're wrong again, it's 3640 for the 1 GHz SoC; 3630 is the 720 MHz one) is surprisingly good on Quadrant, the benchmarking tool that Taylor is using. In fact, as you can see from the Shadow benchmarks in the first article, it is shown outperforming the Galaxy S, which initially led me to believe that it was running Android 2.2 (which, as you may know, can easily triple CPU performance). However, I've been assured that this is not the case, and the 3rd article seems to indicate as much, given that those benchmarks were obtained using a Droid 2 running 2.1.
Now, the OMAP 3600 series is simply a 45 nm version of the 3400 series we see in the original Droid, upclocked accordingly due to the reduced heat and improved efficiency of the smaller feature size.
If you need convincing, see TI’s own documentation: http://focus.ti.com/pdfs/wtbu/omap3_pb_swpt024b.pdf
So essentially the OMAP 3640 is the same CPU as what is contained in the original Droid but clocked up to 1 GHz. Why then is it benchmarking nearly twice as fast clock-for-clock (resulting in a nearly 4x improvement), even when still running 2.1? My guess is that the answer lies in memory bandwidth, and that evidence exists within some of the results from the graphics benchmarks.
We can see from the 3rd article that the Droid 2’s GPU performs almost twice as fast as the one in the original Droid. We know that the GPU in both devices are the same model, a PowerVR SGX 530, except that the Droid 2’s SGX 530 is, as is the rest of the SoC, on the 45 nm feature size. This means that it can be clocked considerably faster. It would be easy to assume that this is reason for the doubled performance, but that’s not necessarily the case. The original Droid’s SGX 530 runs at 110 MHz, substantially less than its standard clock speed of 200 MHz. This downclocking is likely due to the memory bandwidth limitations I discussed in my Hummingbird vs Snapdragon article, where the Droid original was running LPDDR1 memory at a fairly low bandwidth that didn’t allow for the GPU to function at stock speed. If those limitations were removed by adding LPDDR2 memory, the GPU could then be upclocked again (likely to around 200 MHz) to draw even with the new memory bandwidth limit, which is probably just about twice what it was with LPDDR1.
So what does this have to do with CPU performance? Well, it’s possible that the CPU was also being limited by LPDDR1 memory, and that the 65 nm Snapdragons that are also tied down to LPDDR1 memory share the same problem. The faster LPDDR2 memory could allow for much faster performance.
Lastly, since we know from the second article at the top that the Galaxy S performs so well with its GPU, why is it lacking in CPU performance, only barely edging past the 1 GHz Snapdragon?
It could be that the answer lies in the secret that Samsung is using to achieve those ridiculously fast GPU speeds. Even with LPDDR2 memory, I can’t see any way that the GPU could achieve 90 Mtps; the required memory bandwidth is too high. One possibility is the addition of a dedicated high-speed GPU memory cache, allowing the GPU access to memory tailored to handle its high-bandwidth needs. With this solution to memory bandwidth issues, Samsung may have decided that higher speed memory was unnecessary, and stuck with a slower solution that remains limited in the same manner as the current-gen Snapdragon.
Let's recap: TI probably dealt with the limitations to its GPU by dropping in higher-speed system RAM, thus boosting overall system bandwidth and nearly doubling GPU and CPU performance together.
Samsung may have dealt with limitations to the GPU by adding dedicated video memory that boosted GPU performance several times, but leaving CPU performance unaffected.
This, I think, is the best explanation to what I’ve seen so far. It’s very possible that I’m entirely wrong and something else is at play here, but that’s what I’ve got.
Click to expand...
Click to collapse
CPU Performance
Before I go into details on the Cortex-A8, Snapdragon, Hummingbird, and Cortex-A9, I should probably briefly explain how some ARM SoC manufacturers take different paths when developing their own products. ARM is the company that owns licenses for the technology behind all of these SoCs. They offer manufacturers a license to an ARM instruction set that a processor can use, and they also offer a license to a specific CPU architecture.
Most manufacturers will purchase the CPU architecture license, design a SoC around it, and modify it to fit their own needs or goals. T.I. and Samsung are examples of these; the S5PC100 (in the iPhone 3GS) as well as the OMAP3430 (in the Droid) and even the Hummingbird S5PC110 in the Samsung Galaxy S are all SoCs with Cortex-A8 cores that have been tweaked (or “hardened”) for performance gains to be competitive in one way or another. Companies like Qualcomm however will build their own custom processor architecture around a license to an instruction set that they’ve chosen to purchase from ARM. This is what the Snapdragon’s Scorpion processor is, a completely custom implementation that shares some similarities with Cortex-A8 and uses the same ARMv7 instruction set, but breaks away from some of the limitations that the Cortex-A8 may impose.
Qualcomm’s approach is significantly more costly and time consuming, but has the potential to create a processor that outperforms the competition. Through its own custom architecture configuration, (which Qualcomm understandably does not go into much detail regarding), the Scorpion CPU inside the Snapdragon SoC gains an approximate 5% improvement in instructions per clock cycle over an ARM Cortex-A8. Qualcomm appeals to manufacturers as well by integrating features such as GPS and cell network support into the SoC to reduce the need of a cell phone manufacturer having to add additional hardware onto the phone. This allows for a more compact phone design, or room for additional features, which is always an attractive option. Upcoming Snapdragon SoCs such as the QSD8672 will allow for dual-core processors (not supported by Cortex-A8 architecture) to boost processing power as well as providing further ability to scale performance appropriately to meet power needs. Qualcomm claims that we’ll see these chips in the latter half of 2010, and rumor has it that we’ll begin seeing them show up first in Windows Mobile 7 Series phones in the Fall. Before then, we may see a 45 nm version of the QSD8650 dubbed “QSD8650A” released in the Summer, running at 1.3 GHz.
You might think that the Hummingbird doesn't stand a chance against Qualcomm's custom-built monster, but Samsung isn't prepared to throw in the towel. In response to Snapdragon, they hired Intrinsity, a semiconductor company specializing in tweaking processor logic design, to customize the Cortex-A8 in the Hummingbird to perform certain binary functions using significantly fewer instructions than normal. Samsung estimates that 20% of the Hummingbird's functions are affected, and of those, on average 25-50% fewer instructions are needed to complete each task. Overall, the processor can perform tasks 5-10% more quickly while handling the same 2 instructions per clock cycle as an unmodified ARM Cortex-A8 processor, and Samsung states it outperforms all other processors on the market (a statement seemingly aimed at Qualcomm). Many speculate that it's likely that the S5PC110 CPU in the Hummingbird will be in the iPhone HD, and that its sister chip, the S5PV210, is inside the Apple A4 that powers the iPad. (UPDATE: Indications are that the model # of the SoC in the Apple iPad's A4 is "S5L8930", a Samsung part # that is very likely closely related to the S5PV210 and Hummingbird. I report and speculate upon this here.)
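Those percentages hang together arithmetically. Here's a quick back-of-the-envelope check using only the figures quoted above (a sketch, not anything from Samsung or Intrinsity):

```python
def overall_speedup(affected_fraction, instruction_reduction):
    """Speedup if `affected_fraction` of the work needs
    `instruction_reduction` fewer instructions, all else being equal."""
    remaining_work = (1 - affected_fraction) + affected_fraction * (1 - instruction_reduction)
    return 1.0 / remaining_work

if __name__ == "__main__":
    # 20% of functions affected, with 25-50% fewer instructions for those:
    low = overall_speedup(0.20, 0.25)   # ~1.05x -> roughly 5% faster
    high = overall_speedup(0.20, 0.50)  # ~1.11x -> roughly 10% faster
    print(f"{low:.2f}x to {high:.2f}x overall")
```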
Lastly, we really should touch upon Cortex-A9. It is ARM’s next-generation processor architecture that continues to work on top of the tried-and-true ARMv7 instruction set. Cortex-A9 stresses production on the 45 nm scale as well as supporting multiple processing cores for processing power and efficiency. Changes in core architecture also allow a 25% improvement in instructions that can be handled per clock cycle, meaning a 1 GHz Cortex-A9 will perform considerably quicker than a 1 GHz Cortex-A8 (or even Snapdragon) equivalent. Other architecture improvements such as support for out-of-order instruction handling (which, it should be pointed out, the Snapdragon partially supports) will allow the processor to have significant gains in performance per clock cycle by allowing the processor to prioritize calculations based upon the availability of data. T.I. has predicted its Cortex-A9 OMAP4440 to hit the market in late 2010 or early 2011, and promises us that their OMAP4 series will offer dramatic improvements over any Cortex-A8-based designs available today.
GPU performance
There are a couple problems with comparing GPU performance that some recent popular articles have neglected to address. (Yes, that’s you, AndroidAndMe.com, and I won’t even go into a rant about bad data). The drivers running the GPU, the OS platform it’s running on, memory bandwidth limitations as well as the software itself can all play into how well a GPU runs on a device. In short: you could take identical GPUs, place them in different phones, clock them at the same speeds, and see significantly different performance between them.
For example, let’s take a look at the iPhone 3GS. It’s commonly rumored to contain a PowerVR SGX 535, which is capable of processing 28 million triangles per second (Mt/s). There’s a driver file on the phone that contains “SGX535” in the filename, but that shouldn’t be taken as proof as to what it actually contains. In fact, GLBenchmark.com shows the iPhone 3GS putting out approximately 7 Mt/s in its graphics benchmarks. This initially led me to believe that the iPhone 3GS actually contained a PowerVR SGX 520 @ 200 MHz (which incidentally can output 7 Mt/s) or alternatively a PowerVR SGX 530 @ 100 MHz because the SGX 530 has 2 rendering pipelines instead of the 1 in the SGX 520, and tends to perform about twice as well. Now, interestingly enough, Samsung S5PC100 documentation shows the 3D engine as being able to put out 10 Mt/s, which seemed to support my theory that the device does not contain an SGX 535.
However, the GPU model and clock speed aren’t the only limiting factors when it comes to GPU performance. The SGX 535 for example can only put out its 28 Mt/s when used in conjunction with a device that supports the full 4.2 GB per second of memory bandwidth it needs to operate at this speed. Assume that the iPhone 3GS uses single-channel LPDDR1 memory operating at 200 MHz on a 32-bit bus (which is fairly likely). This allows for 1.6 GB/s of memory bandwidth, which is approximately 38% of what the SGX 535 needs to operate at its peak speed. Interestingly enough, 38% of 28 Mt/s equals just over 10 Mt/s… supporting Samsung’s claim (with real-world performance at 7 Mt/s being quite reasonable). While it still isn’t proof that the iPhone 3GS uses an SGX 535, it does demonstrate just how limiting single-channel memory (particularly slower memory like LPDDR1) can be and shows that the GPU in the iPhone 3GS is likely a powerful device that cannot be used to its full potential. The GPU in the Droid likely has the same memory bandwidth issues, and the SGX 530 in the OMAP3430 appears to be down-clocked to stay within those limitations.
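The 1.6 GB/s and "38% of 28 Mt/s" figures above follow from simple bus arithmetic. Here's a small sketch of that calculation; the double-data-rate factor of 2 is my assumption about how the article arrives at 1.6 GB/s from a 200 MHz, 32-bit configuration:

```python
def peak_bandwidth_gbps(clock_mhz, bus_bits, transfers_per_clock=2):
    """Peak bandwidth in GB/s: clock * transfers per clock * bytes per transfer."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

if __name__ == "__main__":
    available = peak_bandwidth_gbps(200, 32)  # single-channel LPDDR1 @ 200 MHz
    needed = 4.2                              # GB/s for the SGX 535 at 28 Mt/s
    fraction = available / needed
    print(f"{available:.1f} GB/s available ({fraction:.0%} of requirement)")
    print(f"implied throughput: {fraction * 28:.1f} Mt/s")  # just over 10 Mt/s
```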
But let's move on to what's really important: the graphics processing power of the Hummingbird in the Samsung Galaxy S versus the Snapdragon in the EVO 4G. It's quickly apparent that Samsung is claiming performance approximately 4x greater than the 22 Mt/s the Snapdragon QSD8650 can manage. It's been rumored that the Hummingbird contains a PowerVR SGX 540, but at 200 MHz the SGX 540 puts out 28 Mt/s, approximately 1/3 of the 90 Mt/s that Samsung is claiming. Either Samsung has decided to clock an SGX 540 at 600 MHz, which seems rather high given reports that the chip is capable of speeds of "400 MHz+", or they've chosen to include a multi-core PowerVR SGX XT solution. Essentially this would allow 3 PowerVR cores (or 2 up-clocked ones) to hit the 90 Mt/s mark without having to push the GPU past 400 MHz.
Unfortunately however, this brings us right back to the memory bandwidth limitation argument again, because while the Hummingbird likely uses LPDDR2 memory, it still only appears to have single-channel memory controller support (capping memory bandwidth off at 4.2 GB/s), and the question is raised as to how the PowerVR GPU obtains the large amount of memory bandwidth it needs to draw and texture polygons at those high speeds. If the PowerVR SGX 540 (which, like the SGX 535 performs at 28 Mt/s at 200 MHz) requires 4.2 GB/s of memory bandwidth, drawing 90 Mt/s would require over 12.6 GB/s of memory bandwidth, 3 times what is available. Samsung may be citing purely theoretical numbers or using another solution such as possibly increasing GPU cache sizes. This would allow for higher peak speeds, but it’s questionable if it could achieve sustainable 90 Mt/s performance.
Qualcomm differentiates itself from most of the competition (once again) by using its own graphics processing solution. The company bought AMD’s Imageon mobile-graphics division in 2008, and used AMD’s Imageon Z430 (now rebranded Adreno 200) to power the graphics in the 65 nm Snapdragons. The 45 nm QSD8650A will include an Adreno 205, which will provide some performance enhancements to 2D graphics processing as well as hardware support for Adobe Flash. It is speculated that the dual-core Snapdragons will utilize the significantly more powerful Imageon Z460 (or Adreno 220), which apparently rivals the graphics processing performance of high-end mobile gaming systems such as the Sony PlayStation Portable. Qualcomm is claiming nearly the same performance (80 Mt/s) as the Samsung Hummingbird in its upcoming 45 nm dual-core QSD8672, and while LPDDR2 support and a dual-channel memory controller are likely, it seems pretty apparent that, like Samsung, something else must be at play for them to achieve those claims.
While Samsung and Qualcomm tend to stay relatively quiet about how they achieve their graphics performance, T.I. has come out and specifically stated that its upcoming OMAP4440 SoC supports both LPDDR2 and a dual-channel memory controller paired with a PowerVR SGX 540 chip to provide “up to 2x” the performance of its OMAP3 line. This is a reasonable claim assuming the SGX 540 is clocked to 400 MHz and requires a bandwidth of 8.5 GB/s which can be achieved using LPDDR2 at 533 MHz in conjunction with the dual-channel controller. This comparatively docile graphics performance may be due to T.I’s rather straightforward approach to the ARM Cortex-A9 configuration.
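Using the same rough arithmetic as above, the 8.5 GB/s figure works out if "533 MHz" is the LPDDR2 clock, with two transfers per clock, across two 32-bit channels; those channel-width details are my assumptions, not something TI has spelled out:

```python
def dual_channel_bandwidth_gbps(clock_mhz, bus_bits=32, channels=2):
    """Peak GB/s for double-data-rate memory spread across multiple channels."""
    return clock_mhz * 1e6 * 2 * (bus_bits / 8) * channels / 1e9

if __name__ == "__main__":
    # LPDDR2 @ 533 MHz, dual channel -> ~8.5 GB/s, roughly double the
    # ~4.2 GB/s single-channel ceiling discussed earlier.
    print(f"{dual_channel_bandwidth_gbps(533):.1f} GB/s")
```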
Power Efficiency
Moving onward, it's also easily noticeable that the next-generation chipsets on the 45 nm scale are going to be a significant improvement in terms of performance and power efficiency. The Hummingbird in the Samsung Galaxy S demonstrates this potential, but unfortunately we still lack the power consumption numbers we really need to understand how well it stacks up against the 65 nm Snapdragon in the EVO 4G. It can be safely assumed that the Galaxy S will have overall better battery life than the EVO 4G given the lower power requirements of the 45 nm chip, the more power-efficient Super AMOLED display, as well as the fact that both phones sport equal-capacity 1500 mAh batteries. However, it should be noted that the upcoming 45 nm dual-core Snapdragon is claimed to be coming with a 30% decrease in power needs, which would allow the 1.5 GHz SoC to run at nearly the same power draw as the current 1 GHz Snapdragon. Cortex-A9 also boasts numerous improvements in efficiency, claiming power consumption numbers nearly half that of the Cortex-A8, as well as the ability to use multiple-core technology to scale processing power in accordance with energy limitations.
While it’s almost universally agreed that power efficiency is a priority for these processors, many criticize the amount of processing power these new chips are bringing to mobile devices, and ask why so much performance is necessary. Whether or not mobile applications actually need this much power is not really the concern however; improved processing and graphics performance with little to no additional increase in energy needs will allow future phones to actually be much more efficient in terms of power. This is because ultimately, power efficiency relies in a big part on the ability of the hardware in the phone to complete a task quickly and return to an idle state where it consumes very little power. This “burst” processing, while consuming fairly high amounts of power for very short periods of time, tends to be more economical than prolonged, slower processing. So as long as ARM chipset manufacturers can continue to crank up the performance while keeping power requirements low, there’s nothing but gains to be had.
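A toy example of the "burst then idle" point, with entirely made-up power numbers chosen only to show the shape of the trade-off:

```python
def energy_joules(power_watts, seconds):
    """Energy is power multiplied by time."""
    return power_watts * seconds

if __name__ == "__main__":
    # Hypothetical task observed over a 10-second window (illustrative numbers).
    # Fast chip: 1.5 W for 2 s to finish, then idles at 0.05 W for 8 s.
    burst = energy_joules(1.5, 2) + energy_joules(0.05, 8)
    # Slow chip: 0.5 W for the full 10 s to finish the same task.
    sustained = energy_joules(0.5, 10)
    print(f"burst-then-idle: {burst:.1f} J vs sustained: {sustained:.1f} J")
```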
http://alienbabeltech.com/main/?p=19309
http://alienbabeltech.com/main/?p=17125
It's a good read for noobs like me; also read the comments, as there's lots of constructive criticism [that actually adds to the information in the article].
Kind of wild to come across people quoting me when I'm just Googling the web for more info.
I'd just like to point out that I was probably wrong on the entire first part about the 3640. I can't post links yet, but Google "Android phones benchmarked; it's official, the Galaxy S is the fastest." for my blog article on why.
And the reason I'm out here poking around for more information is because AnandTech.com (well known for their accurate and detailed articles) just repeatedly described the SoC in the Droid X as an OMAP 3630 instead of the 3640.
EDIT - I've just found a blog on TI's website that calls it a 3630. I guess that's that! I need to find a TI engineer to make friends with for some inside info.
Anyhow, thanks for linking my work!
Make no mistake, the OMAP 3xxx series gets left in the dust by the Hummingbird.
Also, I wouldn't really say that Samsung hired Intrinsity to make the CPU - they worked together. Intrinsity is owned by Apple; the Hummingbird is the same core as the A4, but with a faster graphics processor - the PowerVR SGX 540.
There was a bug in the Galaxy S unit they tested, which the author later confirmed in his own comments.

[Q] Snapdragon Vs Hummingbird

It seems like every review I read tells me the Hummingbird processor is a much better and more capable processor than the Snapdragon. If this is true, why hasn't HTC produced a phone with a Hummingbird CPU? HTC is way ahead in the Android game experience-wise, but they haven't seemed to make any major hardware changes other than upping the ROM, RAM, and CPU speed. Would someone please tell me that Samsung's new CPU isn't as good as all the reviews say it is?
I can't comment on the CPUs, but I would just like to mention that cost, failure rate, etc. all play a role in the decision about what CPU to use. There may be reasons for HTC not using the Hummingbird yet other than performance.
The Hummingbird is a little faster CPU-wise, which does surprise me, as the Snapdragon has a higher quoted instructions-per-clock figure. However, its GPU is really in a different league from that in the current Snapdragons, and that's where the big difference lies in benchmarks (though that will, of course, only help GPU-accelerated things).
HTC don't necessarily use the chip that benchmarks best on any given day - they have a relationship with Qualcomm and presumably get preferential rates from them, and they have an established platform. The good news for them is that Qualcomm are bringing out new Snapdragons with higher clock speeds and better GPUs (and dual cores, though it remains to be seen if/when they arrive in phones), so they should equalise and quite probably turn the tables soon.
Competition is good - we were stuck with 500-600 MHz XScales and ARM11s for ages, and now things are finally moving along.

[Q] K3 vs PXA310 vs MT6516?

Hello forum!
I've found quite a few phones which use the Huawei Hisilicon K3 but I've also found others using a MediaTek MT6516 CPU or a Marvell PXA310 CPU. There were phones running Qualcomm CPUs (like the MSM7200, MSM7225 and the MSM7600) but I think they're crap.
The K3 runs at 460 MHz, the MT6516 runs at 416 MHz + 280 MHz (as it's dual-core), and the PXA310 runs at 624 MHz. Just for reference, the Qualcomm CPUs all run at 528 MHz.
Which would be best? (And for me that means speed.)
I'm bumping this so it doesn't get lost.
YES, Qualcomm's low-end CPUs are crap... but all these MSM7200/7225/7600 chips are actually not as bad as you think. The PXA310 has quite a different structure from the MSM7xxx (65 nm; the 45 nm parts not included).
The PXA310 is better if you only care about CPU speed, and it has an integrated video accelerator, so it's more capable for media use.
The MSM7xxx also has its merits: ARM11 and ARM9 CPUs, better than the PXA310 for general use and apps.
The MTK solution seems to be similar to the MSM7xxx... but the Qualcomm CPUs come out better.
As for the K3, I haven't used any device with it before, so I have no idea about it.
So it's something like this:
(Qualcomm) MSM7xxx - Best for apps (multi-tasking?)
(MediaTek) MT6516 - Good for apps (considering its price)
(Marvell) PXA310 - Best for media (better decoders?)
(Huawei) K3 - Unknown (for the moment)
Thanks for helping me. It also helps that you're from China, since some of these processors and chipsets are mostly marketed in China.
I won't consider the MT6516 then, because it's a 2G chipset, and while it's okay it may not be 'the one'.
So out of the K3 (ARM926-based) and the PXA310 (XScale-based), which would be best? (Qualcomm could be considered, I guess.)
bricky149 said:
I won't consider the MT6516 then, because it's a 2G chipset, and while it's okay it may not be 'the one'.
So out of the K3 (ARM926-based) and the PXA310 (XScale-based), which would be best? (Qualcomm could be considered, I guess.)
You mean you only have a choice within the MSM7xxx series? Or do you just want better ones? Obviously, the 45 nm products would be more appealing...
If it's only 65 nm parts and the like, I would say the OMAP CPUs have the lowest power consumption, and Qualcomm mostly has the best general performance (okay, if you don't think too much about its GPU and power use).
Some people say Qualcomm's 65 nm CPUs are low-performance, high-frequency crap... yes, they are... but thanks to their high frequency they have commendable general-use capability. If you are not a mobile video fan, just get a Qualcomm CPU for almost any use (multitasking? of course!).
At the moment I'm looking into budget(?) smartphones with good specs. China's home to the sort of items I'm looking for, so I thought I'd look there.
Qualcomm's CPUs are okay, but from various videos I'm not convinced they're for me. It's mainly down to driver issues, so the same chip can be faster in one phone than in another. It's one reason I'm keeping my options open.
Regarding the PXA310, I have no idea how well it performs, so that's why I'm asking.
