Tegra or Snapdragon - General Topics

Hi everybody, I just have some questions.
I plan to replace my HTC Hermes next year, but I don't know which device will be the better base...
Snapdragon or Tegra.
Tegra seems to have 8 cores of execution for great graphics, but not a high clock (600-800MHz). Snapdragon has the GHz and is supposed to reach 1.3GHz in 2010. There is also a dual-core Snapdragon at 2x1.5GHz that is supposed to be available this year, but will it be for smartphones?
These are my questions, because a PDA is a lot of money for me and I want to choose the right device...
Thanks

Well, Snapdragon is a multi-core SoC just like Tegra, but what Nvidia is so proud of is the power island: it means they can shut off unneeded modules (e.g. turn off every module except the modem when in standby). Tegra uses an ARM11 CPU, whereas Snapdragon is based on an improved Cortex-A8; besides, it is clocked at 1GHz, so Tegra can't win this one. The GPU is better on the Tegra, and video performance is probably better too, but when it comes to brute force the Snapdragon wins hands down.
I think that is all you need to know about Tegra and Snapdragon. As for that 2x1.5GHz Snapdragon: it is designed to be used in smartbooks. It would be overkill for a smartphone, at least for now.

Thanks that's all I wanted to know

Also, a MHz is not just a MHz.
First of all, a Qualcomm MHz can mean more or less of a performance boost than an OMAP MHz.
Not to mention it doesn't really matter if the CPU is super fast when the RAM, storage and other I/O of a device can't keep up.

joplayer said:
Tegra seems to have 8 cores of execution for great graphics, but not a high clock (600-800MHz). Snapdragon has the GHz and is supposed to reach 1.3GHz in 2010.
Tegra, just like the Snapdragon, is an SoC. If we apply the same logic Nvidia uses, then the Snapdragon is also a multi-core SoC ( CPU, GPU, DSP, ... ). It's just marketing to make people think they are getting an 8-CPU system.
Like Wishmaster89 pointed out, there is a major difference between the CPUs used in the two systems.
The 600MHz ARM11 ( ARMv6 ) in the Tegra is capable of executing roughly 1/3 of what the Snapdragon's 1GHz ARMv7 CPU can do.
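That "roughly 1/3" figure lines up with commonly cited DMIPS/MHz efficiency numbers (ARM11 around 1.25, Cortex-A8 around 2.0). These are ballpark public figures, not vendor-verified benchmarks, but the arithmetic is easy to check:

```python
# Rough integer-throughput comparison using commonly cited DMIPS/MHz
# efficiency figures (ARM11 ~1.25, Cortex-A8 ~2.0). Ballpark numbers,
# not vendor-verified benchmarks.

def dmips(dmips_per_mhz, clock_mhz):
    """Peak Dhrystone MIPS estimate: per-clock efficiency x clock."""
    return dmips_per_mhz * clock_mhz

tegra_arm11 = dmips(1.25, 600)     # 750 DMIPS
snapdragon_a8 = dmips(2.0, 1000)   # 2000 DMIPS

print(f"Tegra ARM11:   {tegra_arm11:.0f} DMIPS")
print(f"Snapdragon A8: {snapdragon_a8:.0f} DMIPS")
print(f"ratio: {tegra_arm11 / snapdragon_a8:.2f}")  # 0.38 -> roughly 1/3
```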
The GPU, on the other hand, is more powerful in the Tegra. There is a little list commonly used to compare the overall ( theoretical ) strengths of each platform's GPU:
Nintendo DS: 120,000 triangles/s, 30 M pixels/s
PowerVR MBX-Lite (iPhone 3G): 1 M triangles/s, 100 M pixels/s
Samsung S3C6410 (Omnia II): 4 M triangles/s, 125.6 M pixels/s
ATI Imageon (Qualcomm MSM72xx): 4 M triangles/s, 133 M pixels/s
PowerVR SGX 530 (Palm Pre): 14 M triangles/s, ___ M pixels/s
ATI Imageon Z430 (Toshiba TG01): 22 M triangles/s, 133 M pixels/s
PowerVR SGX 535 (iPhone 3GS): 28 M triangles/s, 400 M pixels/s
Sony PSP: 33 M triangles/s, 664 M pixels/s
PowerVR SGX 540 (TI OMAP4): 35 M triangles/s, 1000 M pixels/s
Nvidia Tegra APX2500 (Zune HD): 40 M triangles/s, 600 M pixels/s
ATI Imageon _ (Qualcomm QSD8672): 80 M triangles/s, >500 M pixels/s
So, the Tegra's GPU is about twice as powerful as the Snapdragon's ATI Z430 ( looking at triangles ). The reason I use the term "theoretically" is that a lot of factors can make or break a GPU ( many more than on a CPU ): bad drivers, bandwidth limitations, too little memory, a bad mix of texture units, vertex units, etc.
The problem with Nvidia is that they have always had the habit of exaggerating things ( a lesson learned more than a few times in the past ).
Another problem: are those GPUs actually being used on PDAs/smartphones? A lesson I learned in the past from the x50v, with its own dedicated, powerful ( for its time ) 2700G ( 800,000 triangles back then ). The reality is that most applications rely mostly on the CPU.
At best, if you have dedicated games written for the PDA/smartphone market, very few will tap into all the power that the Tegra has to offer.
Even the PSX emulators, which run great ( full speed, 50/60fps PAL/NTSC games ) on the Snapdragon... Forget about running a lot of PSX games on an ARM11 without tweaking ( and frame skipping ), because emulation relies mostly on brute-force CPU power ( and this is where the Snapdragon shines ).
So, what is there besides games? Video playback? Sure... The Tegra can supposedly do 1080p, while the TI OMAP & Snapdragon only do 720p. But from what I have read, it's mostly the DSP that does the work. The Snapdragon's DSP runs at 600MHz; I can't find any information about the Tegra's DSP. Does it even have one? Anybody with more info on how they handle things?
When it comes down to PDAs/smartphones, take it from me: the most important thing is first the CPU, then the amount of memory ( and memory speed ), then the GPU.
Let's just say I'd like to see a fair comparison between both systems, to see their real power ( and not some fake Nvidia PR that a lot of people still fall for ).
Like I said, I don't exactly trust Nvidia's numbers when their PR posts stuff like this:
[image: Nvidia PR benchmark chart]
Those numbers are what you can call a pure lie. People from the OpenPandora project ( which uses a TI OMAP3630 @ 600MHz, with a slower GPU ) are able to run Quake 3 at 35+ fps... Yet Nvidia claims 5fps for the Snapdragon, which is actually more powerful than the OMAP3630... I love those little [*] next to the text, with the small print below: "* NVIDIA estimates". In other words, how much trust can you place in specs from a company that pulls stunts like that?
Also... Snapdragon is used in the following smartphones that I know of: Toshiba TG01, Asus F1 ( S200 ), HTC HD2 ( Leo ), and a few more that are on the way. Where is the Tegra? The MS Zune... That's it...
You'd think that HTC, Toshiba and Asus have all looked at the different available SoC providers ( TI, Qualcomm, Samsung, Nvidia, etc. ). Yet... who do they pick for their new top-of-the-line products?
I hope this helps...

OP, there isn't much to add after all that expert info, but I can make it easy for you: SD = raw power, Tegra = fancy graphics. I prefer power, because of the better overall performance.

As I see it, the Tegra chip has 2 600MHz cores + 6 other cores to do video, audio, etc.
So a 1GHz Snapdragon would have to split its MHz to deal with any audio, video, etc., whilst the Tegra chip would have separate cores dealing with this stuff, leaving the 2 600MHz cores free.
This would make Tegra a lot faster than Snapdragon.

One thing that would be interesting is battery life
in various situations,
excluding the Atom, as it's not really a phone CPU.

One thing of note is that every Snapdragon phone, although it seems fast, still has the standard WM lag at times (probably more WM than the CPU),
whilst the Zune HD looks super smooth and very fast.
We will have to wait for the first Tegra WM phone to see if it has the WM lag, as it's hard to tell by comparing an MP3/4 player (whose OS was probably built from the ground up for the chip) to a phone.

Ganondolf said:
As I see it, the Tegra chip has 2 600MHz cores + 6 other cores to do video, audio, etc.
So a 1GHz Snapdragon would have to split its MHz to deal with any audio, video, etc., whilst the Tegra chip would have separate cores dealing with this stuff, leaving the 2 600MHz cores free.
This would make Tegra a lot faster than Snapdragon.
You're completely wrong! As I said, both are multi-core SoCs. Both Snapdragon and Tegra have separate cores for video and audio! The only difference is that Tegra can shut off an unneeded module where Snapdragon can't. Besides, they know that their CPU is slow, so they have to give people something that will make them forget about the CPU, and they decided that talking about "8 cores" on something as small as their SoC would be a good choice.
As I said before, the raw CPU power of the Snapdragon is at least 3x greater than the Tegra's, and the Zune HD is smoother because all the work is done on the GPU (besides, the whole Zune OS 4.0 was probably designed on Tegra, so don't expect it to lag), whereas WM is CPU-driven only. Wait for the HTC Leo to see an almost lag-free device (show me a device that never lags).
For the last time: for now, Tegra has a slow CPU, whereas Snapdragon has a beast of a CPU. Things should change with Tegra 2 and Snapdragon 2.

Ganondolf said:
As I see it, the Tegra chip has 2 600MHz cores + 6 other cores to do video, audio, etc.
So a 1GHz Snapdragon would have to split its MHz to deal with any audio, video, etc., whilst the Tegra chip would have separate cores dealing with this stuff, leaving the 2 600MHz cores free.
This would make Tegra a lot faster than Snapdragon.
*ugh* So much misinformation... I may not be an expert, but you just claimed that the Snapdragon needs to split its MHz to do... video? Did you even read the Snapdragon's specs? Dedicated... GPU. GPU = video!
Another wrong point: both cores are not at 600MHz. One core is at 600MHz, and the other is at 400MHz. The 600MHz core is an ARM11 core, and the 400MHz one is an ARM7 core ( not to be confused with the ARMv7, aka Cortex-A8 ).
The basic idea is that when a phone is in standby, the 400MHz ARM7 core does the basic staying-alive stuff, whereas the 600MHz ARM11 core is only used for the big stuff. The basic idea is good.
But the Snapdragon's 1GHz ARMv7 CPU is able to downscale and reduce its power footprint as well. Which solution is the better one... we will need to see.
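The downscaling point can be sketched with the standard CMOS dynamic-power relation (P ≈ C·V²·f): dropping both voltage and frequency cuts power superlinearly. The capacitance, voltage and frequency values below are purely illustrative, not specs for either chip:

```python
# Standard CMOS dynamic-power relation: P ~ C_eff * V^2 * f. Dropping
# both voltage and frequency together cuts power superlinearly, which
# is why downclocking helps so much. All constants are illustrative.

def dynamic_power(c_eff_farads, volts, freq_hz):
    return c_eff_farads * volts ** 2 * freq_hz

full_tilt = dynamic_power(1e-9, 1.2, 1.0e9)    # ~1.44 W
downscaled = dynamic_power(1e-9, 0.9, 0.25e9)  # ~0.20 W

print(f"full speed: {full_tilt:.2f} W, downscaled: {downscaled:.2f} W")
```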
To put things in perspective:
Tegra:
* ARM 11
* ARM 7
* GPU
* 2D Engine
* HD Video Encoder
* HD Video Decoder
* Audio
* Imaging
Snapdragon:
* ARMv7 ( Cortex A8 )
* GPU
* DSP
* HD Video Decoder
* ...
Now you will say: hey, look at all those extra cores that the Tegra has. Must be a powerhouse... No, it does not work like that.
The Snapdragon's 600MHz DSP has several capabilities, including dedicated image processing, etc. The question is: how fast is the image processor on the Tegra? If it's a separate core, it has its own frequency. This alone makes a big difference, because the slower that core, the longer it takes to do the job ( and the more power drain ).
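A toy model of that "slower core = longer job = more energy" argument; every cycle count and wattage here is made up for illustration, not a Tegra or Snapdragon spec:

```python
# Toy model: a fixed-size job of N cycles runs longer on a slower core,
# and if active power doesn't fall proportionally, total energy rises.
# All numbers are invented for illustration, not chip specs.

def job_energy(cycles, freq_hz, active_power_w):
    seconds = cycles / freq_hz
    joules = active_power_w * seconds
    return seconds, joules

# The same hypothetical 300M-cycle image-processing job on two cores:
t_fast, e_fast = job_energy(300e6, 600e6, 0.40)  # 0.50 s, 0.200 J
t_slow, e_slow = job_energy(300e6, 200e6, 0.25)  # 1.50 s, 0.375 J

print(f"600 MHz core: {t_fast:.2f} s, {e_fast:.3f} J")
print(f"200 MHz core: {t_slow:.2f} s, {e_slow:.3f} J")
```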
The 600MHz Tegra that we are comparing here has only 720p output capability, just like the Snapdragon. As far as I can tell, the Tegra 600 is used in the Zune. Something tells me that the Tegra 650 is more for notebooks.
HD encoding / HD decoding: by any definition, that is part of the GPU, just like the ATI Z430 has its own dedicated HD capabilities. And any GPU these days has the ability to disable parts of itself to save power, so we can assume the same capability is in the mobile variant. The Z430 is based on the GPU found in the X360; it has its own HD, audio, media, etc. processing capabilities ( aka, if you like to call them by Nvidia's terms: HD, audio and media "cores" ).
So, from a technical point of view, the Snapdragon also has 8 cores. Hell, we can trump that, because the DSP is capable of more than just image processing. So how many extra "cores" can be gained from that?
To be honest, there is so much misinformation that people jump on... It's actually kind of incredible ( and frightening )... Though I have to admit, looking at the Google results, Nvidia did a good job at spreading the FUBAR information. Most sites took over the information without questioning it one little bit...
Lag?
And Ganondolf, regarding the lag that you report: to be honest, I have shown several movies to a friend with WM6.5 + TouchFLO backported onto older HTC devices ( devices with the same slow CPUs as the Tegra uses ). Guess what... beyond a bit of lag in the image viewer, they had no lag.
Take a look at the videos of the HTC HD2 ( Snapdragon )... and find the lag there, please.
I have seen a few people like you before on other forums, going around all high & mighty about the Tegra. At first I was impressed by its general specs, until you start to look deeper and discover that the CPU is slow as hell ( and the second one is even worse ) compared to the Snapdragon / Cortex-A8 / ARMv7 design; that the "extra" cores are just functionality provided by the GPU; and that its 1080p claim does not come from the version in use now.
In fact, Snapdragon also has a 1080p capability: see the QSD8672. But you will not find that in smartphones just yet, just like the Tegra 650 with its 1080p. Has anybody even seen a Tegra 650 on the market? I don't think so ( for good reason ). Looks like another paper launch from Nvidia.

Simply put:
As of July, 2009 or Oct 2009 for that matter:
Snapdragon mobile phones = shipping.
Tegra mobile phones = vapourware. (not even any firm rumours)

Benjiro said:
Lag?
And Ganondolf, regarding the lag that you report: to be honest, I have shown several movies to a friend with WM6.5 + TouchFLO backported onto older HTC devices ( devices with the same slow CPUs as the Tegra uses ). Guess what... beyond a bit of lag in the image viewer, they had no lag.
Take a look at the videos of the HTC HD2 ( Snapdragon )... and find the lag there, please.
The lag I was talking about was on the Toshiba TG01, which I have played with. There is no point saying "look at videos of the HTC HD2", as I saw vids of the TG01 that looked lag-free too; until the HD2 comes out and I have a play, we won't be able to tell whether it's lag-free or not. As far as I can see, you are making your argument about lag on a phone that has not been released, which I think is a rubbish argument; someone could just as well say a Tegra phone could teleport you across the world (there is no proof).
Also, I'm not on the Tegra bandwagon, as I like Snapdragon just as much; I was going by what I had heard on the net. Maybe, like you said, information has been spun to make the Tegra chip look super powerful compared to all the other phone CPUs, which is not true; but till I see a phone with a Tegra chip in it, how would we know?

agitprop said:
Simply put:
As of July, 2009 or Oct 2009 for that matter:
Snapdragon mobile phones = shipping.
Tegra mobile phones = vapourware. (not even any firm rumours)
By far the most important point.
Far more important than the MHz number which may or may not even indicate greater or lesser performance or battery life than a competitor with an entirely different architecture.

There is one piece of info that I haven't been able to find. Which one of the two has better performance when it comes to battery power usage?
Anyone?

Tegra is right on the ball.
Yes, the ARM11 CPU is theoretically 1/3 the speed of the Cortex, but don't forget there's an ARM7 offloading network traffic, 2D acceleration separate from the CPU and GPU, dedicated HD encoding hardware (decoding is common to both) and sound acceleration. Many of the processing bottlenecks in a mobile device are successfully offloaded in the Tegra, ultimately giving the ARM11 fewer tasks to cope with in the first place, and no need for thread balancing, which, fingers crossed, leads to more stable OS performance. Another thing to note is that Nvidia's official specs say ARM11 MPCore, which means that various Tegra chips could have anywhere from 1 to 4 ARM11 cores (the Tegra chipset used in the Microsoft Zune player was a dual-core ARM11).
The main point, though, I think, is power. You don't need a massive CPU in a mobile device; what you need is battery life, and although we haven't received final figures, the Tegra is looking far more impressive than anything else on the market. If my iPhone 3GS is anything to go by, even 2x the battery life would be welcome; this thing dies in no time at all, be it browsing the web, playing video or music, and reviews show Snapdragon phones to be even worse. The Nvidia battery specs in earlier posts are mostly accurate, but based on a netbook battery. The Zune HD running the Tegra gets 33 hours of audio and 8.5 hours of video from only a 660mAh battery; that is half the size of the battery in the iPhone 3GS and HTC HD2, for example.
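As a sanity check on those Zune HD numbers, the implied whole-device average draw can be computed from the battery capacity. The 3.7V nominal Li-ion voltage is my assumption, not a quoted spec:

```python
# Implied whole-device average draw from the Zune HD figures quoted
# above (660 mAh battery, 33 h audio, 8.5 h video). The 3.7 V nominal
# Li-ion voltage is an assumption, not a quoted spec.

def avg_draw_w(capacity_mah, nominal_v, runtime_h):
    watt_hours = capacity_mah / 1000 * nominal_v
    return watt_hours / runtime_h

audio_w = avg_draw_w(660, 3.7, 33.0)  # ~0.074 W average
video_w = avg_draw_w(660, 3.7, 8.5)   # ~0.29 W average

print(f"audio playback: {audio_w * 1000:.0f} mW")
print(f"video playback: {video_w * 1000:.0f} mW")
```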
The Tegra GPU is a powerful CUDA-based design and will allow for GPGPU acceleration of the only major computationally intensive task that phones are likely to do in the future: image processing for augmented reality.
They've provided on-chip support for most modern input/output devices.
Nvidia have covered all the bases; I'm seriously looking forward to Tegra phones.

Yes, but as I've learned (the hard way) from my Touch Pro, all the features in the world mean nothing if they're not used. The Touch Pro was supposed to have video acceleration and double the speed of my old TyTN. Where are those? Nowhere. Why? Some say "there aren't any drivers for the GPU"; others say that the TP's processor may be 500MHz, but its design is worse than the one in my older TyTN...
I don't care. As a customer, user and buyer, I know that my older phone was faster than my new one. If in the near future we have a 1GHz Snapdragon phone that does everything on its CPU and a Tegra phone that balances CPU-GPU-physics-whatever across different parts of its design, history says that the Snapdragon will be the better choice. You see, WM Solitaire, Word Mobile, RSS readers, Twitter clients and all existing software, at least for WM, are written to run on a single processor. I've yet to see a good program/game that actually takes advantage of any device's GPU, and that won't happen while the market is split, since a developer would need to target a specific device (meaning less profit) or simply forgo any acceleration and create something "that runs anywhere". We can thank Microsoft for going the Linux way and letting device makers do whatever they want, however they want, without some standard way of using different hardware parts (like, say, DirectX in Windows).

Very interesting information.
Battery life is really important; at the moment that's the only advantage of the Tegra vs. the Snapdragon.
I am really keen to know whether Manila runs as fast with the lower CPU power of the Tegra chip as it does on the Leo.
There must be some driver or software problem, I would say, because there's no PDA out with the Tegra.
Also no announcement... Otherwise it could be a strategy from HTC so that they don't get a problem selling the Leo and the upcoming Android device.
So we must wait...

I think you guys should see PGR on the Zune HD.
Stunning graphics.

For me, processor speed comes second place to functionality. I have recently started to use remote desktop on my HD, but wish it had a TV out like my Touch Pro.
I was thinking about upgrading to a Leo, but that has no TV out either.
Discussing advanced graphics for a Snapdragon is not helpful if you are restricted to 4 inches.
Hopefully HTC will put HDMI or at least video out on all future devices. The resolution of the devices is up to it, so why not.

Related

Which Processor is faster & better

"Intel Bulverde 520 MHz"
The one in the Universal
OR
"Qualcomm MSM7201A 528 Mhz"
in the new HTC HD unit
I feel they are the same. Am I right?
Qualcomm is much better.
It's similar to the difference between a 2.5GHz Pentium 4 and a 2.5GHz Core 2 Solo.
I don't think a Core 2 Solo and a Pentium 4 with HT differ that much.
l2tp said:
I don't think a Core 2 Solo and a Pentium 4 with HT differ that much.
Google "instructions per second" and you'll understand.
The NetBurst architecture of the P4 is one of the worst examples of it in history; a failure by engineering standards.
The PXA270 processor in the Universal actually runs at 624MHz and is underclocked. The HTC X7500 uses the same CPU running at 624MHz. It is clearly the better CPU.
genetik_freak said:
The PXA270 Processor in the Universal actually runs at 624mhz and is underclocked. The HTC X7500 uses the same CPU running at 624mhz. It is clearly the better CPU.
Very, very wrong.
I wouldn't say that the two Intel processors are exactly the same, with one just being underclocked via software. Notice how Intel puts out multiple Pentiums of a given generation at different speeds? Would you venture to say that all those chips are the same too?
Also, clock speed is a poor metric when comparing chips from different companies. PDAdb.net says that the Intel chip has an ARMv5TE instruction set and the Qualcomm chip an ARMv6 instruction set. The Intel is a generation behind.
Comparing clock rates: Wikipedia says (main article: Megahertz myth):
The clock rate of a computer is only useful for providing comparisons between computer chips in the same processor family. An IBM PC with an Intel 486 CPU running at 50 MHz will be about twice as fast as one with the same CPU, memory and display running at 25 MHz, while the same will not be true for MIPS R4000 running at the same clock rate as the two are different processors with different functionality. Furthermore, there are many other factors to consider when comparing the speeds of entire computers, like the clock rate of the computer's front side bus (FSB), the clock rate of the RAM, the width in bits of the CPU's bus and the amount of Level 1, Level 2 and Level 3 cache.
Clock rates should not be used when comparing different computers or different processor families. Rather, some software benchmark should be used. Clock rates can be very misleading since the amount of work different computer chips can do in one cycle varies. For example, RISC CPUs tend to have simpler instructions than CISC CPUs (but higher clock rates), and superscalar processors can execute more than one instruction per cycle (on average), yet it is not uncommon for them to do "less" in a clock cycle. In addition, subscalar CPUs or use of parallelism can also affect the quality of the computer regardless of clock rate.
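The Wikipedia passage boils down to: useful throughput ≈ instructions-per-cycle × clock. A minimal sketch, with illustrative IPC values (not measured figures for any real chip):

```python
# The megahertz myth in one formula: throughput ~ IPC x clock. A chip
# with a long, inefficient pipeline (low IPC) can lose to a same-clock
# chip with higher IPC. The IPC values below are illustrative only.

def throughput_mips(ipc, clock_mhz):
    """Millions of instructions retired per second (idealized)."""
    return ipc * clock_mhz

netburst_like = throughput_mips(0.7, 2500)  # high clock, low IPC
core_like = throughput_mips(1.5, 2500)      # same clock, higher IPC

print(f"{netburst_like:.0f} vs {core_like:.0f} MIPS at the same 2.5 GHz")
```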
Sonus, you are correct about the MHz comparison. However, the PXA270 in the Universal can be safely "overclocked" to 624MHz because the chip is designed to max out at that speed.
I would still like to see some benchmark tests between the 624MHz PXA270 and the 528MHz Qualcomm MSM7201A.
Generations aside, I can't see the Qualcomm chip outperforming the Intel chip by much, if at all. Also, it should be noted that the PXA270 can be scaled; not sure if that is true for the MSM7201A.
The other catch phrase is "performance per watt". I bet the MSM7201A has a huge advantage over the PXA27x there, mainly due to the newer manufacturing process.
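"Performance per watt" can be made concrete by dividing a throughput estimate by typical active power. Every number below is a rough guess for illustration, not a datasheet value for either chip:

```python
# "Performance per watt" made concrete: a throughput estimate divided
# by typical active power. All figures below are rough guesses for
# illustration, not datasheet values for either chip.

def perf_per_watt(dmips_per_mhz, clock_mhz, active_power_w):
    return dmips_per_mhz * clock_mhz / active_power_w

pxa270 = perf_per_watt(1.1, 624, 0.925)   # older-process part (guessed)
msm7201a = perf_per_watt(1.2, 528, 0.45)  # newer-process part (guessed)

print(f"PXA270:   {pxa270:.0f} DMIPS/W")
print(f"MSM7201A: {msm7201a:.0f} DMIPS/W")  # roughly double, on these guesses
```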
That may be true, wuzy, but the fact that the PXA270 is almost 5 years old and still being used in new devices should tell you plenty about its capabilities and performance.
Not really... It does, however, tell a lot about the stinginess of device manufacturers.
As for the overclocking: not every Universal can run at 624MHz without crashing, because the CPUs go through a selection process after manufacturing, and there is simply no reason to use the best ones in a device that doesn't need them running at full speed.
The crashes are usually the result of the type of program used to overclock, and also the ROM. For the most part, people have found that 624MHz is pretty stable, including myself. Some have even pushed it beyond that speed, but that's another story...
Also take this into consideration:
The Universal has been on the market since 2005, almost 4 years now. By industry standards it should be obsolete. Why is it not? Simply because it is quite inexpensive compared to newer devices with similar (sometimes fewer) features. When it comes to performance vs. price vs. features, you just cannot beat the value of the Universal and its blisteringly fast 520/624MHz PXA270 CPU! The PXA270's performance is only rivaled by its bigger brother, the 800MHz PXA320, which has already made its way into some newer devices.
genetik_freak said:
That may be true wuzy, but considering the PXA270 is almost 5 years old and still being used in new devices should tell you plenty about its capabilities and performance.
Try out a Diamond/Touch Pro with Opera 9.5 the next time you see one and notice the speed difference.
On the MSM7201A, compared to our PXA27x, it's a lot smoother.
The lack of drivers for the MSM7200 in a lot of devices released last year tainted our perception of the new generation of chips, I think.
Touch HD vs. ASUS Galaxy 7 at the end of the year... hmmm
I think you're missing the point, wuzy.
I know there are newer devices out now that can deliver slightly better performance in some areas than the Universal, but considering how old our device is, that is to be expected. All I'm saying is that given the age of the Universal compared to what's out there now, it has held up well. Furthermore, with all the new cooked ROMs popping up, you can expect the Uni to live even longer!
Take a look at H.264 decoding and other really high-performance tasks, and the PXA270 loses so badly against the PXA320 that it is not even funny anymore...
Why does the Uni keep up with most software? Because most programs are written for the old ARMv4 instruction set, thus wasting a lot of CPU cycles on newer processors that have already moved on. Apart from that, the average application simply does not need that much CPU power to begin with.
The Uni held out well in a market that is very slow to adopt new technologies in the first place. The Axim x50v had a dedicated graphics chip at the end of 2004; how many applications make use of that today? Only some games (ports, emulators) and media players. For those alone, though, the Axim has held out better than the Uni, as it is still one of the best-performing PPCs on the market.
Our little one will be around for quite a while, but it is far, far away from what today's devices can offer, and it shows if you run anything beyond mail and office apps on it.
Which Processor is faster & better
I gather from your input above that the "Qualcomm MSM7201A 528 MHz" has higher performance, clock rate, instructions per second, and performance per watt compared to the "Intel Bulverde 520 MHz", at about 2:1. Am I right?
Another Question:
What is the highest speed Processor available for the PDA industry today?
Best Regards.
IMHO the ARM Cortex processors are very far up the ladder when it comes to performance and energy consumption. The Pandora makers claim 10 hours of runtime for their device. Together with its media chip, this little bugger is capable of decoding 720p HD video streams (take a look at the Archos 5).
I am not sure if the MSM7201A chipset's CPU alone reaches twice the performance of the Uni's, but you will see a huge difference in apps that support and need the latest CPU architecture (media players & games). If (one way or another) the 3D capabilities can be put to use, you will probably see more than a 2:1 performance boost.
The sad truth is the Universal is one of the slowest VGA devices around, especially considering the lack of a graphics accelerator (which was even present in prototypes).
Too bad the dedicated 3D chip didn't make it into the final design. But it's still better than having a 3D accelerator without drivers! I have a Sharp EM-ONE here with a GoForce 5500 that could theoretically accelerate many video formats. The sad truth is that because there are no drivers, no media player can make use of the chip. Even worse: because the graphics chip still controls the display, video is even slower, since the optimized XScale drivers can't be used. It's like Sharp and Nvidia wanted to punish users twice. So, as bad as it is, the Uni is not the worst device out there!
x86
I wonder why there are no x86 CPUs in mobile devices yet. Maybe because of the high power consumption? An x86 CPU running at 528MHz would be more powerful than an ARM CPU; furthermore, the device could run an x86 OS like XP Embedded, with more features and capabilities...
x86-based systems are still too power-hungry and too complicated to be used in such a small device (sounds weird when talking about the HTC Universal, doesn't it).

Dual Core Snapdragon BLOWS NVidia Dual Core tegra 2 OUT OF THE WATER

The new dual-core Snapdragon makes Nvidia's Tegra 2 look like a single-core CPU!
And it's not even out of development yet, so this review is of pre-release hardware (Qualcomm's Mobile Development Platform (MDP)), which means it's not even optimized yet!
This is massively impressive!
some highlights
Qualcomm Mobile Development Platform (MDP)
SoC 1.5 GHz 45nm MSM8660
CPU Dual Core Snapdragon
GPU Adreno 220
RAM (?) LPDDR2
NAND 8 GB integrated, microSD slot
Cameras 13 MP Rear Facing with Autofocus and LED Flash, Front Facing (? MP)
Display 3.8" WVGA LCD-TFT with Capacitive Touch
Battery 3.3 Whr removable
OS Android 2.3.2 (Gingerbread)
...............................................................................................
the LG 3D
LG Optimus 3D is also a dual core cpu
Dual-core 1GHz ARM Cortex-A9 processor, PowerVR SGX540 GPU, TI OMAP4430 chipset
................................................................................................
the LG 2x
LG Optimus 2X is a Dual core cpu
Dual-core 1GHz ARM Cortex-A9 processor, ULP GeForce GPU, Tegra 2 chipset
................................................................................................
the Nexus S
Nexus S is a single core cpu
(single core) 1 GHz ARM Cortex-A8 processor, PowerVR SGX540
................................................................................................
GLBenchmark 2.0 Egypt
38 Qualcomm MDP
31 LG 3D
25 LG 2x
21 Nexus S
GLBenchmark 2.0 Pro
94 Qualcomm MDP
55 LG 3D
51 LG 2x
42 Nexus S
Quake 3 FPS (Frames per second)
80 Qualcomm MDP
50 LG 2x
52 Nexus S
N/A LG 3D
Quadrant / 3D / 2D
2851 / 1026 / 329 Qualcomm MDP
2670 / 1196 / 306 LG 2x
1636 / 588 / 309 Nexus S
N/A LG 3D
NOTE: take the Quadrant scores with a grain of salt.
Here's what Anand has to say about it:
"What all Quadrant is putting emphasis on with its 2D and 3D subtests is something of a mystery to me. There isn't a whole lot of documentation, but again it's become something of a standard. The 1.5 GHz MSM8660 leads in overall score and the 2D subtest, but trails Tegra 2 in the 3D subtest. If you notice the difference between Hummingbird (SGX540) from 2.1 to 2.3, you can see how Quadrant's strange 3D behavior on Android 2.3 seems to continually negatively impact performance. I saw the same odd missing texture and erratic performance back when I tested the Nexus S as I did on the MDP. Things like this and lack of updates are precisely why we need even better testing tools to effectively gauge performance"
Source: Anandtech.com
http://www.anandtech.com/show/4243/...ormance-1-5-ghz-msm8660-adreno-220-benchmarks
Hope you enjoyed this.
Ric H. (a1yet)
PS: don't rule out Nvidia yet; their dual core may have gotten blown out of the water, BUT
will their quad-core CPU and 12-core GPU be better?
NVIDIA's Project Kal-El: Quad-Core A9s Coming to Smartphones/Tablets This Year
Link:
http://www.anandtech.com/show/4181/...re-a9s-coming-to-smartphonestablets-this-year
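For readability, here are the GLBenchmark 2.0 Egypt scores from above, normalized to the Nexus S. The numbers are copied as posted; only the arithmetic is new:

```python
# GLBenchmark 2.0 Egypt scores from the post above, normalized to the
# Nexus S baseline. Scores copied as posted; only the arithmetic is new.

egypt_fps = {
    "Qualcomm MDP": 38,
    "LG Optimus 3D": 31,
    "LG Optimus 2X": 25,
    "Nexus S": 21,
}

baseline = egypt_fps["Nexus S"]
for device, fps in sorted(egypt_fps.items(), key=lambda kv: -kv[1]):
    print(f"{device:14s} {fps:3d} fps  ({fps / baseline:.2f}x Nexus S)")
```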
a1yet said:
PS: don't rule out Nvidia yet their dual core may have gotten blown out of the water BUT
will their 12 core cpu be better ?
If you're one of those benchmark nut-riders, at least take some time to understand what it is you're reading. It's a 12-core GPU, a big difference from a 12-core CPU, which doesn't even exist in desktop computers yet (unless you're talking about multi-socket server-class mobos), let alone in a mobile phone.
And the second point, which 99% of the people who lust after benchmarks don't have a damn clue about: screen size and resolution. But I'm sure you don't care to know much about it, OP.
I don't see the point of benchmarks if they don't tell real-world stories.
Not sure if the information is accurate; however, it will be nice to have competition so there are always better CPUs coming out.
GREAT cause the ipad is killing tegra 2 already
I think mobile processors are similar to desktop processors: there's just too much going on to benchmark accurately. My OG Droid with a 1.25 GHz overclock doesn't even come close to touching my HTC Thunderbolt on stock, yet technically it's 250 MHz faster, right? The HTC's updated 1 GHz processor is faster than other 1 GHz processors, yet rated at 1 GHz. I don't see logic in all the hype.
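A back-of-the-envelope way to see why the faster-clocked chip can lose: rough throughput scales with instructions-per-cycle times clock, and newer cores retire more per cycle. The IPC figures below are purely hypothetical, chosen only to illustrate the point.

```python
# Clock speed alone doesn't determine performance: throughput ~ IPC * clock.
# The IPC values here are hypothetical illustrations, not measured figures.
def throughput_mips(ipc, clock_mhz):
    """Very rough millions-of-instructions-per-second estimate."""
    return ipc * clock_mhz

older_core = throughput_mips(ipc=1.1, clock_mhz=1250)  # older core, overclocked
newer_core = throughput_mips(ipc=2.0, clock_mhz=1000)  # newer core, stock clock
print(older_core, newer_core)  # the "slower-clocked" newer core comes out ahead
```

And as noted above, memory and storage I/O can cap real-world performance long before the CPU does.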
lude219 said:
And the second point which 99% of the people who tend to lust at the benchmarks don't have a damn clue about, screen size and resolution. But I'm sure you don't care to know much about it, OP.
Click to expand...
Click to collapse
WELL my "PS:" was added in haste and I made a typo. My whole post was about "GRAPHICS" performance, so the typo did not impact the heart of my post!
sad day for you
because with your 2 brain cells u obviously have NO CLUE what u are talking about. "Screen SIZE" has no bearing on performance! None, zero, zip, zilch!
talk to me about screen size next time I'm playing Angry Birds on my 52 inch HDTV!
the only thing that has ANY bearing on performance IS "resolution"
so to explain it in a way that u can understand
the only impact screen size has is that it sometimes allows you (depending on how the manufacturers implement it) to have a higher ....
WAIT FOR IT ...........
WAIT FOR IT ...........
"Resolution"
WOW SAD Day for you !
Go bash the post of someone who can tolerate your ignorance, and leave mine alone
Sincerely
Ric H. (a1yet)
ngarcesp said:
GREAT cause the ipad is killing tegra 2 already
Click to expand...
Click to collapse
And the iPad 2's processor is made by Samsung
Sent from HTC EVO
a1yet said:
WELL my "PS:" was added in hast and I Made a typo. My whole post was about "GRAPHICS" performance so the typo did not impact the heart of my post!
sad day for you
because with your 2 brain cells u obviously have NO CLUE what u are talking about. "Screen SIZE" has no bearing on performance ! none, zero, zip, zilch!
talk to me about screen size next time I'm playing Angry Birds on my 52 inch HDTV!
the only thing that has ANY bearing on performance IS "resolution"
so to explain it in a way that u can understand
the only impact screen size has is it sometimes allows you (Depending on how the manufactures implement it) to have a higher ....
WAIT FOR IT ...........
WAIT FOR IT ...........
"Resolution"
WOW SAD Day for you !
Go bash someones post, who can tolerate your Ignorance! and leave mine alone
Sincerely
Ric H. (a1yet)
Click to expand...
Click to collapse
I like you pal! That's the spirit!
Forget the haters dude, there are many around!
r916 said:
and the ipad 2's processor is made by samsung
Sent from HTC EVO
Click to expand...
Click to collapse
I don't know about it being made by Samsung, but the CPU (the CPU itself, not the whole chip) is larger than the other CPUs, thus having more space for more transistors. That significantly boosts performance.

[Q] Most badass GPU and CPU in da world; Expert Knowledge please :)

I've been doing quite a bit of research on GPUs and CPUs in phones/tablets lately, and I have a few unanswered questions that I can't seem to find an answer for.
1: What's the best chipset available for mobile phones and tablets right now? This link cleared quite a bit up for me; it does a fairly in-depth comparison of both GPU and CPU performance between the Qualcomm S4, Tegra 3, OMAP 4470, and the Exynos 4212. And I don't want the "Well this is better because it has more jiggahertz". Shut up, that's not what I need. I need something more in-depth. If studies on individual GPU comparisons can be provided, please drop a link. I'd like to know these things very well.
2: Which individual GPU is currently the best? I realize the iPad 3 came out with a graphics chip that's supposedly superior to the Xbox's/PS3's. However, I take anything Apple says with a grain of salt; they're notorious for shooting flaming BS out of their rear. Based on the little bit of searching I've done, the Adreno GPUs seem to be ahead of their time. I previously thought the Mali-400 GPU in the Exynos chipset was one of the best, but apparently it's outdated. Again, links to tests/studies/comparisons would be appreciated.
3: What's the deal with the ARM chips? Are the A5s, A6s, A11s (and whatever other A chips are out there) some standard CPU developed by ARM and licensed out to all manufacturers to use in their chipsets?
4: What alternatives are there to the ARM CPUs? Most chipsets I research seem to be using a Cortex-A9 chip.
5: What's the difference between the A5, A6, A9, etc.? From what I've seen the higher numbers are the newer models, but that feels like a very shallow definition. If it is true, why does the newest iPad only use an A5X chip for its quad core rather than an A9 or something of the sort?
6: Is the chipset in the iPad really the fastest out there? Personally, I can't stand Apple products, let alone the rabid fanboys and the obnoxious advertisements they put out. I can recognize that they very often gloat about their products and exaggerate, like how they said the dual core in the iPhone 4S is the fastest out there, yet from what I've read the A5 is the worst-performing dual core out there. Is the GPU in the tablet really superior to the Xbox's? And is the processor really able to outdo the Tegra 3?
If you're able to answer any one of these, even exclusively, that would be appreciated. I just like knowledge
MultiLockOn said:
I've been doing quite a bit of research on GPU's and CPU's in phone's/tablets lately. And I have a few unanswered questions that I can't seem to find an answer for.
1: What's the best chipset available for mobile phones and tablets right now? This link cleared quite a bit up for me, it does a fairly indepth comparison for both GPU and CPU performance between the Qualcomm S4, Tegra 3, OMAP 4470, and the Exynos 4212. And I dont want the 'Well this is better because it has more jiggahertz". Shut up, that's not what I need. I need something more indepth. If studies on individual GPU comparison can be provided, please drop a link. I'd like to know these things very well.
2: What individual GPU is currently the best? I realize the Ipad3 came out with with a graphics chip that's supposedly superior to the Xbox/PS3's. However I take anything Apple says with a grain of salt, they're notorious for shooting flaming BS out of their rear. However based on the little bit of searching I've done, the Adreno GPU's seem to be ahead of their time. I previously thought the Mali 400 GPU in the Exynos chipset was one of the best, but apparently it's outdated. Again, links to tests/studies/comparisons would be appreciated.
3: What's the deal with the ARM chips? Are the A5's, A6's, A11's, (and whatever other A chips out there are), some standard CPU developed by ARM and licensed out to all manufacturers to use in their chipsets?
4: What alternatives are there to the ARM CPU's? Most chipsets I research seem to be using a Cortex A9 chip.
5: What's the difference between the A5, A6, A9, etc. From what I've seen the higher numbers are the newer models, but I feel like that's a very shallow definition. If that is true, why does the newest iPad only use an A5x chip for it's quad core rather than an A9 or something of the sort.
6: Is the chipset in the iPad really the fastest out there? Personally, I can't really stand apple products; let alone the rabid fanboys and the obnoxious advertisements they put out. I can recognize that they very often gloat about their products and overexaggerate; like how they said the dual core in the iPhone 4s is the fastest out there, yet from what I've read the A5 is the worst performing dual core out there. Is the GPU in the tablet really superior to the Xbox? And is the processor really able to outdo the Tegra 3?
If you're able to answer any one of these, even exclusively, that would be appreciated. I just like knowledge
Click to expand...
Click to collapse
1. Dunno right now, it's always changing. I hear the new Qualcomm processors with the new Adreno GPU are supposed to be the ****, but they're not out yet so who knows. The iPad 3 hasn't had any real-world tests done yet; we need to wait for release. It is basically the same A5 chip as the iPad 2 but with the PS Vita's GPU thrown in.
2. *sigh* The iPad 3 is not more powerful than an Xbox 360. It is better in I believe one aspect (more memory), but this has very little impact on performance/graphics quality. This is Apple shooting wads of **** out its arse, or whoever made the claim. It's actually using the same GPU found in the PS Vita, which we all know is not as powerful as a PS3/Xbox 360. However, the PS Vita is also using a quad-core CPU, whereas the iPad 3 is using the same dual-core A5 as the iPad 2, so technically the PS Vita is superior. You also have to consider how many more pixels the GPU has to power on the iPad 3's display. While high res is nice, it takes more power to render.
3. ARM creates a base chip for companies to slap their own GPUs and names on. The naming structure is pretty self-explanatory.
4. All CPUs currently in tablets/cellphones are a variant of the ARM design. A Cortex-A9 is still an ARM chip. This will soon change when Intel releases their tablet/phone chips.
5. You're right, higher numbers do mean newer models. I don't know all the exact details, but with the newer ARM series you get higher and/or more efficient clocks, generally some battery savings, and in some series support for more cores. Apple's labelling of their chips has nothing to do with ARM's; it's their own naming scheme. The A5X is just what Apple calls their version of the ARM processor.
6. I believe atm the iPad 3 has the fastest chipset in a tablet... for now. It won't take long for it to be overtaken by other companies; there's so much in the works right now.
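The resolution cost mentioned in point 2 is easy to quantify from the published panel resolutions (1024x768 for the iPad 2, 2048x1536 for the iPad 3):

```python
# The retina display quadruples the pixels the GPU must fill every frame.
ipad2_pixels = 1024 * 768    # 786,432 pixels
ipad3_pixels = 2048 * 1536   # 3,145,728 pixels
print(ipad3_pixels / ipad2_pixels)  # 4.0

# At 60 fps, that's ~189 million pixels per second just to repaint the screen:
print(ipad3_pixels * 60)  # 188743680
```

So even a much faster GPU can end up with less headroom per pixel than its predecessor had.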
speedyink said:
1. Dunno right now, it's always changing. I hear the new Qualcomm processors with the new Andreno gpu are supposed to be the ****, but it's not out yet so who knows. The iPad 3 currently has not had any real world tests done yet, we need to wait for release. It is basically the same A5 chip as the iPad 2 but with the PSVita's gpu thrown in.
2. *sigh* The iPad 3 gpu is not more powerful than an Xbox 360. It is better in I believe one aspect (more memory), but this has very little impact on performance/graphics quality. This is Apple shooting wads of **** out it's arse, or whoever made the claim. It's actually using the same GPU found in the PSVita, which we all know is not as powerful as a PS3/Xbox360. However, the PSVita is also using a quad core cpu, whereas the iPad 3 is using the same dual core A5 as the iPad 2, so technically the PSVita is superior.
3. ARM creates a base chip for companies to slap their own GPU's and name on. The naming structure is pretty self explanatory.
4. All CPU's currently in tablets/cellphones are a variant of the ARM. A Cortex A9 is still an ARM chip. This will soon change when Intel releases their tablet/phone chips.
5. You're right, higher numbers do mean newer modeling. I don't know all the exacts, but with the newer ARM series you get higher and/or more efficient clocks, generally some battery savings, and in some series support for more cores. Apple's labeling of their chips has nothing to do with ARM's, it's their own naming scheme. The A5x is just what Apple calls their version of the ARM processor.
6. I believe atm the iPad 3 has the fastest chipset in a tablet..for now. It won't take long for it to be overtaken by other companies, there's so much in the works right now.
Click to expand...
Click to collapse
Thanks for the reply. It seems weird to me that Apple would name a CPU something so similar to one that already exists, the A5X next to ARM's A5.
MultiLockOn said:
Thanks for the reply. It seems weird to me that Apple would rename a CPU to something as similar to one that would already exist, A5x as to A5.
Click to expand...
Click to collapse
Because Apple is the type of company to step on someone's feet like that, and then sue them later for copyright infringement. Damn the confusion; Apple starts with A, so will their processors.
speedyink said:
Because Apple is the type of company to step on someones feet like that, and then sue them later on for copyright infringement. Damn the confusion, Apple starts with A, so will their processors.
Click to expand...
Click to collapse
Yeah, Apple simply buys a technology, relabels it, patents it, and trolls others, so for comparison Apple doesn't count. Also, these handheld chipsets can't be compared with consoles; consoles have more processing power, like more RAM bandwidth and polygon throughput.
Anyway, based on my experience, the Mali-400 Exynos has buttery-smooth performance for both UI and 3D graphics. I've tried both the Gingerbread Galaxy Note and my SGS2.
On the other hand, Google did a great job with the TI OMAP for its Galaxy Nexus: pure HW-accelerated 4.0.3, with very few glitches, and I believe those are software issues.
IMO if you wanna buy a fast and smooth device, follow the current Nexus spec (or at least something similar), like the Galaxy Nexus, Motorola RAZR, etc. I've seen the Tegra 3 4+1 Transformer Prime but never gone hands-on with it; from what I've seen, UI and 3D performance are stunning. The 1 extra core's advantage is a low-power mode for light processing and standby. Today's hardware is fast enough; drivers and OS optimisation are the most important things if you want everything to run smoothly.
cmiiw, sorry for bad English
lesp4ul said:
yeah, apple just simply buy a technology and re-label them, make patent and troll others. so for comparison, apple doesn't count. Also these handheld chipset can't be compared with consoles, consoles have more proccessing power like more RAM bandwidth and polygons.
Anyway.. based on my experience, mali400 exynos has a butterly smooth performance for both UI and 3D graphics. I've tried both Gingerbread GNote and my SGS2.
on the other hand, Google did a great job with TI OMAP for it's Galaxy Nexus, pure HW accelerated 4.0.3.. with very little glitch, but I believe it's software issue.
IMO if you wanna buy a fast and smooth device, follow the current Nexus spec (at least similar) like GNexus, Motorola RAZR, etc. I've seen Tegra 3 4+1 Transformer Prime but never hands-on it. as far as i seen, UI and 3D performance are stunning. 1 extra core advantage is for low power mode when doing light proccessing and standby mode. Today hardwares are fast enough, drivers and OS optimisation are very important thing if you want everything run smoothly.
cmiiw, sorry for bad english
Click to expand...
Click to collapse
I know what you mean. I'm extremely happy with my Galaxy S2; I can't recall it ever lagging on me in any way whatsoever. I'm not sure what makes the Droid RAZR and Galaxy Nexus comparable to the S2. From what I've read, OMAP processors tend to lag and consume battery, and the Mali-400 is better than what either of those phones has. I'd say it's ICS, but the RAZR still runs Gingerbread.
I was hoping for some more attention in here :/
I agree, OMAPs are battery-hungry beasts. Like my previous Optimus Black, man... I only got 12-14 hours on EDGE (1 GHz UV, SmartassV2, also a ****ty LG kernel haha). Same issue as my friend's Galaxy SL. I dunno if the newer SoCs behave better.
Sent from my Nokia 6510 using PaperPlane™

TEGRA 4 - 1st possible GLBenchmark!!!!!!!! - READ ON

Who has been excited by the Tegra 4 rumours? Last night's Nvidia CES announcement was good, but what we really want are cold, hard BENCHMARKS.
I found an interesting mention of the Tegra T114 SoC on a Linux kernel site, which I'd never heard of before. I got really interested when it stated that the SoC is based on the ARM Cortex-A15 MP, so it must be Tegra 4. I checked the background of the person who posted the kernel patch; he is a senior Nvidia kernel engineer based in Finland.
https://lkml.org/lkml/2012/12/20/99
"This patchset adds initial support for the NVIDIA's new Tegra 114
SoC (T114) based on the ARM Cortex-A15 MP. It has the minimal support
to allow the kernel to boot up into shell console. This can be used as
a basis for adding other device drivers for this SoC. Currently there
are 2 evaluation boards available, "Dalmore" and "Pluto"."
On the off chance, I decided to search www.glbenchmark.com for the 2 board names, Dalmore (a tasty whisky!) and Pluto (planet, Greek god and cartoon dog!). Pluto returned nothing, but Dalmore returned a device called 'Dalmore Dalmore' that was posted on 3rd January 2013. The OP had already deleted the results, but thanks to Google Cache I found them.
RESULTS
GL_VENDOR NVIDIA Corporation
GL_VERSION OpenGL ES 2.0 17.01235
GL_RENDERER NVIDIA Tegra
From the system spec, it runs Android 4.2.1, with a min frequency of 51 MHz and a max of 1836 MHz.
Nvidia DALMORE
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p) : 32.6 fps
iPad 4
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p): 49.6 fps
CONCLUSION
Anandtech has posted that Tegra 4 doesn't use unified shaders, so it's not based on Kepler. I reckon that if Nvidia had a brand-new GPU they would have shouted about it at CES; the results I've found indicate that Tegra 4 is between 1 and 3 times faster than Tegra 3.
BUT this is not 100% guaranteed to be a Tegra 4 system, although the evidence strongly suggests it is a T4 development board. If this is correct, we have to figure that it is running beta drivers; the Nexus 10 is ~10% faster than the Arndale dev board with the same Exynos 5250 SoC. Even if Tegra 4 gets better drivers, it seems like the SGX 554MP4 in the A6X is still the faster GPU, with Tegra 4 and Mali-T604 almost equal in 2nd. Nvidia has said that T4 is faster than the A6X, but the devil is in the detail: in CPU benchmarks I can see that being true, but not for graphics.
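Putting the two offscreen numbers above side by side (same test, same 1080p offscreen resolution):

```python
# Gap between the cached Dalmore result and the iPad 4 in GLBenchmark 2.5
# Egypt HD 1080p offscreen, using the fps figures quoted above.
dalmore_fps = 32.6
ipad4_fps = 49.6
print(round(ipad4_fps / dalmore_fps, 2))  # 1.52, i.e. the iPad 4 is ~52% faster here
```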
UPDATE - Just to add to the feeling that this is legit, the GLBenchmark System section lists "android.os.Build.USER" as buildbrain. According to an Nvidia job posting, "Buildbrain is a mission-critical, multi-tier distributed computing system that performs mobile builds and automated tests each day, enabling NVIDIA's high performance development teams across the globe to develop and deliver NVIDIA's mobile product line".
http://jobsearch.naukri.com/job-lis...INEER-Nvidia-Corporation--2-to-4-130812500024
I posted the webcache links to the GLBenchmark pages below; if they disappear from cache, I've saved a copy of the webpages, which I can upload. Enjoy!
GL BENCHMARK - High Level
http://webcache.googleusercontent.c...p?D=Dalmore+Dalmore+&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - Low Level
http://webcache.googleusercontent.c...e&testgroup=lowlevel&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - GL CONFIG
http://webcache.googleusercontent.c...Dalmore&testgroup=gl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - EGL CONFIG
http://webcache.googleusercontent.c...almore&testgroup=egl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - SYSTEM
http://webcache.googleusercontent.c...ore&testgroup=system&cd=1&hl=en&ct=clnk&gl=uk
OFFSCREEN RESULTS
http://webcache.googleusercontent.c...enchmark.com+dalmore&cd=4&hl=en&ct=clnk&gl=uk
http://www.anandtech.com/show/6550/...00-5th-core-is-a15-28nm-hpm-ue-category-3-lte
Is there any GPU that could outperform the iPad 4's before the iPad 5 comes out? The Adreno 320 and Mali-T604, and now Tegra 4, aren't near it. Qualcomm won't release anything till Q4 I guess, and Tegra 4 has already been announced, so the only thing left is the Mali-T658 coming with the Exynos 5450 (doubtful when that would release, and I'm not sure it will be better).
Looks like Apple will hold the crown in the future too.
i9100g user said:
Is there any Gpu that could outperform iPad4 before iPad5 comes out? adreno 320, t Mali 604 now tegra 4 aren't near it. Qualcomm won't release anything till q4 I guess, and tegra 4 has released too only thing that is left is I guess is t Mali 658 coming with exynos 5450 (doubtfully when it would release, not sure it will be better )
Looks like apple will hold the crown in future too .
Click to expand...
Click to collapse
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC. It showed that the CPU and GPU each had a TDP of 4W, making a theoretical SoC TDP of 8W. However, when they stressed the GPU by running a game and ran a CPU benchmark in the background, the SoC quickly went up to 8W, and the CPU was throttled from 1.7 GHz to just 800 MHz as the system tried to keep everything at 4W or below. This explains why the Nexus 10 didn't benchmark as well as we wished.
Back to the 5450, which should beat the A6X. Trouble is, it has double the CPU & GPU cores of the 5250 and is clocked higher. Even on a more advanced 28nm process, which will lower power consumption, I feel that system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Well, basically money. For a start, Apple fans will pay more for their devices, so Apple can afford to design a big SoC and big batteries that may not be profitable for other companies. Tegra 4 is listed as an 80mm2 chip, the iPhone 5's SoC is 96mm2, and the A6X is 123mm2; Apple can pack in more transistors and reap the GPU performance lead. Also, their chosen graphics supplier, Imagination Technologies, has excellent products; PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip design team, and the benefit has been that their Swift core is almost as powerful as an ARM A15 but seems less power hungry; anyway, Apple seems happy running slower CPUs compared to Android. Until Android or WP8 or somebody can achieve Apple's margins, Apple will be able to 'buy' their way to GPU domination. As an Android fan it makes me sad:crying:
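The throttling behaviour Anandtech observed can be sketched as a governor that picks the highest CPU clock fitting under a shared power cap. All wattages and the linear power model below are illustrative assumptions, not measured values.

```python
# Illustrative sketch of a shared-power-cap governor, mimicking the Exynos 5250
# behaviour described above. The power model and figures are assumptions only.
POWER_CAP_W = 4.0

def cpu_power_w(clock_mhz):
    # Crude linear model: roughly 3.1 W at 1.7 GHz (illustrative, not measured).
    return 1.8 * clock_mhz / 1000.0

def throttled_clock(gpu_power_w, steps=(1700, 1400, 1100, 800)):
    """Highest CPU clock (MHz) whose power fits under the cap, floor at 800."""
    for clock in steps:
        if gpu_power_w + cpu_power_w(clock) <= POWER_CAP_W:
            return clock
    return steps[-1]

print(throttled_clock(gpu_power_w=0.5))  # light GPU load: stays at 1700
print(throttled_clock(gpu_power_w=2.5))  # heavy GPU load: forced down to 800
```

The same logic explains why a bigger chip like the 5450 could look great on paper yet spend much of its time below peak clocks.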
32 fps is a no-go... let's hope it's not final
hamdir said:
32fps is no go...lets hope it's not final
Click to expand...
Click to collapse
It needs to improve, but it would be OK for a new Nexus 7
Still fast enough for me; I don't game a lot on my Nexus 7.
I know I'm talking about phones here... but the iPhone 5 GPU and Adreno 320 are very closely matched
Sent from my Nexus 4 using Tapatalk 2
italia0101 said:
I know I'm taking about phones here ... But the iPhone 5 GPU and adreno 320 are very closely matched
Sent from my Nexus 4 using Tapatalk 2
Click to expand...
Click to collapse
From what I remember the iPhone 5 and the new iPad wiped the floor with Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
adityak28 said:
From what I remember the iPhone 5 and the new iPad wiped the floor with Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
Click to expand...
Click to collapse
That isn't true. Check GLBenchmark: in the offscreen test the iPhone scored 91 and the Nexus 4 scored 88... that isn't wiping any floors
Sent from my Nexus 10 using Tapatalk HD
It's interesting how, even though Nvidia chips aren't the best, we still get the best game graphics because of superior optimization through Tegra Zone. Not even the A6X is as fully optimized.
Sent from my SAMSUNG-SGH-I727 using xda premium
ian1 said:
Its interesting how even though nvidia chips arent the best we still get the best game graphics because of superior optimization through tegra zone. Not even the a6x is as fully optimized.
Sent from my SAMSUNG-SGH-I727 using xda premium
Click to expand...
Click to collapse
What sort of 'optimisation' do you mean? Unoptimised games lag, and that's a big letdown. Tegra effects can also be used on other phones with Chainfire3D; I use it, and Tegra games run with effects and without lag even though I don't have a Tegra device.
With a Tegra device I would mostly be restricted to optimised games
The graphics performance of Nvidia SoCs has always been disappointing, sadly for the world's dominant GPU provider.
In the first, Tegra 2, the GPU was a little bit better in benchmarks than the SGX540 of the Galaxy S, but lacked NEON support.
In the second, Tegra 3, the GPU is nearly the same as the old Mali-400MP4 in the Galaxy S2/original Note.
And now it's better, but still nothing special, and it will soon be outperformed (by the Adreno 330 and next-gen Mali).
The strongest PowerVR GPUs are always the best, but sadly they are exclusive to Apple (the SGX543, and maybe the SGX554 too; only Sony, who has a cross-licensing deal with Apple, has it, in the PS Vita and the PS Vita only).
Tegra optimisation porting no longer works using Chainfire; that is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no. Games that use the T3 SDK for PhysX and other CPU graphics work cannot be forced to run on other devices. Equally, Chainfire is now outdated and no longer updated.
Now, about PowerVR: they are only better in a real multi-core configuration, which is only used by Apple and Sony's Vita, and it eats a large die area, i.e. actual multiple cores, each with its own subcores/shaders. If Tegra were used in a real multi-core configuration it would destroy them all.
Finally, this is really funny: all this doom and gloom because of an early, discarded development-board benchmark. I don't mean to take away from Turbo's thunder and his find, but truly it's ridiculous how much negativity it is collecting before any final-device benchmarks.
The Adreno 220 doubled in performance after the ICS update on the Sensation.
T3 doubled the speed of the T2 GPU with only 50% more shaders, so how on earth do you believe Tegra 4 will only manage 2x the T3 scores with 6x the shaders?!
Do you have any idea how miserably the PS3 performed in its early days? Even new desktop GeForces perform much worse than expected until the drivers are updated.
Enough with the FUD! It seems this board is full of it nowadays, with so little reasoning...
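The shader-count arithmetic behind that point, using Nvidia's published GPU core counts for the Tegra line (Tegra 2: 8, Tegra 3: 12, Tegra 4: 72):

```python
# Tegra 3 roughly doubled Tegra 2's GPU speed with only 50% more shader cores,
# so a 6x jump in cores (Tegra 3 -> Tegra 4) should buy far more than 2x,
# assuming per-core throughput doesn't regress.
t2_cores, t3_cores, t4_cores = 8, 12, 72
print(t3_cores / t2_cores)  # 1.5 -> 50% more shaders
print(t4_cores / t3_cores)  # 6.0 -> six times the shader count
```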
For goodness' sake, this isn't final hardware; anything could change. Hung2900 knows nothing; what he stated isn't true. Samsung has licensed PowerVR; it isn't exclusive to Apple, it's just that Samsung prefers ARM's GPU solution. Another thing I dislike is how everyone is comparing a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone to a Tegra 4 which will arrive in a phone. If you check the OP's link, the benchmark was posted on the 3rd of January with different results (18 fps, then 33 fps), so there is a chance it'll rival the iPad 4. I love Tegra, as Nvidia is pushing developers to make better games for Android, compared to the 'geeks' *cough* who prefer benchmark results. What's the point of having a powerful GPU if the OEM isn't pushing developers to create enhanced-effects games for their chip?
Hamdir is correct about the GPUs: if Tegra 3 was around 50-80% faster than Tegra 2 with just 4 more cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's 28nm (at around 80mm2, just a bit bigger than Tegra 3 and smaller than the A6's 90mm2), along with dual-channel memory versus single on Tegra 2/3.
Turbotab said:
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC, it showed that both the CPU and GPU had a TDP of 4W, making a theoretical SoC TDP of 8W. However when the GPU was being stressed by running a game, they ran a CPU benchmark in the background, the SoC quickly went up to 8W, but the CPU was quickly throttled from 1.7 GHz to just 800 Mhz as the system tried to keep everything at 4W or below, this explained why the Nexus 10 didn't benchmark as well as we wished.
Back to the 5450 which should beat the A6X, trouble is it has double the CPU & GPU cores of the 5250 and is clocked higher, even on a more advanced 28nm process, which will lower power consumption I feel that system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in smartphone is going to suffer even more.
So why does Apple have an advantage?, well basically money, for a start iSheep will pay more for their devices, so they afford to design a big SoC and big batteries that may not be profitable to other companies. Tegra 4 is listed as a 80mm2 chip, iPhone 5 is 96mm2 and A6X is 123mm2, Apple can pack more transistors and reap the GPU performance lead, also they chosen graphics supplier Imagination Technologies have excellent products, Power VR Rogue will only increase Apple's GPU lead. They now have their own chip design team, the benefit for them has been their Swift core is almost as powerful as ARM A15, but seems less power hungry, anyway Apple seems to be happy running slower CPUs compared to Android. Until an Android or WP8 or somebody can achieve Apple's margins they will be able to 'buy' their way to GPU domination, as an Android fan it makes me sad:crying:
Click to expand...
Click to collapse
Well said mate!
I can understand what you feel; nowadays Android players like Samsung and Nvidia are focusing more on CPU than GPU.
If they don't stop soon and keep using this strategy, they will fail.
The GPU will become the bottleneck and you will not be able to use the CPU to its full potential (at least when gaming).
I have a Galaxy S2 (Exynos 4 at 1.2 GHz, with the Mali GPU overclocked to 400 MHz).
In my analysis, most modern games like MC4 and NFS: Most Wanted aren't running at 60 FPS at all. That's because the GPU always has a 100% workload while the CPU is relaxing, putting out only 50-70% of its total workload.
I know some games aren't optimised for all Android devices, as opposed to Apple devices, but still, even high-end Android devices have slower GPUs (than the iPad 4's at least).
AFAIK, the Galaxy S IV is likely to pack a T604 with some tweaks instead of the mighty T658, which is still slower than the iPad 4's.
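A rule-of-thumb reading of those utilisation figures: if the GPU is pegged while the CPU idles, the GPU is the limiter. The helper below is a hypothetical sketch, with the 95% threshold chosen arbitrarily for illustration.

```python
# Crude bottleneck heuristic from CPU/GPU utilisation (values 0.0 - 1.0).
def bottleneck(cpu_util, gpu_util, threshold=0.95):
    if gpu_util >= threshold and cpu_util < threshold:
        return "GPU-bound"
    if cpu_util >= threshold and gpu_util < threshold:
        return "CPU-bound"
    return "balanced/other"

# The S2 case above: GPU pegged at 100%, CPU at roughly 60% of full workload.
print(bottleneck(cpu_util=0.6, gpu_util=1.0))  # GPU-bound
```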
Turbotab said:
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC, it showed that both the CPU and GPU had a TDP of 4W, making a theoretical SoC TDP of 8W. However when the GPU was being stressed by running a game, they ran a CPU benchmark in the background, the SoC quickly went up to 8W, but the CPU was quickly throttled from 1.7 GHz to just 800 Mhz as the system tried to keep everything at 4W or below, this explained why the Nexus 10 didn't benchmark as well as we wished.
Back to the 5450 which should beat the A6X, trouble is it has double the CPU & GPU cores of the 5250 and is clocked higher, even on a more advanced 28nm process, which will lower power consumption I feel that system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in smartphone is going to suffer even more.
So why does Apple have an advantage?, well basically money, for a start iSheep will pay more for their devices, so they afford to design a big SoC and big batteries that may not be profitable to other companies. Tegra 4 is listed as a 80mm2 chip, iPhone 5 is 96mm2 and A6X is 123mm2, Apple can pack more transistors and reap the GPU performance lead, also they chosen graphics supplier Imagination Technologies have excellent products, Power VR Rogue will only increase Apple's GPU lead. They now have their own chip design team, the benefit for them has been their Swift core is almost as powerful as ARM A15, but seems less power hungry, anyway Apple seems to be happy running slower CPUs compared to Android. Until an Android or WP8 or somebody can achieve Apple's margins they will be able to 'buy' their way to GPU domination, as an Android fan it makes me sad:crying:
Click to expand...
Click to collapse
Typical "iSheep" reference, unnecessary.
Why does Apple have the advantage? Maybe because their semiconductor team is talented and can tie the A6X and PowerVR GPU together efficiently. Nvidia should have focused more on the GPU in my opinion, as the CPU was already good enough. With these tablets pushing in excess of 250 ppi, the graphics processor will play a huge role. They put 72 cores in their processor. Excellent. Will the chip ever be optimized to its full potential? No. So again they demonstrated a product that sounds good on paper, but real-world performance might be a different story.
MrPhilo said:
For goodness sake, this isn't final hardware; anything could change. Hung2900 knows nothing; what he stated isn't true. Samsung has licensed PowerVR, it isn't exclusive to Apple, it's just that Samsung prefers using ARM's GPU solution. Another thing I dislike is how everyone compares a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone against a Tegra 4 that will. If you check the OP's link, the benchmark was posted on the 3rd of January with different results (18fps, then 33fps), so there is a chance it'll rival the iPad 4. I love Tegra because Nvidia is pushing developers to make better games for Android, unlike the 'geeks' *cough* who prefer benchmark results. What's the point of having a powerful GPU if the OEM isn't pushing developers to create enhanced-effects games for their chip?
Hamdir is correct about the GPUs. If Tegra 3 was around 50-80% faster than Tegra 2 with just 4 more cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's on 28nm (at around 80mm2, just a bit bigger than Tegra 3 and smaller than the 90mm2 A6), along with dual-channel memory versus single-channel on Tegra 2/3.
Firstly, please keep it civil; don't go around saying that people know nothing, as people's posts speak volumes on their own. Also, is calling people geeks on XDA even an insult? Next you'll be asking what I deadlift:laugh:
My OP was done in the spirit of technical curiosity, and to counter the typical unrealistic expectations of a new product on mainstream sites, e.g. "Nvidia will use Kepler tech" (which was false), "omg Kepler is like a GTX 680, Tegra 4 will own the world". People forget that we are still talking about a device that can only use a few watts and must be passively cooled, not a 200+ watt, dual-fan GPU, even though both now have to drive similar resolutions, which is mental.
I both agree and disagree with your view on Nvidia's developer relationships. THD games do look nice: I compared Infinity Blade 2 on iOS vs Dead Trigger 2 on YouTube, and Dead Trigger 2 just looked richer, with more particle and physics effects, although Infinity Blade looked sharper at the iPad 4's native resolution, being one of the few titles to use the A6X's GPU fully. The downside to this relationship is further fragmentation of the Android ecosystem; as Chainfire's app showed, most of the extra effects can run on non-Tegra devices.
Now, a 6-times increase in shaders does not automatically mean that games and benchmarks will scale linearly, as other factors such as TMU/ROP throughput can bottleneck performance. Nvidia's technical marketing manager, when interviewed at CES, said that the overall improvement in games and benchmarks will be around 3 to 4 times Tegra 3. Ultimately I hope to see Tegra 4 in a new Nexus 7, and if these benchmarks prove accurate, it wouldn't stop me buying one. Overall, including the CPU, it would be a massive upgrade over the current N7, all in the space of a year.
At 50 seconds onwards.
https://www.youtube.com/watch?v=iC7A5AmTPi0
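That scaling caveat can be sketched with a toy bottleneck model: throughput is set by the slowest pipeline stage, so multiplying only the shader (ALU) count by six helps little once texture (TMU) or raster (ROP) units limit the frame. All unit counts and per-frame costs below are invented for illustration.

```python
# Frame throughput is capped by the slowest pipeline stage; scaling only
# the ALUs past the texture/raster limit yields no further gain.
def frame_rate(alu, tmu, rop, alu_need=6.0, tmu_need=2.0, rop_need=1.0):
    """Achievable frames per unit time = min over stages of supply/demand."""
    return min(alu / alu_need, tmu / tmu_need, rop / rop_need)

base = frame_rate(alu=6, tmu=4, rop=2)      # ALU-bound baseline
six_x = frame_rate(alu=36, tmu=4, rop=2)    # only shaders scaled 6x

print(base, six_x, six_x / base)  # 6x the ALUs buys only 2x the frames here
```

In this contrived case the bottleneck simply moves from the shaders to the texture units, which is exactly why a 6x shader count can translate into a 3-4x (or smaller) real-world gain.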
iOSecure said:
Typical "isheep" reference, unnecessary.
Why does Apple have the advantage? Maybe because their semiconductor team is talented and can tie the A6X and PowerVR GPU together efficiently. NVIDIA should have focused more on the GPU in my opinion, as the CPU was already good enough. With these tablets pushing in excess of 250ppi, the graphics processor will play a huge role. They put 72 cores in their processor. Excellent. Will the chip ever be optimized to its full potential? No. So again they demonstrated a product that sounds good on paper, but real-world performance might be a different story.
Sorry Steve, this is an Android forum, or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers and marketing department; many of its users, less so.
hamdir said:
Tegra optimization porting no longer works using Chainfire; that is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no. Games that use the Tegra 3 SDK for PhysX and other CPU graphics work cannot be forced to run on other devices. Equally, Chainfire is now outdated and no longer updated.
It looks like they haven't updated Chainfire3D for a while; as a result only Tegra 3 games don't work, while others like Riptide GP and Dead Trigger do. It's not a myth, but it is outdated and only works with ICS and Tegra 2 compatible games. I may have been unlucky too, but some Gameloft games lagged on the Tegra device I had, though root solved it to an extent.
I am not saying one thing is superior to another, just relating my personal experience; I might be wrong, I may not be.
Tbh I think benchmarks don't matter much unless you see some difference in real-world usage, and I had that problem with Tegra in my experience.
But we will have to see if the final version is able to push it above the Mali-T604 and, more importantly, the SGX544.
Turbotab said:
Sorry Steve, this is an Android forum, or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers and marketing department; many of its users, less so.
No, I actually own a Nexus 4 and an iPad mini, so I'm pretty neutral in Google's/Apple's ecosystems, and I'm not wiping any scratches off my devices.

Processor 101​

New processors come out every day, and you're like, oh my god, which one do I buy?
Well, here's the answer to all your processor-related queries!
Qualcomm Snapdragon​
Qualcomm continues to do what Qualcomm does best – produce a range of high-quality chips with everything that handset manufacturers need already built in. This time last quarter, we were taking our first look at the upcoming Snapdragon 600 processors, which would be replacing the older S4 Pro, another incredibly popular Qualcomm processor.
Qualcomm doesn't use ARM's Cortex A15 design directly; instead it licenses the ARM architecture and implements its own Krait CPU cores, the newest version of which, the Krait 300, has shown up in the new Snapdragon 600 SoC...
Since then, a range of handsets powered by Qualcomm's newest chips have appeared on the market, the flagship Samsung Galaxy S4 and HTC One being the two most notable models; both are among the best-performing smartphones available. Performance-wise, the Snapdragon 600 has proven to be a decent jump up from the previous generation, performing well in most benchmark tests.
We’ve also started to hear about a few devices featuring the lower end Snapdragon 400 and 200 chips, with a range of entry level processors using various ARM architectures heading to the market in the near future. So far this year high end smartphones have received the biggest performance improvements, but these new chips should give the midrange a much needed boost later in the year.
So whilst the Snapdragon 600 is certainly the most popular high-end chip on the market right now, we've already had our first glimpses of Qualcomm's next big thing, the Snapdragon 800.
There's been lots of official and unofficial data floating around over the past few months regarding this new chip, and from what we can tell, it looks to be one powerful piece of tech. Qualcomm demoed some of the new chip's improved 3D performance earlier in the year, and more recently we've seen a few benchmarks popping up for new devices, which place the Snapdragon 800 at the top of the benchmark scores come its release.
First, there was the Pantech IM-A880 smartphone, which scored an impressive 30133 in the popular Antutu benchmark, followed by the rumoured beefed-up version of the Galaxy S4, and most recently the new Xperia Z Ultra, which pulled in the most impressive score yet, a whopping 32173. We've also seen some more official-looking benchmarks from AnandTech and Engadget which confirm Antutu scores at or above 30,000, and also give us a good look at how the chip performs in a range of other tests. The conclusion: it's a bit of a beast.
These notable benchmark scores are no doubt down to the new higher-clocked Krait 400 CPU cores and the new Adreno 330 GPU, which is supposed to offer around a 50% performance improvement over the already quick Adreno 320. The test results we've seen show that the Snapdragon 800's CPU is fine compared with the current crop of processors, but the chip really shines when it comes to GPU performance, which has proven to be even quicker than the Tegra 4 and the iPad 4's chip.
We’ve already seen that Qualcomm is taking graphics extra seriously with its latest chip, as the Snapdragon 800 became the first processor to receive OpenGL ES 3 certification and is compliant with all the big graphics APIs.
Quite a few upcoming top-of-the-line handsets are rumored to be utilizing Qualcomm's latest processor, including the Galaxy S4 LTE-A, Oppo Find 7, and an Xperia Z refresh as well, so the Snapdragon 800 is perhaps the biggest chip to look out for in the coming months.
Exynos 5 Octa​
Moving away from Qualcomm, there was certainly a lot of hype surrounding Samsung’s octo-core monster of a processor. Upon release, the chip mostly lived up to expectations — the Exynos version of the Galaxy S4 topped our performance charts and is currently the fastest handset on the market. The SoC is the first to utilize the new big.LITTLE architecture, with four new Cortex A15 cores to provide top of the line peak performance, and four older low power Cortex A7s to keep idle and low performance power consumption to a minimum.
The chip is certainly one of the best when it comes to peak performance, but it has had its share of troubles when it comes to balancing power consumption and performance. If you're in the market for the fastest smartphone currently around, then the Galaxy S4 is the one to pick right now, provided that it's available in your region. It has the fastest CPU currently on the market, and its tri-core PowerVR SGX544 GPU matches that of the latest iPad. But with the Snapdragon 800 just around the corner, there could soon be a new processor sitting on the performance throne.
Looking forward, it’s difficult to see the Exynos retaining its top spot for much longer. Other companies are starting to look beyond the power-hungry Cortex A15 architecture, but Samsung hasn’t yet unveiled any new plans.
Intel Clover Trail+ and Baytrail​
Speaking of which, perhaps the biggest mover this year has been Intel. Although the company still isn't competing with ARM in terms of the number of design wins, Intel has finally shown off some products which pose a threat to ARM's market dominance.
Although we've been hearing about Clover Trail+ since last year, the chip is now moving into full swing, with a few handsets arriving which run the chip, and some of the benchmarks we've seen are really quite impressive. Clover Trail+ has managed to find the right balance between performance and power consumption, unlike previous Atom chips, which had been far too slow to keep up with top-of-the-line ARM-based processors.
Then there's Baytrail. Back at Mobile World Congress earlier in the year, Intel laid out its plans for Clover Trail+, but we've already heard information about that processor's successor. Intel claims that its new Silvermont cores will further improve on both energy efficiency and peak performance. It sounds great on paper, but we always have to take these unveilings with a pinch of salt. What we are most likely looking at with Baytrail is a decent performance improvement, which should keep the processor ahead of the current Cortex A15 powered handsets in the benchmarks, but energy improvements are likely to come in the form of idle power consumption and low-power states, rather than savings at peak performance levels.
But Intel isn't just interested in breaking into the smartphone and tablet markets with its new line-up of processors; the company is still very much focused on producing chips for laptops. One particularly interesting prospect is the confirmed new generation of Android-based netbooks and laptops powered by more robust Intel processors, which could give Microsoft a real run for its money.
Intel has clarified that it will also be assigning the additional Pentium and Celeron titles to its upcoming Silvermont architecture, as well as using it in the new Baytrail mobile chips. What this potentially means is a further blurring of the line between tablets and laptops, where the same processor technology will be powering a range of Intel-based products. I'm expecting the performance rankings to go from Baytrail for phones and tablets, to Celeron for notebooks, and Pentium chips for small laptops, but this naming strategy hasn't been confirmed yet. It will also be interesting to see how this stacks up against Intel's newly released Haswell architecture, which is likewise aimed at providing power-efficient solutions for laptops.
Taking all that into consideration, Baytrail has the potential to be a big game changer for Intel, as it could stand out well ahead of Samsung’s top of the line Exynos chips and will certainly rival the upcoming Qualcomm Snapdragon 800 processor. But we’ll be waiting until the end of the year before we can finally see what the chip can do. In the meantime, we’ll look forward to seeing if Clover Trail+ can finally win over some market share.
Nvidia Tegra 4 and 4i​
Nvidia, on the other hand, has had a much more subdued second quarter of the year. We already had many of the unveilings for its new Tegra 4 and Tegra 4i designs by the start of the year, and so far, no products have launched which are making use of Nvidia’s latest chips.
But we have seen quite a bit about Nvidia Shield, which will be powered by the new Tegra 4 chip, and it certainly looks to be a decent piece of hardware. There have also been some benchmarks floating around suggesting that the Tegra 4 is going to significantly outpace other Cortex A15 powered chips, but, without a significant boost in clock speeds, I doubt that the chip will be much faster in most applications.
Nvidia's real strength obviously lies in its graphics technology, and the Tegra 4 certainly has that in spades. Nvidia, much like Qualcomm, has focused on making its new graphics chip compatible with all the new APIs, like OpenGL ES 3.0 and DirectX 11, which will allow the chip to make use of improved graphical features when gaming. But it's unclear whether that will be enough to win over manufacturers or consumers.
The Tegra 4i has been similarly muted, with no handsets yet confirmed to be using the chip, and we haven't really heard much about performance either. We already know that the Tegra 4i isn't aiming to compete with top-of-the-line chips, as it only uses older Cortex A9 cores in its quad-core design, but with other processors already offering LTE integration, it's tough to see smartphone manufacturers leaping at Nvidia's chip.
The Tegra 4 is set for release at the end of this quarter, with the Tegra 4i following later in the year. But such a delayed launch may see Nvidia risk missing the boat on this generation of processors as well, which may have something to do with Nvidia’s biggest announcement so far this year – its plan to license its GPU architecture.
This change in direction has the potential to turn Nvidia into the ARM of the mobile GPU market, allowing competing SoC manufacturers, like Samsung and Qualcomm, to use Nvidia's graphics technology in their own SoCs. However, this will place the company in direct competition with the Mali GPUs from ARM and the PowerVR GPUs from Imagination, so Nvidia's Kepler GPUs will have to shine through the competition. But considering the problems that the company had persuading handset manufacturers to adopt its Tegra 3 SoCs, this seems like a more flexible and potentially very lucrative backup plan rather than spending more time and money producing its own chips.
MediaTek Quad-cores​
But it's not just the big powerhouse chip manufacturers that have been introducing new tech. MediaTek, known for its cheap, lower-performance processors, has recently announced a new quad-core chip named the MT8125, which is targeted for use in tablets.
The new processor is built from four in-order ARM Cortex A7 cores clocked at 1.5GHz, meaning that it's not going to be an absolute powerhouse when it comes to processing capabilities. The SoC will also make use of a PowerVR Series5XT graphics chip, which will give it sufficient grunt for media applications as well, with support for full HD 1080p video playback and recording, plus some power when it comes to games.
MediaTek is also taking a leaf out of Qualcomm's book by designing the SoC as an all-in-one solution. It will come with built-in WiFi, Bluetooth, GPS and FM radio, and will be available in three variants: built-in HSPA+, 2G, or WiFi only. This should make the chip an ideal candidate for emerging-market devices, as well as budget products in higher-end markets.
Despite the quad-core CPU and modern graphics chip, the MT8125 is still aimed at being a power efficient solution for midrange and more budget oriented products. But thanks to improvements in mobile technologies and the falling costs of older components, this chip will still have enough juice to power through the most commonly used applications.
Early last month, MediaTek also announced that it has been working on its own big.LITTLE architecture, similar to that found in the Samsung Exynos 5 Octa. But rather than being an eight core powerhouse, MediaTek’s chip will just be making use of four cores in total.
The chip will be known as the MT8135 and will be slightly more powerful than the budget quad-core MT8125, as it will use two faster Cortex A15 cores. These power-hungry units will be backed up by two low-power Cortex A7 cores, so it's virtually the same configuration as the Exynos 5 Octa but in a 2+2 layout (2 A15s and 2 A7s) rather than 4+4 (4 A15s and 4 A7s).
But in typical MediaTek fashion, the company has opted to downclock the processor in order to make the chip more energy efficient, which is probably a good thing considering that budget devices tend to ship with smaller batteries. The processor will peak at just 1GHz, which isn't super slow, but it is nearly half the speed of the A15s found in the Galaxy S4. But performance isn't everything, and I'm more than happy to see a company pursue energy efficiency over clock speed and core count for once, especially if it brings big.LITTLE to some cheaper products.
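A quick back-of-the-envelope shows why downclocking pays off so handsomely: dynamic power scales roughly with C·V²·f, and a lower clock also permits a lower supply voltage. The voltages below are illustrative guesses, not MediaTek or Samsung specifications.

```python
# Dynamic power ~ C * V^2 * f: halving the clock and shaving the voltage
# cuts power by far more than half.
def dynamic_power(freq_ghz, volts, capacitance=1.0):
    return capacitance * volts ** 2 * freq_ghz

fast = dynamic_power(1.9, 1.1)   # Galaxy S4-class A15 clock (guessed voltage)
slow = dynamic_power(1.0, 0.9)   # MediaTek-style 1 GHz peak (guessed voltage)

print(f"relative power: {slow / fast:.2f}")  # ~0.35: roughly a third
```

So with these assumed voltages, running at about half the clock costs only about a third of the power, which is exactly the trade a small-battery budget device wants.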
Looking to the future​
ARM Cortex A57​
If you fancy a look even further ahead into the future, then we have also received a little bit of news regarding ARM’s successor to the A15, the all new Cortex A57. This new top of the line chip recently reached the “tape out” stage of development, but it’s still a way off from being released in any mobile products.
[Cortex A50 performance chart] The Cortex A50 series is set to offer a significant performance improvement. Hopefully the big.LITTLE architecture will help balance out the power consumption.
ARM has hinted that its new chip can offer up to triple the performance of the current top-of-the-line Cortex-A15 at the same level of power consumption. The new Cortex-A57 will also supposedly offer five times the battery life when running at the same speed as current chips, which sounds ridiculously impressive.
We heard a while back that AMD was working on a Cortex A57/A53 big.LITTLE processor chip as well, which should offer an even better balance of performance and energy efficiency than the current Exynos 5 Octa. But we’ll probably be waiting until sometime in 2014 before we can get our hands on these chips.​
The age of 64-bit​
Speaking of ARM's next line-up of processors, another important feature to pay attention to will be the inclusion of 64-bit processing and the new ARMv8 architecture. ARM's new Cortex-A50 processor series will take advantage of 64-bit processing to improve performance in more demanding scenarios, reduce power consumption, and exploit larger memory addresses for improved performance.
We've already seen a few mobile memory manufacturers talk about production of high-speed 4GB RAM chips, which can only be fully used with larger 64-bit memory addresses. With tablets and smartphones both in pursuit of ever-higher levels of performance, 64-bit processors seem like a logical step.
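The 4GB figure falls straight out of address width: a 32-bit pointer can only name 2^32 bytes, so RAM beyond that needs wider addresses. A couple of lines of Python make the point (the 40-bit example is just one plausible physical-address width, not a claim about any specific chip).

```python
# A 32-bit address can name 2**32 bytes; that is the 4 GB ceiling.
def addressable_gib(address_bits):
    return 2 ** address_bits // 1024 ** 3

print(addressable_gib(32))   # 4    -> why 32-bit tops out at 4 GB of RAM
print(addressable_gib(40))   # 1024 -> an example 40-bit physical address space
```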
So there you have it; I think that's pretty much all of the big processor news of the past 3 months. Is there anything in particular that has caught your eye? Are you holding out for a device with a brand new SoC, or is the current crop of processors already plenty good enough for your mobile needs?
Great thread, Again.:good:
This is better suited to the General forum. But good job anyway.
Good job, mate!
Nicely written. I enjoyed reading that.
Well done. Good read :thumbup:
