I've done so many benchmarks on the HTC Sensation and the GS2, but I still find the GS2 wins by a huge margin. On the other hand, the AnandTech benchmark of the 1.5 GHz MSM8660 showed its Adreno 220 beating all the other GPUs, such as the Mali-400. So why can't it beat the GS2? What is the main reason for this? And is the MSM8660 in the CDMA EVO 3D just the same as the 8260 except for CDMA connectivity, or does it have other differences? Will there be a better benchmark of the MSM8260 anywhere soon? Even the latest Smartbench 2011, which supports dual cores, lists the 8260 at a lower benchmark! Someone please explain. And why is the MSM8660 said to be way more powerful than the Exynos even though it shows lower benchmarks?
The Exynos is IMO an overall better processor than any of Qualcomm's. The Exynos is an ARM Cortex-A9 processor, while the Sensation's is, from what I understand, similar to the ARM Cortex-A8 architecture. The only advantages the Sensation's processor has over the GS2's are on paper: the Adreno 220 GPU is supposedly better, and the Sensation has asynchronous cores, which the GS2 doesn't. Otherwise, I don't really know what makes the Exynos better, and I also don't know what the difference is between the Sensation and EVO 3D units.
the 8660 is the CDMA version of the 8260
I don't put a lot of stock in benchmarks, and very few are accurate because they can't truly measure dual cores.
To the two noobs in post 1 and post 2
The Sensation's processor really isn't inferior to the SGS2's.
Why do I say that? Because the Sensation in your hand at this moment is probably using ONLY one core; the second core sits in an idle state and only activates when you need it.
On the SGS2, no matter what you do, both cores work together. Even when you're just looking at TouchWiz you're using two cores; even with the screen off in your pocket you're using two cores. There is no way to turn one off, so it consumes a little more battery.
With the Sensation, the second core kicks in when you need the power; they do not work together all the time. If you look at the dev thread in the development section you will see the progress.
In simpler terms, think of a turbocharger:
The SGS2 runs two turbos continuously.
The Sensation runs one turbo, and when it needs more power the second turbo kicks in.
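If you want to watch that asynchronous behaviour yourself, here is a minimal sketch for a rooted device, using the standard Linux hotplug/cpufreq sysfs paths (exactly which nodes exist depends on the kernel, so treat this as an assumption):
Code:
# 0 = second core parked, 1 = online; watch it flip under load
cat /sys/devices/system/cpu/cpu1/online
# on aSMP each core has its own clock domain, so these two can differ
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
cat /sys/devices/system/cpu/cpu1/cpufreq/scaling_cur_freq
# note: the cpu1 cpufreq directory may be absent while the core is offline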
I'm well aware that no one has really been able to push the Sensation's processor to what it's capable of. That said, I have done my research on the SGS2 and Sensation processors and still believe that the Exynos is superior to the Qualcomm. I am excited to see what the Sensation can do once we can explore its power, but I still think it will fall a little short of the Exynos because of its A8-like architecture versus the A9. My prediction is that once proper dual-core support arrives, we will easily get better performance than the stock SGS2, but when it comes to a fully modded Sensation vs. a fully modded SGS2, the SGS2 will still be faster.
To use another car analogy, it's like saying "my car isn't inferior" just because if you throw on some bolt-ons you will be able to get marginally better performance than a stock competitor. But, if both cars were to be fully modded (bolt-ons, FI'd, proper custom engine management, better rubber, etc.), the other car would pull ahead. Essentially, I think that the ultimate the SGS2 can achieve is greater than the ultimate that the Sensation can achieve (but it won't be by much once proper support comes out).
The Exynos's A9 has a shorter pipeline and is fully out-of-order, and the Sensation's Adreno, despite being faster, has to render at a higher resolution (qHD vs. WVGA).
Sure, the Qualcomm may win some synthetic benchmarks, but the A9 is still faster thanks to a better architecture. It's the same way the K8 was better than NetBurst despite the latter's higher clocks, bigger cache, etc.; NetBurst's deep pipeline was ultimately one of its bigger flaws.
The A9 will always have a 20-25% performance benefit over the A8 at the same clock speed. The Scorpion architecture is based on the A8, but it also has some A9 elements, because there's no such thing as a dual-core A8 processor. The Scorpion's performance sits somewhere between an A8 and an A9 because of this. The Exynos as currently shipped in the GS II is clocked at the same speed as the MSM8x60 in the Sensation and EVO 3D, which is why it has a performance advantage. If the MSM8x60 were clocked at 1.5 GHz, it would probably deliver the same sort of performance, if not better.
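To put rough, illustrative numbers on that trade-off (using the ~20% IPC figure from the quote below; these are not measured results):
Code:
relative performance ~ IPC x clock
Exynos (A9)        : 1.2 (IPC) x 1.2 GHz = 1.44
MSM8x60 (Scorpion) : 1.0 (IPC) x 1.2 GHz = 1.20
MSM8x60 @ 1.5 GHz  : 1.0 (IPC) x 1.5 GHz = 1.50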
AnandTech said:
From a CPU standpoint, Apple has a performance advantage at the same clock speed, but Qualcomm runs its cores at a higher clock. NVIDIA claimed that the move to an out-of-order architecture in the A9 was good for a 20% increase in IPC. Qualcomm has a 20% clock speed advantage. In most situations I think it's safe to say that the A5 [ARM Cortex-A9] and the APQ8060 [dual-core Scorpion] have equally performing CPUs.
Hope that helped.
Thank you so much to all of you guys! From what I've studied, as you all said, the MSM8660 in the Sensation and EVO 3D uses only one core, and the second starts only when needed, thanks to the 8660's asynchronous architecture. And the CPU and GPU of the SGS2 are just "ordinary" compared to the 8660, even though it has the A9 architecture. The 8660 isn't an A9 or an A8; it's different. It may resemble the A8, but it has advantages of both the A8 and the A9. Check AnandTech's review of the 8660 side by side with the review of the Exynos and you will know which really has the power. The latest Optimus 3D's chipset matches Qualcomm's with a great score in GLBenchmark Egypt. Thanks a lot to all of you.
I really feel the benchmarks are pointless as they never relate to real world use. Who here could possibly use the CPU like the benchmark can?
After the 2.3.4 update, my phone is faster. The web browsing is twice as fast for me!
I eagerly await CM7 but really don't think it can improve my phone that much more. I am stock, rooted, S-OFF and use ADW EX along with Sense 3.0 FROZEN solid. My battery life (chichitec) is more than I need with moderate use. I like to charge my phone at night while I sleep, so if it lasts me until I am done for the day... GREAT!
At this point, when I look at the SGS2, I feel my phone works just as well but looks 100 times better!
I do feel that the Sensation CPU will wind up outperforming the SGS2 once it is used like it should be...
Matt
It doesn't matter if the MSM8260 is better than the Exynos or the Adreno 220 is better than the Mali-400; the only thing that matters is real-life performance, and Samsung optimized their devices better than HTC. If only HTC took the time to better optimize their drivers and such, theirs would be a faster and better device than Samsung's. HTC already wins hands down when it comes to design and choice of materials; too bad they are too lazy to optimize for a more superior experience.
brusko1972, I agree with your point that it's Samsung's hardware and software collaboration that makes Samsung more efficient than HTC. HTC needs to work on matching hardware and software to improve performance, especially power efficiency and distribution. HTC's Sense UI is quite heavy and consumes more power and hardware resources, whereas TouchWiz is a very light UI. That may also be a reason for the low benchmark results.
Who else has been excited by the Tegra 4 rumours? Last night's Nvidia CES announcement was good, but what we really want are cold, hard BENCHMARKS.
I found an interesting mention of a Tegra T114 SoC, which I'd never heard of, on a Linux kernel mailing list. I got really interested when it stated that the SoC is based on the ARM Cortex-A15 MP; it must be Tegra 4. I checked the background of the person who posted the kernel patch: he is a senior Nvidia kernel engineer based in Finland.
https://lkml.org/lkml/2012/12/20/99
"This patchset adds initial support for the NVIDIA's new Tegra 114
SoC (T114) based on the ARM Cortex-A15 MP. It has the minimal support
to allow the kernel to boot up into shell console. This can be used as
a basis for adding other device drivers for this SoC. Currently there
are 2 evaluation boards available, "Dalmore" and "Pluto"."
On the off chance, I decided to search www.glbenchmark.com for the two board names: Dalmore (a tasty whisky!) and Pluto (planet, Greek god and cartoon dog!). Pluto returned nothing, but Dalmore returned a device called 'Dalmore Dalmore', posted on 3rd January 2013. The original poster had already deleted the results, but thanks to Google Cache I found them.
RESULTS
GL_VENDOR NVIDIA Corporation
GL_VERSION OpenGL ES 2.0 17.01235
GL_RENDERER NVIDIA Tegra
From the system spec, it runs Android 4.2.1, with a min frequency of 51 MHz and a max of 1836 MHz.
Nvidia DALMORE
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p) : 32.6 fps
iPad 4
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p): 49.6 fps
CONCLUSION
AnandTech has posted that Tegra 4 doesn't use unified shaders, so it's not based on Kepler. I reckon that if Nvidia had a brand-new GPU they would have shouted about it at CES. The results I've found indicate that Tegra 4 is between one and three times faster than Tegra 3.
BUT, this is not 100% guaranteed to be a Tegra 4 system, though the evidence is strong that it is a T4 development board. If this is correct, we have to figure that it is running beta drivers; the Nexus 10 is ~10% faster than the Arndale dev board with the same Exynos 5250 SoC. Even if Tegra 4 gets better drivers, it seems the SGX 554MP4 in the A6X is still the faster GPU, with Tegra 4 and Mali-T604 almost equal in second. Nvidia has said that T4 is faster than the A6X, but the devil is in the detail: in CPU benchmarks I can see that being true, but not in graphics.
UPDATE - Just to add to the feeling that this is legit, the GLBenchmark System section lists the "android.os.Build.USER" as buildbrain. According to an Nvidia job posting, "Buildbrain is a mission-critical, multi-tier distributed computing system that performs mobile builds and automated tests each day, enabling NVIDIA's high performance development teams across the globe to develop and deliver NVIDIA's mobile product line".
http://jobsearch.naukri.com/job-lis...INEER-Nvidia-Corporation--2-to-4-130812500024
I posted the webcache links to the GLBenchmark pages below; if they disappear from cache, I've saved a copy of the webpages, which I can upload. Enjoy.
GL BENCHMARK - High Level
http://webcache.googleusercontent.c...p?D=Dalmore+Dalmore+&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - Low Level
http://webcache.googleusercontent.c...e&testgroup=lowlevel&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - GL CONFIG
http://webcache.googleusercontent.c...Dalmore&testgroup=gl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - EGL CONFIG
http://webcache.googleusercontent.c...almore&testgroup=egl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - SYSTEM
http://webcache.googleusercontent.c...ore&testgroup=system&cd=1&hl=en&ct=clnk&gl=uk
OFFSCREEN RESULTS
http://webcache.googleusercontent.c...enchmark.com+dalmore&cd=4&hl=en&ct=clnk&gl=uk
http://www.anandtech.com/show/6550/...00-5th-core-is-a15-28nm-hpm-ue-category-3-lte
Is there any GPU that could outperform the iPad 4's before the iPad 5 comes out? The Adreno 320 and Mali-T604, and now Tegra 4, aren't near it. Qualcomm won't release anything till Q4, I guess, and Tegra 4 has already been announced; the only thing left is the Mali-T658 coming with the Exynos 5450 (doubtful when it will release, and I'm not sure it will be better).
Looks like Apple will hold the crown in the future too.
i9100g user said:
Is there any GPU that could outperform the iPad 4's before the iPad 5 comes out? The Adreno 320 and Mali-T604, and now Tegra 4, aren't near it. Qualcomm won't release anything till Q4, I guess, and Tegra 4 has already been announced; the only thing left is the Mali-T658 coming with the Exynos 5450 (doubtful when it will release, and I'm not sure it will be better).
Looks like Apple will hold the crown in the future too.
There was a great article on AnandTech testing the power consumption of the Nexus 10's Exynos 5250 SoC. It showed that the CPU and GPU each had a TDP of 4 W, making a theoretical SoC TDP of 8 W. However, while the GPU was being stressed by a game, they ran a CPU benchmark in the background: the SoC quickly went up to 8 W, but the CPU was soon throttled from 1.7 GHz to just 800 MHz as the system tried to keep everything at 4 W or below. This explains why the Nexus 10 didn't benchmark as well as we had hoped.
Back to the 5450, which should beat the A6X. The trouble is, it has double the CPU and GPU cores of the 5250 and is clocked higher. Even on a more advanced 28nm process, which will lower power consumption, I feel the system will often be throttled because of power and perhaps heat concerns. It looks amazing on paper but may disappoint in reality, and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Basically, money. For a start, iSheep will pay more for their devices, so Apple can afford to design a big SoC and big batteries that might not be profitable for other companies. Tegra 4 is listed as an 80 mm² chip; the iPhone 5's A6 is 96 mm² and the A6X is 123 mm². Apple can pack in more transistors and reap a GPU performance lead. Their chosen graphics supplier, Imagination Technologies, also has excellent products; PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip design team, and the benefit has been that their Swift core is almost as powerful as an ARM A15 but seems less power-hungry; anyway, Apple seems happy running slower CPUs than Android devices. Until Android or WP8 or somebody else can achieve Apple's margins, Apple will be able to 'buy' their way to GPU domination. As an Android fan it makes me sad :crying:
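As a rough way to watch that throttling happen yourself, here is a sketch assuming a rooted device and the standard cpufreq sysfs path (node names vary by kernel):
Code:
# poll the CPU clock once a second while a GPU-heavy game
# plus a CPU benchmark run in the foreground
while true; do
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
  sleep 1
done
# on the Nexus 10 you would expect the value to fall from ~1700000
# toward ~800000 as the power budget is enforced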
32 fps is a no-go... let's hope it's not final.
hamdir said:
32 fps is a no-go... let's hope it's not final.
It needs to improve, but it would be OK for a new Nexus 7.
Still fast enough for me; I don't game a lot on my Nexus 7.
I know I'm talking about phones here... but the iPhone 5's GPU and the Adreno 320 are very closely matched.
Sent from my Nexus 4 using Tapatalk 2
italia0101 said:
I know I'm talking about phones here... but the iPhone 5's GPU and the Adreno 320 are very closely matched.
Sent from my Nexus 4 using Tapatalk 2
From what I remember, the iPhone 5 and the new iPad wiped the floor with the Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
adityak28 said:
From what I remember, the iPhone 5 and the new iPad wiped the floor with the Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
That isn't true. Check GLBenchmark: in the offscreen test the iPhone scored 91 and the Nexus 4 scored 88... that isn't wiping my floors.
Sent from my Nexus 10 using Tapatalk HD
It's interesting that even though Nvidia chips aren't the best, we still get the best game graphics, because of superior optimisation through Tegra Zone. Not even the A6X is as fully optimised.
Sent from my SAMSUNG-SGH-I727 using xda premium
ian1 said:
It's interesting that even though Nvidia chips aren't the best, we still get the best game graphics, because of superior optimisation through Tegra Zone. Not even the A6X is as fully optimised.
Sent from my SAMSUNG-SGH-I727 using xda premium
What sort of 'optimisation' do you mean? Unoptimised games lag, and that's a big letdown. Tegra effects can also be used on other phones with Chainfire 3D: I use it, and Tegra games run without lag, with effects, even though I don't have a Tegra device.
With a Tegra device I would mostly be restricted to optimised games.
The graphics performance of Nvidia SoCs has always been disappointing, sadly for the world's dominant GPU provider.
With the first Tegra 2, the GPU was a little better in benchmarks than the SGX540 of the Galaxy S, but it lacked NEON support.
With the second one, Tegra 3, the GPU is nearly the same as the old Mali-400MP4 in the Galaxy S2 / original Note.
And now it's better, but still nothing special, and it will soon be outperformed (by the Adreno 330 and next-gen Mali).
The strongest PowerVR GPUs are always the best, but sadly they are exclusive to Apple (the SGX543, and maybe the SGX554 too; only Sony, which has a cross-licensing deal with Apple, has it, in the PS Vita and the PS Vita only).
Tegra optimisation porting no longer works using Chainfire; that is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no: games that use the T3 SDK for PhysX and other CPU graphics work cannot be forced to run on other devices. Equally, Chainfire is now outdated and no longer updated.
As for PowerVR, it is only better in a true multi-core configuration, which is used only by Apple and Sony's Vita, and it eats a large die area, i.e. actual multiple cores, each with its own sub-cores/shaders. If Tegra were used in a real multi-core configuration it would destroy all.
Finally, it's really funny, all this doom and gloom over one benchmark of an early, discarded development board. I don't mean to take away from Turbo's thunder and his find, but truly, the amount of negativity it is collecting before any final-device benchmarks is ridiculous.
The Adreno 220 doubled in performance after the ICS update on the Sensation.
T3 doubled the speed of the T2 GPU with only 50% more shaders, so how on earth do you believe T4 will score only 2x the T3 with 6x the shaders?!
Do you have any idea how miserably the PS3 performed in its early days? Even new desktop GeForces perform far below expectations until the drivers are updated.
Enough with the FUD! This board seems full of it nowadays, with so little reasoning...
For goodness' sake, this isn't final hardware; anything could change. Hung2900 knows nothing; what he stated isn't true. Samsung has licensed PowerVR; it isn't exclusive to Apple, it's just that Samsung prefers ARM's GPU solution. Another thing I dislike is how everyone compares a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone with a Tegra 4 that will. If you check the OP's link, the benchmark was posted on the 3rd of January with different results (18 fps, then 33 fps), so there is a chance it'll rival the iPad 4. I love Tegra, as Nvidia is pushing developers to make more and better games for Android, unlike the 'geeks' *cough* who prefer benchmark results. What's the point of a powerful GPU if the OEM isn't pushing developers to create enhanced-effects games for its chip?
Hamdir is correct about the GPUs: if Tegra 3 was around 50-80% faster than Tegra 2 with just four more cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's 28nm (at around 80 mm², just a bit bigger than Tegra 3 and smaller than the A6's 90 mm²), along with dual-channel memory versus single-channel on Tegra 2/3.
Turbotab said:
There was a great article on AnandTech testing the power consumption of the Nexus 10's Exynos 5250 SoC. It showed that the CPU and GPU each had a TDP of 4 W, making a theoretical SoC TDP of 8 W. However, while the GPU was being stressed by a game, they ran a CPU benchmark in the background: the SoC quickly went up to 8 W, but the CPU was soon throttled from 1.7 GHz to just 800 MHz as the system tried to keep everything at 4 W or below. This explains why the Nexus 10 didn't benchmark as well as we had hoped.
Back to the 5450, which should beat the A6X. The trouble is, it has double the CPU and GPU cores of the 5250 and is clocked higher. Even on a more advanced 28nm process, which will lower power consumption, I feel the system will often be throttled because of power and perhaps heat concerns. It looks amazing on paper but may disappoint in reality, and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Basically, money. For a start, iSheep will pay more for their devices, so Apple can afford to design a big SoC and big batteries that might not be profitable for other companies. Tegra 4 is listed as an 80 mm² chip; the iPhone 5's A6 is 96 mm² and the A6X is 123 mm². Apple can pack in more transistors and reap a GPU performance lead. Their chosen graphics supplier, Imagination Technologies, also has excellent products; PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip design team, and the benefit has been that their Swift core is almost as powerful as an ARM A15 but seems less power-hungry; anyway, Apple seems happy running slower CPUs than Android devices. Until Android or WP8 or somebody else can achieve Apple's margins, Apple will be able to 'buy' their way to GPU domination. As an Android fan it makes me sad :crying:
Well said, mate!
I understand how you feel; nowadays Android players like Samsung and Nvidia are focusing more on the CPU than the GPU.
If they don't stop soon and keep using this strategy, they will fail.
The GPU will become the bottleneck and you won't be able to use the CPU to its full potential (at least when gaming).
I have a Galaxy S2 with the 1.2 GHz Exynos 4 and the Mali-400 GPU overclocked to 400 MHz.
In my analysis, most modern games like MC4 and NFS:MW aren't running at 60 fps at all. That's because the GPU always has a 100% workload while the CPU is relaxing, outputting 50-70% of its total workload.
I know some games aren't optimised for all Android devices, as opposed to Apple devices, but even high-end Android devices have slower GPUs (than the iPad 4, at least).
AFAIK, the Galaxy S IV is likely to pack a T-604 with some tweaks instead of the mighty T-658, which is still slower than the iPaddle 4.
Turbotab said:
There was a great article on AnandTech testing the power consumption of the Nexus 10's Exynos 5250 SoC. It showed that the CPU and GPU each had a TDP of 4 W, making a theoretical SoC TDP of 8 W. However, while the GPU was being stressed by a game, they ran a CPU benchmark in the background: the SoC quickly went up to 8 W, but the CPU was soon throttled from 1.7 GHz to just 800 MHz as the system tried to keep everything at 4 W or below. This explains why the Nexus 10 didn't benchmark as well as we had hoped.
Back to the 5450, which should beat the A6X. The trouble is, it has double the CPU and GPU cores of the 5250 and is clocked higher. Even on a more advanced 28nm process, which will lower power consumption, I feel the system will often be throttled because of power and perhaps heat concerns. It looks amazing on paper but may disappoint in reality, and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Basically, money. For a start, iSheep will pay more for their devices, so Apple can afford to design a big SoC and big batteries that might not be profitable for other companies. Tegra 4 is listed as an 80 mm² chip; the iPhone 5's A6 is 96 mm² and the A6X is 123 mm². Apple can pack in more transistors and reap a GPU performance lead. Their chosen graphics supplier, Imagination Technologies, also has excellent products; PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip design team, and the benefit has been that their Swift core is almost as powerful as an ARM A15 but seems less power-hungry; anyway, Apple seems happy running slower CPUs than Android devices. Until Android or WP8 or somebody else can achieve Apple's margins, Apple will be able to 'buy' their way to GPU domination. As an Android fan it makes me sad :crying:
Typical "isheep" reference, unnecessary.
Why does apple have the advantage? Maybe because there semiconductor team is talented and can tie the A6X+PowerVR GPU efficiently. NIVIDA should have focused more on GPU in my opinion as the CPU was already good enough. With these tablets pushing excess of 250+ppi the graphics processor will play a huge role. They put 72 cores in there processor. Excellent. Will the chip ever be optimized to full potential? No. So again they demonstrated a product that sounds good on paper but real world performance might be a different story.
MrPhilo said:
For goodness' sake, this isn't final hardware; anything could change. Hung2900 knows nothing; what he stated isn't true. Samsung has licensed PowerVR; it isn't exclusive to Apple, it's just that Samsung prefers ARM's GPU solution. Another thing I dislike is how everyone compares a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone with a Tegra 4 that will. If you check the OP's link, the benchmark was posted on the 3rd of January with different results (18 fps, then 33 fps), so there is a chance it'll rival the iPad 4. I love Tegra, as Nvidia is pushing developers to make more and better games for Android, unlike the 'geeks' *cough* who prefer benchmark results. What's the point of a powerful GPU if the OEM isn't pushing developers to create enhanced-effects games for its chip?
Hamdir is correct about the GPUs: if Tegra 3 was around 50-80% faster than Tegra 2 with just four more cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's 28nm (at around 80 mm², just a bit bigger than Tegra 3 and smaller than the A6's 90 mm²), along with dual-channel memory versus single-channel on Tegra 2/3.
Firstly, please keep it civil; don't go around saying that people know nothing, people's posts always speak volumes. Also, calling people geeks, on XDA, is that even an insult? Next you'll be asking what I deadlift :laugh:
My OP was made in the spirit of technical curiosity, and to counter the typical unrealistic expectations for a new product on mainstream sites, e.g. 'Nvidia will use Kepler tech' (which was false), 'OMG, Kepler is like a GTX 680, Tegra 4 will own the world'. People forget that we are still talking about a device that can draw only a few watts and must be passively cooled, not a 200+ W, dual-fan GPU, even though both now have to drive similar resolutions, which is mental.
I both agree and disagree with your view on Nvidia's developer relationships. THD games do look nice: I compared Infinity Blade 2 on iOS vs. Dead Trigger 2 on YouTube, and Dead Trigger 2 just looked richer, with more particle and physics effects, although Infinity Blade looked sharper at the iPad 4's native resolution, one of the few titles to use the A6X's GPU fully. The downside of this relationship is further fragmentation of the Android ecosystem; as Chainfire's app showed, most of the extra effects can run on non-Tegra devices.
Now, a six-fold increase in shaders does not automatically mean that games and benchmarks will scale linearly, as other factors such as TMU/ROP throughput can bottleneck performance. Nvidia's technical marketing manager, interviewed at CES, said the overall improvement in games and benchmarks will be around 3 to 4 times T3. Ultimately I hope to see Tegra 4 in a new Nexus 7, and if these benchmarks prove accurate it wouldn't stop me buying one. Overall, including the CPU, it would be a massive upgrade over the current N7, all in the space of a year.
At 50 seconds onwards.
https://www.youtube.com/watch?v=iC7A5AmTPi0
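To make the non-linear-scaling point concrete, here is a back-of-envelope illustration (the ~2x fill-rate/bandwidth figure is purely an assumption for the example, not a published spec):
Code:
shader (ALU) throughput : ~6x Tegra 3   (72 vs 12 cores)
TMU/ROP + bandwidth     : ~2x           (assumed)
shader-bound scene    -> scales toward ~6x
fill-rate-bound scene -> scales toward ~2x
typical mixed scene   -> lands near the 3-4x Nvidia quotes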
iOSecure said:
Typical "isheep" reference, unnecessary.
Why does apple have the advantage? Maybe because there semiconductor team is talented and can tie the A6X+PowerVR GPU efficiently. NIVIDA should have focused more on GPU in my opinion as the CPU was already good enough. With these tablets pushing excess of 250+ppi the graphics processor will play a huge role. They put 72 cores in there processor. Excellent. Will the chip ever be optimized to full potential? No. So again they demonstrated a product that sounds good on paper but real world performance might be a different story.
Sorry Steve, this is an Android forum, or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers and marketing department; many of its users, less so.
hamdir said:
Tegra optimisation porting no longer works using Chainfire; that is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no: games that use the T3 SDK for PhysX and other CPU graphics work cannot be forced to run on other devices. Equally, Chainfire is now outdated and no longer updated.
Looks like they haven't updated Chainfire 3D for a while; as a result only T3 games don't work, while others (Riptide GP, Dead Trigger, etc.) do. It's not a myth, but it is outdated and only works with ICS and Tegra 2-compatible games. I may just have been unfortunate, but some Gameloft games lagged on the Tegra device I had, though root solved it to an extent.
I'm not saying one thing is superior to another; this is just my personal experience. I might be wrong, I may not be.
TBH, I think benchmarks don't matter much unless you see a difference in real-world usage, and in my experience that was my problem with Tegra.
But we will have to see whether the final version can push it above the Mali-T604 and, more importantly, the SGX544.
Turbotab said:
Sorry Steve, this is an Android forum, or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers and marketing department; many of its users, less so.
No, I actually own a Nexus 4 and an iPad mini, so I'm pretty neutral between Google's and Apple's ecosystems, and I'm not wiping scratches off any of my devices.
As there is much newer technology, some of it is enough for us, or even overpowered. But the main question is: do we really need more than four cores?
Yes, some of it is about marketing, but I promise not too much of it. From what I know, people around me think more cores mean a better, faster phone...
So let's discuss why quad cores are better (if you stand by octa, that's fine; this is just a little technical discussion), and why the market prefers more.
Let's use a simple way to explain why an octa-core phone is not better than a quad-core one.
"It uses the right, lower-power processor to do the simple jobs"
No, go away, please. If 2/4 cores can handle everything flawlessly, what do we need those spare cores for?
Energy consumption
Yes, as above: unused cores waste more energy, and your battery will drain faster if the system/CPU governor is not fully optimised.
Heat problems
This happens not only on the Snapdragon 810, but also on the Exynos 5420, etc.
Well, it's nothing but the feeling of keeping a furnace in your pocket.
Application design
Actually, not many apps can use all 8 cores, and because of big.LITTLE, those 8 cores can't all be running at the same time.
But it still has advantages:
Maybe benchmarking.
Yeah, you'll probably get a higher mark in core benchmarks, but so what? The experience is your own; benchmarks mean nothing.
Using Android as a workstation
Yeah, that may help if you are using an Adobe clip-editing tool on an Android phone...
And the marketing side makes it more complicated.
Most users who have no (or very few) tech skills will simply prefer an octa-core phone, because they think the performance of an 8-core chipset is doubled.
It may sound cool, but there are too many drawbacks.
So, conclusion:
Nowadays phones have plenty of spare power, and "fast" depends on a lot of other parts of the chipset or phone, like the GPU, RAM, eMMC, etc.
NO, we don't need more cores, we need a BETTER core.
Think of Apple's iPhone 6: although I hate it, it has only two cores and they perform pretty well.
And lastly, optimisation comes first, because even if you have 1000 cores, failed optimisation makes them useless.
Reserved
The problem is that developing applications to use the extra cores is difficult. Multithreading increases an application's complexity a great deal, introduces hard-to-reproduce bugs and, worst of all, trying to use more cores may actually make the app slower.
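A tiny shell sketch of the kind of hard-to-reproduce bug meant here: two unsynchronised writers racing on one counter (the file path and iteration counts are arbitrary, chosen for illustration):
Code:
# two background jobs each do 500 non-atomic read-modify-write updates
echo 0 > /data/local/tmp/counter
for w in 1 2; do
  ( for i in $(seq 1 500); do
      n=$(cat /data/local/tmp/counter)
      echo $((n + 1)) > /data/local/tmp/counter
    done ) &
done
wait
cat /data/local/tmp/counter   # usually well under 1000: updates were lost
The final count varies from run to run, which is exactly why such bugs are so hard to reproduce.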
While multithreaded applications might be able to get a boost with extra cores, I think the real benefit is better handling of multiple tasks (such as playing music and running navigation, with Bluetooth audio). I'm not sure that having more than four offers all that much benefit, though. I've certainly found quad-core phones to be more responsive compared to the dual-core models I've used.
Bobby Tables said:
While multithreaded applications might be able to get a boost with extra cores, I think the real benefit is better handling of multiple tasks (such as playing music and running navigation, with Bluetooth audio). I'm not sure that having more than four offers all that much benefit, though. I've certainly found quad-core phones to be more responsive compared to the dual-core models I've used.
With big.LITTLE you can split work between two differently designed CPU clusters, one high-performance and one energy-saving (in practice it's just 4+4).
As for your example, I have found that Bluetooth audio is considered high-intensity work (because it communicates with more components on the phone), and the same goes for navigation. It is hard to program these applications to use the lower-power cores.
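You can see the two clusters for yourself with a quick sketch over the standard per-CPU cpufreq sysfs (paths should exist on any big.LITTLE device, though values vary by kernel):
Code:
# on big.LITTLE, cores report two different max frequencies,
# one per cluster; the scheduler decides which cluster a task runs on
for c in /sys/devices/system/cpu/cpu[0-9]*; do
  [ -d "$c/cpufreq" ] && echo "$c max=$(cat $c/cpufreq/cpuinfo_max_freq)"
done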
Hello 1+1 owners,
I just wanted to ask whether GPU overclocking on your OnePlus devices improves graphics performance. I've seen some kernels that support GPU OC, so I'm asking those who have already tried it.
I ask because I've added GPU OC to my custom kernel for the Lenovo Vibe Z2 Pro (which has the same Snapdragon 801), and although the GPU itself reaches the 657 MHz frequency step, I haven't noticed any improvement whatsoever, either in GPU benchmarks (3DMark, GFXBench) or in Android games.
Electry said:
Hello 1+1 owners,
I just wanted to ask whether GPU overclocking on your OnePlus devices improves graphics performance. I've seen some kernels that support GPU OC, so I'm asking those who have already tried it.
I ask because I've added GPU OC to my custom kernel for the Lenovo Vibe Z2 Pro (which has the same Snapdragon 801), and although the GPU itself reaches the 657 MHz frequency step, I haven't noticed any improvement whatsoever, either in GPU benchmarks (3DMark, GFXBench) or in Android games.
That's because GPU overclocking isn't possible on this device, or on any non-A-family device (our chipset is B-family, and almost all Qualcomm devices released in the last two years are B-family). The GPU clock table is stored in TrustZone, so we can't touch it.
Sent from my A0001 using XDA Free mobile app
I wonder why they did this. Are they pushing people to get and pay for devices with their newer SoCs? That's bad news, really.
I remember modifying the GPU frequency table on my Nexus 7 (T3), which helped squeeze some extra power from the device.
Anyway, thanks @Sultanxda for the explanation.
Electry said:
I wonder why they did this. The first thing that came to my mind was that they are pushing people to get and pay for devices with their newer SoCs. That's bad news, really.
I remember modifying the GPU frequency table on my Nexus 7 (T3), which helped squeeze some extra power from the device.
Anyway, thanks @Sultanxda for the explanation.
I have a Nexus 7 2012, and all I can say is that Tegra is nowhere near the same level as Snapdragon. The T3 is super laggy, whereas even Snapdragon chips from three years ago still run smoothly today. Our GPU, and our hardware in general, should be the least of your worries: the Snapdragon 801 is super overpowered, and the GPU on this thing won't be a cause of bottlenecks in gaming for probably another two years. 578 MHz is more than plenty right now.
Sultanxda said:
I have a Nexus 7 2012, and all I can say is that Tegra is nowhere near the same level as Snapdragon. The T3 is super laggy, whereas even Snapdragon chips from three years ago still run smoothly today. Our GPU, and our hardware in general, should be the least of your worries: the Snapdragon 801 is super overpowered, and the GPU on this thing won't be a cause of bottlenecks in gaming for probably another two years. 578 MHz is more than plenty right now.
True about T3; probably Nvidia's biggest disappointment.
Although the Adreno 330 is already struggling with some more demanding games at 1440p (my phone's resolution): GPU usage constantly hits 100%, where at 1080p it only hit 60-70% (measured with GameBench). I'm afraid I'll have to switch permanently to 1080p soon, just to maintain playable framerates (25+).
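For anyone who wants to check GPU load without an app, many Adreno kernels expose it in sysfs; a sketch, with the caveat that these kgsl node names are an assumption and vary between kernels:
Code:
# "busy total" tick counts; busy/total approximates GPU load
cat /sys/class/kgsl/kgsl-3d0/gpubusy
# current GPU clock in Hz
cat /sys/class/kgsl/kgsl-3d0/gpuclk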
Electry said:
True about T3; probably Nvidia's biggest disappointment.
Although the Adreno 330 is already struggling with some more demanding games at 1440p (my phone's resolution): GPU usage constantly hits 100%, where at 1080p it only hit 60-70% (measured with GameBench). I'm afraid I'll have to switch permanently to 1080p soon, just to maintain playable framerates (25+).
You can try changing the GPU governor to performance. Open a terminal emulator app and run these two commands:
Code:
su
echo performance > /sys/class/devfreq/f*/governor
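If you do try it, it's worth reading the values back to confirm the write took, and seeing what clock the GPU actually runs at. A sketch that should work on most Adreno kernels (the kgsl path is an assumption):
Code:
su
cat /sys/class/devfreq/f*/governor
cat /sys/class/kgsl/kgsl-3d0/gpuclk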