Quadrant Request - G Tablet General

Can someone post a detailed output of their Quadrant score? I want to compare it to an Archos tablet... The Archos is coming really close at this point with some new development going on, and I want to see where the Tegra is pulling ahead and why the Archos scores have improved so much.
Thanks

stanglx said:
Can someone post a detailed output of their Quadrant score? I want to compare it to an Archos tablet... The Archos is coming really close at this point with some new development going on, and I want to see where the Tegra is pulling ahead and why the Archos scores have improved so much.
Thanks
Vegan 5.1, clemsyn kernel 10.9.7 (CIFS,NTFS,TUN)
on battery, score 2243
Please note that Quadrant is a single-thread benchmark that doesn't take advantage of the dual-core Tegra processor. It may be OK to use its score to compare performance for the current/last generation of software, but it offers no real insight into the next generation of applications and games on this platform {my opinion only, your mileage may vary}. Also, the Quadrant score does not seem to be related to overall impressions of UI look and feel (smoothness, responsiveness).

Can you provide the breakdown... I/O, Video, Etc.
bittrix said:
Vegan 5.1, clemsyn kernel 10.9.7 (CIFS,NTFS,TUN)
on battery, score 2243
Please note that Quadrant is a single-thread benchmark that doesn't take advantage of the dual-core Tegra processor. It may be OK to use its score to compare performance for the current/last generation of software, but it offers no real insight into the next generation of applications and games on this platform {my opinion only, your mileage may vary}. Also, the Quadrant score does not seem to be related to overall impressions of UI look and feel (smoothness, responsiveness).

I'm running Vegan B5.1
With the default Kernel, I was getting Quadrant scores in the range of 2430-2468
With Clemsyn's beta Kernel, I was getting slightly lower scores -- between 2345-2376 -- but you really can't see any difference in the real world usage...and it has added features...
Sorry, can't post more details about the Quadrant breakdown as I only have the Standard version...

stanglx said:
Can you provide the breakdown... I/O, Video, Etc.
No. I only have the standard version.

VEGAn 5.1b w/Clemsyn's kernel.
Total: 2273
CPU: 6997
Memory: 2586
I/O: 812
2D: 210
3D: 759

see my sig.

I am interested in the Archos scores; what do you mean their scores are improving? And just how close are they to the G Tablet?

tcrews said:
VEGAn 5.1b w/Clemsyn's kernel.
Total: 2273
CPU: 6997
Memory: 2586
I/O: 812
2D: 210
3D: 759
Our 2D score sucks some serious wang, huh. Still a lot of work that needs to be done on these drivers. NVidia, are you listening?
This CPU is fricking blazing though.

Yes... they have been able to enable the ext4 file system plus overclock the processor.
It's yielding some impressive Quadrant scores... I'm inching towards the 70, as the screen is pretty good and I like the smaller form factor to boot... Unfortunately I haven't found anyone who purchased Quadrant to give me the details... just the main number so far, which doesn't say much.
http://forum.archosfans.com/viewtopic.php?f=76&t=45422
http://forum.xda-developers.com/showpost.php?p=10329408&postcount=26
muerteman said:
I am interested in the Archos scores; what do you mean their scores are improving? And just how close are they to the G Tablet?

stanglx said:
Yes... they have been able to enable the ext4 file system plus overclock the processor.
It's yielding some impressive Quadrant scores... I'm inching towards the 70, as the screen is pretty good and I like the smaller form factor to boot... Unfortunately I haven't found anyone who purchased Quadrant to give me the details... just the main number so far, which doesn't say much.
http://forum.archosfans.com/viewtopic.php?f=76&t=45422
http://forum.xda-developers.com/showpost.php?p=10329408&postcount=26
Heh, I've enabled the ext4 file system on the G Tablet kernel. It will increase our I/O performance too, as soon as I figure out why I get a reboot loop when I change the boot ramdisk to mount /data as ext4.
I've also been looking into changing the max clockspeed. Should be straightforward to change it, just a question of whether it breaks anything else.
Being married now, I don't quite have the free time I used to to hack on my gadgets.
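For anyone wanting to check the ext4 change themselves, here is a minimal host-side sketch (Python driving adb) of one way to confirm what /data is mounted as and get a very rough sequential-write number. Partition layout, dd availability and adb permissions vary by ROM, so treat every command here as illustrative rather than a recipe.

```python
# Hedged sketch: check how /data is mounted and time a crude 64 MB write via adb.
import subprocess, time

def adb_shell(cmd: str) -> str:
    """Run a command in the device shell via adb and return its stdout."""
    return subprocess.run(["adb", "shell", cmd],
                          capture_output=True, text=True).stdout

# Filter the mount table on the host side so we don't depend on grep/busybox.
for line in adb_shell("mount").splitlines():
    if "/data" in line:
        print(line)  # expect ext3 on stock, ext4 after the kernel/ramdisk change

# Very rough sequential-write test; timing includes adb overhead.
start = time.time()
adb_shell("dd if=/dev/zero of=/data/iotest.tmp bs=1048576 count=64")
adb_shell("rm /data/iotest.tmp")
print(f"~{64 / (time.time() - start):.1f} MB/s (rough)")
```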

Overall: 2435
CPU: 7601
Memory: 2617
I/O: 998
2d: 217
3d: 740
Sent from my VEGAn-TAB-v1.0.0B5.1 using Tapatalk

One of the guys from the Archos forum sent me their stats with a 1.1 GHz OC and the ext4 file system. As you can see, with ext4 the I/O goes through the roof, which is why the Quadrant score is so high on the Archos:
Total: 2090
CPU: 4652
MEM: 1218
I/O: 4015
2D: 218
3D: 347
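For anyone comparing these breakdowns: Quadrant doesn't document its scoring, but every total posted in this thread with a breakdown matches the plain average of the five subscores, which is exactly why one huge I/O number (ext4 plus a fast microSD) drags the Archos total up so far despite its weaker CPU and 3D results. A quick check in Python, using the numbers from the posts above:

```python
# Quadrant total appears to be the arithmetic mean of the five subscores.
def quadrant_total(cpu, mem, io, d2, d3):
    return round((cpu + mem + io + d2 + d3) / 5)

print(quadrant_total(6997, 2586, 812, 210, 759))  # 2273 -> G Tablet, VEGAn 5.1b
print(quadrant_total(4652, 1218, 4015, 218, 347))  # 2090 -> Archos, 1.1 GHz + ext4
```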

I'm more interested in why our 2D scores are so low. More than twice the 3D performance but slightly less 2D?

Overall 2507
CPU 7901
mem 2664
io 1030
2d 206
3d 733
Vegan b5.1

whosloosin92 said:
I'm more interested in why our 2D scores are so low. More than twice the 3D performance but slightly less 2D?
I'm guessing the 2D benchmark isn't stressing either device and something else is limiting the score (EDIT: also higher res if you are comparing to phones). IMO Quadrant scores are easily manipulated by file system tweaks and in no way represent real world performance.
More important in my opinion as a gamer: Dungeon Defenders runs perfectly on the gtab while it's almost impossible to get the game to even run on the archos due to it only having 256MB of RAM.

Don't fret about the 2D, guys... BUT I/O can definitely improve if you move to the ext4 file system... If you venture over to the Archos forum they have done a great deal, not only enabling it in the kernel but also some simple Unix hacks to get the new partitions recognized. Keep in mind, though, that they are using external class 6 and class 10 microSD cards, which seem to be faster than the internal storage.
Honestly I have not pulled the trigger on either device yet... I'm looking at whatever I get as a short-term solution (maybe till the end of the year or the first quarter of next year)... I love the fact that the G Tab is so quick, and I also love the support in this forum... I'm also starting to become comfortable with the Git structures, compiling the source, etc. I'm just very, very afraid of the screen... I keep seeing the question "is it that bad?" and the replies range from "not that bad" all the way to "I'm getting rid of it, I can't take it anymore."
Now, with the Archos unit no one really complains about the screen, and I also like the 70 model (smaller), which everyone says has a fine screen... BUT it does have the low-memory issue (not really an issue now, but it could be in the near future if 512 MB becomes the staple)... With the recent custom ROMs they have been able to get their Quadrant scores up over 2k, BUT this is due to the I/O... It's also >$100 less... Still pondering... Maybe by the time I make up my mind something from the Honeycomb bunch of devices will excite me... but honestly, I don't like paying to be an early adopter - I feel the G Tab and Archos units are priced right based on their maturity... the Honeycomb devices surely won't be, due to their supposedly competitive positioning against the iPad. Anyway... I digress...

Related

Samsung Captivate -- Wrong Processor?

I hope I am wrong, or got the wrong phone, or maybe the phone just dynamically clocks itself down to 800 MHz instead of the advertised 1000 MHz. When I use Quadrant Standard Edition my Captivate is listed at 800 MHz with a max of 1000 MHz. But the CPU is "ARMv7 Processor rev 2"? Is that right? BogoMIPS is 797.9, and hardware is SGH-I897? Isn't it supposed to be SGS? Please let me know what you all are getting and what your Quadrant score is... my friend's Droid X destroys my Quadrant score with ~1200, while I only get 867. The benchmark really drags during the I/O sections. Help please and share your specs, because I am feeling a little disappointed.
Dynamically clocking down.
nevermind...I just saw the other thread with this issue.
THREAD ENDED
If this processor is similar in its capabilities to the Qualcomm Snapdragon, then the processor has the ability to slow down when the higher clock speed is not needed. My N1 can switch its frequency between 245 MHz and 1 GHz (or 1.1 GHz when overclocked with an upgraded kernel).
Or the software you are using is not reading the correct speed.
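If you'd rather not trust what a benchmark app reports, the kernel's cpufreq state can be read straight from sysfs with a terminal app or script on the device. A minimal sketch (these are the standard Linux cpufreq nodes; exact node names can vary slightly by kernel):

```python
# Read the current and maximum CPU clock from the standard cpufreq sysfs nodes.
def read_khz(node: str) -> int:
    with open(f"/sys/devices/system/cpu/cpu0/cpufreq/{node}") as f:
        return int(f.read().strip())

cur = read_khz("scaling_cur_freq")   # frequency right now (drops when idle)
maxf = read_khz("cpuinfo_max_freq")  # hardware/kernel maximum
print(f"current: {cur // 1000} MHz, max: {maxf // 1000} MHz")
```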
ronpinoy253 said:
I hope I am wrong, or got the wrong phone, or maybe the phone just dynamically clocks itself down to 800 MHz instead of the advertised 1000 MHz. When I use Quadrant Standard Edition my Captivate is listed at 800 MHz with a max of 1000 MHz. But the CPU is "ARMv7 Processor rev 2"? Is that right? BogoMIPS is 797.9, and hardware is SGH-I897? Isn't it supposed to be SGS? Please let me know what you all are getting and what your Quadrant score is... my friend's Droid X destroys my Quadrant score with ~1200, while I only get 867. The benchmark really drags during the I/O sections. Help please and share your specs, because I am feeling a little disappointed.
No, the specifications are right... I too ran the benchmark and was puzzled yesterday. You can find that thread in this forum.
Check this video; it explains why the Captivate's score is lower than the Droid X's. In fact, as per the video, even though the score is lower, the Captivate is actually a faster phone than the Droid X, and I am inclined to agree after watching the video.
http://www.youtube.com/watch?v=zgoQHxy-0mM
----------------
itzz(AN)dRoiD

Dual Core Snapdragon BLOWS NVidia Dual Core tegra 2 OUT OF THE WATER

the new Dual Core Snapdragon makes Nvidia's Tegra 2 look like a single core CPU!
and it's not even out of development yet, so this review is on pre-release hardware (Mobile Development Platform (MDP)) which means it's not even optimized yet!
this is Massively Impressive!
some highlights
Qualcomm Mobile Development Platform (MDP)
SoC 1.5 GHz 45nm MSM8660
CPU Dual Core Snapdragon
GPU Adreno 220
RAM (?) LPDDR2
NAND 8 GB integrated, microSD slot
Cameras 13 MP Rear Facing with Autofocus and LED Flash, Front Facing (? MP)
Display 3.8" WVGA LCD-TFT with Capacitive Touch
Battery 3.3 Whr removable
OS Android 2.3.2 (Gingerbread)
...............................................................................................
the LG 3D
LG Optimus 3D is also a dual core cpu
Dual-core 1GHz ARM Cortex-A9 processor, PowerVR SGX540 GPU, TI OMAP4430 chipset
................................................................................................
the LG 2x
LG Optimus 2X is a Dual core cpu
Dual-core 1GHz ARM Cortex-A9 processor, ULP GeForce GPU, Tegra 2 chipset
................................................................................................
the Nexus S
Nexus S is a single core cpu
(single core) 1 GHz ARM Cortex-A8 processor, PowerVR SGX540
................................................................................................
GLBenchmark 2.0 Egypt
38 Qualcomm MDP
31 LG 3D
25 LG 2x
21 Nexus S
GLBenchmark 2.0 Pro
94 Qualcomm MDP
55 LG 3D
51 LG 2x
42 Nexus S
Quake 3 FPS (Frames per second)
80 Qualcomm MDP
50 LG 2x
52 Nexus S
N/A LG 3D
Quadrant / 3D / 2D
2851 / 1026 / 329 Qualcomm MDP
2670 / 1196 / 306 LG 2x
1636 / 588 / 309 Nexus S
N/A LG 3D
NOTE: take the Quadrant scores with a grain of salt.
Here's what Anand has to say about it:
"What all Quadrant is putting emphasis on with its 2D and 3D subtests is something of a mystery to me. There isn't a whole lot of documentation, but again it's become something of a standard. The 1.5 GHz MSM8660 leads in overall score and the 2D subtest, but trails Tegra 2 in the 3D subtest. If you notice the difference between Hummingbird (SGX540) from 2.1 to 2.3, you can see how Quadrant's strange 3D behavior on Android 2.3 seems to continually negatively impact performance. I saw the same odd missing texture and erratic performance back when I tested the Nexus S as I did on the MDP. Things like this and lack of updates are precisely why we need even better testing tools to effectively gauge performance"
Source: Anandtech.com
http://www.anandtech.com/show/4243/...ormance-1-5-ghz-msm8660-adreno-220-benchmarks
Hope u enjoyed this
Ric H. (a1yet)
PS: don't rule out Nvidia yet; their dual core may have gotten blown out of the water, BUT
will their quad-core (four-core) CPU AND 12-core GPU be better?
NVIDIA's Project Kal-El: Quad-Core A9s Coming to Smartphones/Tablets This Year
Link:
http://www.anandtech.com/show/4181/...re-a9s-coming-to-smartphonestablets-this-year
a1yet said:
PS: don't rule out Nvidia yet their dual core may have gotten blown out of the water BUT
will their 12 core cpu be better ?
If you're one of those benchmark nut-riders, at least take some time to understand what it is that you're reading. It's a 12-core GPU, a big difference from a 12-core CPU, which doesn't even exist in desktop computers yet (unless you're talking about multi-socket server-class mobos), let alone in a mobile phone.
And the second point, which 99% of the people who lust after benchmarks don't have a damn clue about: screen size and resolution. But I'm sure you don't care to know much about it, OP.
I don't see the point of benchmarks if they don't tell the real world stories.
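For what it's worth, the thing that actually scales the GPU's per-frame workload is the pixel count (resolution), not the physical panel size. Rough arithmetic for the kinds of screens being argued about here (the tablet and TV figures are just typical examples, not measurements from this thread):

```python
# Pixels per frame for a few common resolutions, relative to a WVGA phone.
resolutions = {
    "WVGA phone (800x480)": (800, 480),
    "qHD phone (960x540)":  (960, 540),
    "G Tablet (1024x600)":  (1024, 600),
    "1080p TV, any size":   (1920, 1080),
}
base = 800 * 480
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px:,} px per frame ({px / base:.2f}x WVGA)")
```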
Not sure if the information is accurate; however, it will be nice to have competition so there are always better CPUs coming out.
GREAT, 'cause the iPad is killing Tegra 2 already
I think mobile processors are similar to desktop processors. There's just too much going on to accurately benchmark. My OG Droid with a 1.25Ghz overclock doesn't even come close to touching my HTC Thunderbolt on stock, yet technically it's 250Mhz faster, right? The HTC's updated 1Ghz processor is faster than other 1Ghz processors, yet rated at 1Ghz. I don't see logic in all the hype.
lude219 said:
And the second point, which 99% of the people who lust after benchmarks don't have a damn clue about: screen size and resolution. But I'm sure you don't care to know much about it, OP.
WELL my "PS:" was added in hast and I Made a typo. My whole post was about "GRAPHICS" performance so the typo did not impact the heart of my post!
sad day for you
because with your 2 brain cells u obviously have NO CLUE what u are talking about. "Screen SIZE" has no bearing on performance ! none, zero, zip, zilch!
talk to me about screen size next time I'm playing Angry Birds on my 52 inch HDTV!
the only thing that has ANY bearing on performance IS "resolution"
so to explain it in a way that u can understand
the only impact screen size has is that it sometimes allows you (depending on how the manufacturers implement it) to have a higher ....
WAIT FOR IT ...........
WAIT FOR IT ...........
"Resolution"
WOW SAD Day for you !
Go bash someone's post who can tolerate your ignorance, and leave mine alone
Sincerely
Ric H. (a1yet)
ngarcesp said:
GREAT, 'cause the iPad is killing Tegra 2 already
And the iPad 2's processor is made by Samsung.
Sent from HTC EVO
a1yet said:
WELL my "PS:" was added in hast and I Made a typo. My whole post was about "GRAPHICS" performance so the typo did not impact the heart of my post!
sad day for you
because with your 2 brain cells u obviously have NO CLUE what u are talking about. "Screen SIZE" has no bearing on performance ! none, zero, zip, zilch!
talk to me about screen size next time I'm playing Angry Birds on my 52 inch HDTV!
the only thing that has ANY bearing on performance IS "resolution"
so to explain it in a way that u can understand
the only impact screen size has is that it sometimes allows you (depending on how the manufacturers implement it) to have a higher ....
WAIT FOR IT ...........
WAIT FOR IT ...........
"Resolution"
WOW SAD Day for you !
Go bash someone's post who can tolerate your ignorance, and leave mine alone
Sincerely
Ric H. (a1yet)
I like you, pal! That's the spirit!
Forget the haters, dude, there are many around!
r916 said:
And the iPad 2's processor is made by Samsung.
Sent from HTC EVO
I don't know about it being made by Samsung, but the CPU (the CPU itself, not the whole chip) is larger than the other CPUs, thus having more space for more transistors. That significantly boosts performance.

Electopia Benchmark

For giggles, can one of you that's stock run the Electopia benchmark? There have been some interesting results, and it would be cool to see how another dual-core phone with a different CPU/GPU performs. The Sensation folks are obviously not amused.
Sensation
800x480
Average FPS: 23.65
Time: 60
Number of Frames: 1419
Trianglecount: 48976
Peak Trianglecount: 68154
960x540
Average FPS: 19.90
Time: 60.01
Number of Frames: 1194
Trianglecount: 49415
Peak Trianglecount: 67076
SGS2
Average FPS: 37.58
Time: 60.01
Number of frames: 2255
Trianglecount: 48633
Peak trianglecount: 68860
DHD
Average FPS: 23.36
Time: 60.03
Number of frames: 1402
Trianglecount: 48835
Peak trianglecount: 67628
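Side note: Electopia's "Average FPS" appears to be nothing more than total frames divided by run time; the numbers above check out exactly against the frame counts. A quick verification in Python:

```python
# Average FPS = frames / seconds, checked against the posted Electopia runs.
runs = {
    "Sensation 800x480": (1419, 60.00),
    "Sensation 960x540": (1194, 60.01),
    "SGS2":              (2255, 60.01),
    "DHD":               (1402, 60.03),
}
for name, (frames, seconds) in runs.items():
    print(f"{name}: {frames / seconds:.2f} fps")
```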
Even the Desire HD blew away my G2x on this benchmark but it could be the custom ROM... I'll switch back to AOSP and try it again.
16 FPS.
Can't be right, my Thunderbolt smoked my G2x:
26 FPS Thunderbolt vs 16 FPS G2x.
Something is very wrong with those numbers if this is supposed to be measuring OpenGL 2.0.
I'm on stock; I had a really hard time getting it to respond to touch input. With the sound off, here are the scores:
Average FPS - 15.56
Time - 60.04
Number of Frames - 934
Trianglecount - 48928
Peak Trianglecount - 68838
This was a super buggy program on the G2x. I think it is definitely not optimized for dual core or at least the Tegra 2 architecture.
Sent from my T-Mobile G2x using XDA App
There's no way the G2X would be lower than the Sensation. The test probably isn't dual-core optimized and the new CPU/GPUs are throwing it off. Thanks for trying though.
BarryH_GEG said:
There's no way the G2X would be lower than the Sensation. The test probably isn't dual-core optimized and the new CPU/GPUs are throwing it off. Thanks for trying though.
And there is no way the G2x could be lower than a single-core Adreno 205 Thunderbolt.
15.57 FPS for me running stock/not rooted. Like previously mentioned, it was very unresponsive to touch.
Badly designed benchmark programs are bad.
diablos991 said:
Badly designed benchmark programs are bad.
The sad part is that this isn't just a benchmark - it's a game first and foremost.
And yeah, I can't get past 16 FPS at stock speed OR at 1.5 GHz, so I think there are definitely coding issues, as Nenamark using Trinity on the Bionic scores 72 FPS. I think my Inspire (Adreno 205) got about 35?
+1
Lets all buy phones with top benchmarks!!!!!!
Better yet lets all get iPhones.....
Fu*k a benchmark
Sent from my LG-P999 using XDA Premium App
But if you really stop and think about it, if each of the different CPU/GPUs behaves differently running the same software because of proprietary hardware performance tweaks, we'll all be screwed in the long run. No matter how Electopia was written, one would think it would behave the same way on different CPU/GPU combinations - even if it wasn't dual-core optimized. So developers are either going to have to start testing on every CPU/GPU combo to release a single version of an app, or release different apps for different CPU/GPUs. It's way too early to tell, as dual-core and 2.3-ish isn't that common now, but it should be interesting watching software performance and development play out in the future.
BarryH_GEG said:
But if you really stop and think about it, if each of the different CPU/GPUs behaves differently running the same software because of proprietary hardware performance tweaks, we'll all be screwed in the long run. No matter how Electopia was written, one would think it would behave the same way on different CPU/GPU combinations - even if it wasn't dual-core optimized. So developers are either going to have to start testing on every CPU/GPU combo to release a single version of an app, or release different apps for different CPU/GPUs. It's way too early to tell, as dual-core and 2.3-ish isn't that common now, but it should be interesting watching software performance and development play out in the future.
It's piss-poor coding on the app developer's part - plain and simple. While there are Tegra 2-specific instructions that an app developer can use in their application, there are not any mobile OpenGL 2.0 instructions the Tegra 2 doesn't support, as far as I am aware.
If you want a good challenge for the chip, download an3dbench XL from Market. I just scored 32640 and that's with a bunch of background apps.
Isn't this a windows mobile port (had it on my HD2 running WM6.5)? So, how does it provide an accurate representation of gaming on an Android device? Since it is the only bench my G2x has scored poorly on and (more importantly) real world gaming is spectacular on this thing, I'm going to say it doesn't. I wouldn't put a whole lot of stock in this one...
Yeah agreed. I just ran it on the Nexus/CM7 AOSP hybrid and it still was only 16.06 while I got almost 40,000 on an3dbenchXL which put me like 30-something out of 7000ish results.
This application was influenced by Qualcomm specifically to run poorly on Tegra 2 devices. They messed with the shaders so everything is rendered at a weird angle. If you change the code to run with a normal approach, you see the same results on Qualcomm chips but also 3-5x perf on NVIDIA chips
Why would you say this benchmark was influenced? If you have the sources, please share so we can all look... and how can you say an3dbenchXL is a good benchmark? I have run the an3dbenchXL benchmark and seen inconsistent results on many forums... it is very unreliable... not a good benchmark. At least Electopia gives consistent, reliable results... I would go with Electopia as a GPU benchmark.
I have an Xperia Play myself - it performs superbly for gaming - awesome graphics - I love the games on it - awesome device. My wife has a G2x - which is equally good for gaming (though she just uses it for texting - LOL)....
I think for gaming both the Xperia Play and the G2x are good...
I'd hardly say it's biased to any one specific manufacturer based on these benchmarks:
Moreover, I ran it myself with the latest firmware at stock frequencies (SGS2, btw) and got:
Average FPS: 51.44
Time: 60.02
Number of frames: 3087
Trianglecount: 48827
Peak trianglecount: 68868
Quite a funny difference compared to any other device, I might say.
It's not biased towards any manufacturer, it is biased against NVIDIA's ULP GeForce GPUs in Tegra 2 SOCs.
Changes to the code cause increases in performance on Tegra 2 devices, while results on other platforms do not change.
In general, there is never a single, all-encompassing GPU benchmark to accurately compare devices. It all depends on the code, and how it interacts with the specific hardware of the device.
images |DOT| anandtech |DOT| com /graphs/graph4177/35412.png
Source: Anandtech Samsung Galaxy S2 review (I can't post links )
http://images.anandtech.com/graphs/graph4177/35412.png
That AnandTech review is badly outdated, like I said; the SGS2 got, for example, 16 fps there back in February, while I myself get 58 fps today.
And I don't think it's biased against Tegra. Tegra performs pretty much where it should be considering its age, and it corresponds to its specs.
And just to dismiss your point that Tegra gets a different codepath: I ran the Electopia bench again via Chainfire3D using the NVIDIA GL wrapper plugin emulating said device, and I'm still getting the same amount of FPS.
If what you're saying is that it's not utilizing Tegra's full potential through proprietary Nvidia OpenGL extensions, you might as well pack your bag and leave, because that logic would apply to pretty much every graphics core, since the benchmark isn't optimized for any of them. What we see here in these benchmarks is a plain, simple ES 2.0 codepath which all devices should support, so we can do an oranges-to-oranges comparison. It's also one of the most fragment-shader-heavy benchmarks out there at the moment, and less geometry- and texture-bound, and that's why it runs so badly on pretty much every chip, since they don't get this type of workload in other benchmarks. This is also why the Mali gets such high FPS, as that's where the quad-GPU setup in the Exynos can shine.
AndreiLux said:
I'd hardly say it's biased to any one specific manufacturer based on these benchmarks:
Moreover, I ran it myself with the latest firmware at stock frequencies (SGS2, btw) and got:
Average FPS: 51.44
Time: 60.02
Number of frames: 3087
Trianglecount: 48827
Peak trianglecount: 68868
Quite a funny difference compared to any other device, I might say.
Clearly the Mali-400 in the SGS2 is the most powerful GPU right now. There is a 60 fps limit on the Galaxy S2, so you'll need a demanding benchmark. You can also see that in Nenamark2: SGS2 = 47 fps, G2X = 28 fps, SGS = 24 fps.

Samsung Exynos 5250 - Arndale development board.

For all those interested in developing for the Exynos 5250, to be used in the Nexus 10, Samsung have kindly launched, for a modest sum, the Arndale development board.
http://www.arndaleboard.org/wiki/index.php/Main_Page
It has already been benchmarked on the GLBenchmark site; the Mali-T604 is powerful, but it doesn't look like it will give the A6X any headaches.
http://www.glbenchmark.com/phonedet...o25&D=Samsung+Arndale+Board&testgroup=overall
We need a proper benchmark done on a final device. I definitely can't believe the dev board drivers are optimized properly. It's running 4.0.4. We should get more details once we benchmark a final version of the N10.
hot_spare said:
We need a proper benchmark done on a final device. I definitely can't believe the dev board drivers are optimized properly. It's running 4.0.4. We should get more details once we benchmark a final version of the N10.
Jelly Bean didn't do much for graphics benchmarks, IIRC. The low-level tests won't change much; if you do a comparison with the iPad 3, you can see that the PowerVR SGX MP4 is a beast in terms of pixel/texture fill rate, which the A6X improves further. The consensus is that shader power is the most important, as long as there is sufficient fill-rate performance, and the Mali T-604 combined with its good bandwidth should be as capable as the A6X in real-world games; the only question is, will developers optimise a game for just one tablet?
Turbotab said:
Jelly Bean didn't do much for graphics benchmarks, IIRC. The low-level tests won't change much; if you do a comparison with the iPad 3, you can see that the PowerVR SGX MP4 is a beast in terms of pixel/texture fill rate, which the A6X improves further. The consensus is that shader power is the most important, as long as there is sufficient fill-rate performance, and the Mali T-604 combined with its good bandwidth should be as capable as the A6X in real-world games; the only question is, will developers optimise a game for just one tablet?
I am not saying that JB will suddenly improve GPU benchmarks, but a lot of improvement can happen due to driver/firmware optimization.
Let me give you a real example: do you recall the GLBenchmark Egypt offscreen scores of the GS2 when it came out initially? It was getting around 40-42 fps.
[Source: http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17 ]
The same GS2 was getting 60-65 fps in the same test a few months later.
Source 1: http://www.anandtech.com/show/6022/samsung-galaxy-s-iii-review-att-and-tmobile-usa-variants/4
Source 2: http://www.anandtech.com/show/5811/samsung-galaxy-s-iii-preview
It's a clear 50% improvement in performance done primarily through driver optimization.
Also, check the fill rate properly in the Arndale board test. It's much less than what is expected. ARM says that Mali-T604 clocked at 500MHz should get a fill rate of 2 GPixels/s. It's actually showing just about 60% of what it should be delivering.
http://blogs.arm.com/multimedia/353-of-philosophy-and-when-is-a-pixel-not-a-pixel/
Also check this slide : http://semiaccurate.com/assets/uploads/2012/03/Samsung_Exynos_5_Mali.jpg
Samsung says 2.1 GPixels/s with the GPU clocked at 533 MHz. Obviously the results don't match the quoted numbers; the difference is actually quite large.
I believe the final Nexus 10 numbers will be quite different from what we see now. Let's wait for final production models.
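As a rough sanity check on those fill-rate figures: assuming one pixel per core per clock on the four-core Mali-T604 (an assumption, but it reproduces both ARM's and Samsung's quoted numbers), the theoretical rate and the roughly 60% shortfall on the Arndale board work out like this:

```python
# Theoretical fill rate = cores x pixels-per-clock x clock (MHz) / 1000, in GPix/s.
def fill_rate_gpix_per_s(cores: int, pixels_per_clock: int, clock_mhz: int) -> float:
    return cores * pixels_per_clock * clock_mhz / 1000.0

arm_quote     = fill_rate_gpix_per_s(4, 1, 500)  # 2.0 GPix/s (ARM blog, 500 MHz)
samsung_quote = fill_rate_gpix_per_s(4, 1, 533)  # ~2.1 GPix/s (Samsung slide, 533 MHz)
arndale_seen  = 0.60 * arm_quote                 # "about 60% of what it should be delivering"
print(arm_quote, round(samsung_quote, 2), round(arndale_seen, 2))  # 2.0 2.13 1.2
```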

TEGRA 4 - 1st possible GLBenchmark!!!!!!!! - READ ON

Who has been excited by the Tegra 4 rumours? Last night's Nvidia CES announcement was good, but what we really want are cold, hard BENCHMARKS.
I found an interesting mention of a Tegra T114 SoC, which I'd never heard of, on a Linux kernel site. I got really interested when it stated that the SoC is based on the ARM Cortex-A15 MP - it must be Tegra 4. I checked the background of the person who posted the kernel patch; he is a senior Nvidia kernel engineer based in Finland.
https://lkml.org/lkml/2012/12/20/99
"This patchset adds initial support for the NVIDIA's new Tegra 114
SoC (T114) based on the ARM Cortex-A15 MP. It has the minimal support
to allow the kernel to boot up into shell console. This can be used as
a basis for adding other device drivers for this SoC. Currently there
are 2 evaluation boards available, "Dalmore" and "Pluto"."
On the off chance, I decided to search www.glbenchmark.com for the two board names, Dalmore (a tasty whisky!) and Pluto (planet, Greek god and cartoon dog!). Pluto returned nothing, but Dalmore returned a device called 'Dalmore Dalmore' that was posted on 3rd January 2013. The OP had already deleted the results, but thanks to Google Cache I found them.
RESULTS
GL_VENDOR NVIDIA Corporation
GL_VERSION OpenGL ES 2.0 17.01235
GL_RENDERER NVIDIA Tegra
From the system spec, it runs Android 4.2.1, with a min frequency of 51 MHz and a max of 1836 MHz.
Nvidia DALMORE
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p) : 32.6 fps
iPad 4
GLBenchmark 2.5 Egypt HD C24Z16 - Offscreen (1080p): 49.6 fps
CONCLUSION
Anandtech has posted that Tegra 4 doesn't use unified shaders, so it's not based on Kepler. I reckon that if Nvidia had a brand new GPU they would have shouted about it at CES; the results I've found indicate that Tegra 4 is between one and three times faster than Tegra 3.
BUT, this is not 100% guaranteed to be a Tegra 4 system, although the evidence is strong that it is a T4 development board. If this is correct, we have to figure that it is running beta drivers; the Nexus 10 is ~10% faster than the Arndale dev board with the same Exynos 5250 SoC. Even if Tegra 4 gets better drivers, it seems like the SGX 554MP4 in the A6X is still the faster GPU (a quick ratio check below), with Tegra 4 and the Mali T604 being an almost equal second. Nvidia has said that T4 is faster than the A6X, but the devil is in the detail: in CPU benchmarks I can see that being true, but not for graphics.
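Quick ratio check on the two offscreen numbers above (rough at best, since drivers, OS versions and run dates differ):

```python
# Egypt HD offscreen 1080p, figures quoted earlier in this post.
dalmore_fps = 32.6   # presumed Tegra 4 dev board
ipad4_fps   = 49.6
lead = ipad4_fps / dalmore_fps
print(f"iPad 4 (A6X) lead: {lead:.2f}x, about {(lead - 1) * 100:.0f}% faster offscreen")
```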
UPDATE - Just to add to the feeling that this is legit, the GLBenchmark System section lists the "android.os.Build.USER" as buildbrain. Buildbrain, according to an Nvidia job posting, is "a mission-critical, multi-tier distributed computing system that performs mobile builds and automated tests each day, enabling NVIDIA's high performance development teams across the globe to develop and deliver NVIDIA's mobile product line".
http://jobsearch.naukri.com/job-lis...INEER-Nvidia-Corporation--2-to-4-130812500024
I posted the webcache links to GLBenchmark pages below, if they disappear from cache, I've saved a copy of the webpages, which I can upload, Enjoy
GL BENCHMARK - High Level
http://webcache.googleusercontent.c...p?D=Dalmore+Dalmore+&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - Low Level
http://webcache.googleusercontent.c...e&testgroup=lowlevel&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - GL CONFIG
http://webcache.googleusercontent.c...Dalmore&testgroup=gl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - EGL CONFIG
http://webcache.googleusercontent.c...almore&testgroup=egl&cd=1&hl=en&ct=clnk&gl=uk
GL BENCHMARK - SYSTEM
http://webcache.googleusercontent.c...ore&testgroup=system&cd=1&hl=en&ct=clnk&gl=uk
OFFSCREEN RESULTS
http://webcache.googleusercontent.c...enchmark.com+dalmore&cd=4&hl=en&ct=clnk&gl=uk
http://www.anandtech.com/show/6550/...00-5th-core-is-a15-28nm-hpm-ue-category-3-lte
Is there any GPU that could outperform the iPad 4 before the iPad 5 comes out? The Adreno 320 and Mali-T604, and now Tegra 4, aren't near it. Qualcomm won't release anything till Q4 I guess, and Tegra 4 has been announced too; the only thing left, I guess, is the Mali-T658 coming with the Exynos 5450 (doubtful when it would release, and not sure it will be better).
Looks like Apple will hold the crown in the future too.
i9100g user said:
Is there any GPU that could outperform the iPad 4 before the iPad 5 comes out? The Adreno 320 and Mali-T604, and now Tegra 4, aren't near it. Qualcomm won't release anything till Q4 I guess, and Tegra 4 has been announced too; the only thing left, I guess, is the Mali-T658 coming with the Exynos 5450 (doubtful when it would release, and not sure it will be better).
Looks like Apple will hold the crown in the future too.
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC; it showed that both the CPU and GPU had a TDP of 4W, making a theoretical SoC TDP of 8W. However, when the GPU was being stressed by running a game, they ran a CPU benchmark in the background: the SoC quickly went up to 8W, but the CPU was quickly throttled from 1.7 GHz to just 800 MHz as the system tried to keep everything at 4W or below, which explained why the Nexus 10 didn't benchmark as well as we wished.
Back to the 5450, which should beat the A6X: the trouble is it has double the CPU & GPU cores of the 5250 and is clocked higher. Even on a more advanced 28nm process, which will lower power consumption, I feel that system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in a smartphone is going to suffer even more.
So why does Apple have an advantage? Well, basically money; for a start, Apple fans will pay more for their devices, so Apple can afford to design a big SoC and big batteries that may not be profitable for other companies. Tegra 4 is listed as an 80mm2 chip, the iPhone 5's SoC is 96mm2 and the A6X is 123mm2, so Apple can pack in more transistors and reap the GPU performance lead. Also, their chosen graphics supplier, Imagination Technologies, has excellent products; PowerVR Rogue will only increase Apple's GPU lead. They now have their own chip design team, and the benefit for them has been that their Swift core is almost as powerful as an ARM A15 but seems less power hungry; anyway, Apple seems happy running slower CPUs compared to Android. Until Android or WP8 or somebody can achieve Apple's margins, they will be able to 'buy' their way to GPU domination; as an Android fan it makes me sad:crying:
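To make the throttling behaviour concrete, here is a toy model of a governor that picks the highest CPU clock keeping combined CPU+GPU power under a fixed budget. All the numbers are illustrative only; this is not the actual Exynos 5250 governor.

```python
# Toy power-capping governor: drop the CPU clock until CPU + GPU fit the budget.
CPU_STEPS_MHZ = [1700, 1400, 1200, 1000, 800]
CPU_W_AT_MAX = 4.0      # CPU draws ~4 W flat-out at 1.7 GHz (per the article)
TOTAL_BUDGET_W = 6.0    # assumed overall cap, purely for the example

def throttle(gpu_load_w: float) -> int:
    """Return the highest CPU clock (MHz) that keeps the SoC under the budget."""
    for mhz in CPU_STEPS_MHZ:
        cpu_w = CPU_W_AT_MAX * mhz / 1700   # assume power scales with clock
        if gpu_load_w + cpu_w <= TOTAL_BUDGET_W:
            return mhz
    return CPU_STEPS_MHZ[-1]

print(throttle(gpu_load_w=0.5))  # light GPU load -> 1700 (no throttling)
print(throttle(gpu_load_w=4.0))  # GPU flat-out   -> 800  (CPU forced down, as observed)
```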
32 fps is a no-go... let's hope it's not final
hamdir said:
32 fps is a no-go... let's hope it's not final
It needs to, but it will be OK for a new Nexus 7
Still fast enough for me; I don't game a lot on my Nexus 7.
I know I'm talking about phones here... but the iPhone 5 GPU and the Adreno 320 are very closely matched
Sent from my Nexus 4 using Tapatalk 2
italia0101 said:
I know I'm talking about phones here... but the iPhone 5 GPU and the Adreno 320 are very closely matched
Sent from my Nexus 4 using Tapatalk 2
From what I remember the iPhone 5 and the new iPad wiped the floor with Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
adityak28 said:
From what I remember the iPhone 5 and the new iPad wiped the floor with Nexus 4 and 10. The ST-Ericsson Nova A9600 is likely to have a PowerVR Rogue GPU. Just can't wait!!
That isn't true; check GLBenchmark. In the offscreen test the iPhone scored 91 and the Nexus 4 scored 88... that isn't wiping my floors.
Sent from my Nexus 10 using Tapatalk HD
It's interesting how, even though Nvidia chips aren't the best, we still get the best game graphics because of superior optimization through Tegra Zone. Not even the A6X is as fully optimized.
Sent from my SAMSUNG-SGH-I727 using xda premium
ian1 said:
It's interesting how, even though Nvidia chips aren't the best, we still get the best game graphics because of superior optimization through Tegra Zone. Not even the A6X is as fully optimized.
Sent from my SAMSUNG-SGH-I727 using xda premium
What sort of 'optimisation' do you mean? Unoptimised games lag, and that's a big letdown, and Tegra effects can also be used on other phones with Chainfire3D - I use it and Tegra games work with the effects and without lag, and I don't have a Tegra device.
With a Tegra device I would be restricted mostly to optimised games.
The graphics performance of NVIDIA SoCs has always been disappointing, which is sad for the world's dominant GPU provider.
In the first Tegra 2, the GPU was a little bit better in benchmarks than the SGX540 of the Galaxy S, but it lacked NEON support.
In the second one, Tegra 3, the GPU is nearly the same as the old Mali-400 MP4 in the Galaxy S2 / original Note.
And now it's better, but still nothing special, and it will soon be outperformed (by the Adreno 330 and next-gen Mali).
The strongest PowerVR GPUs are always the best, but sadly they are exclusive to Apple (the SGX543, and maybe the SGX554 also; only Sony, who has a cross-licensing deal with Apple, has it, in the PS Vita and the PS Vita only).
Tegra optimization porting no longer works using Chainfire; this is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no; games that use the T3 SDK for PhysX and other CPU-side graphics work cannot be forced to work on other devices. Equally, Chainfire is now outdated and no longer updated.
Now, about PowerVR: they are only better in a real multi-core configuration, which is only used by Apple and Sony's Vita, eating large die area, i.e. actual multiple cores each with their own sub-cores/shaders. If Tegra were used in a real multi-core configuration it would destroy all.
Finally, this is really funny, all this doom and gloom because of an early, discarded development board benchmark. I don't mean to take away from Turbo's thunder and his find, but truly it's ridiculous the amount of negativity it is collecting before any kind of final device benchmarks.
The Adreno 220 doubled in performance after the ICS update on the Sensation.
T3 doubled the speed of the T2 GPU with only 50% more shaders, so how on earth do you believe only 2x the T3 scores with 600% more shaders!!
Do you have any idea how miserably the PS3 performed in its early days? Even new desktop GeForces perform much worse than expected until the drivers are updated.
Enough with the FUD! It seems this board is full of it nowadays, with so little reasoning...
For goodness sake, this isn't final hardware; anything could change. Hung2900 knows nothing, what he stated isn't true. Samsung has licensed PowerVR; it isn't just stuck to Apple, it's just that Samsung prefers using ARM's GPU solution. Another thing I dislike is how everyone is comparing a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone to a Tegra 4 which will arrive in a phone. If you check the OP's link, the benchmark was posted on the 3rd of January with different results (18 fps then 33 fps), so there is a chance it'll rival the iPad 4. I love Tegra, as Nvidia is pushing developers to make more and better games for Android compared to the 'geeks' *cough* who prefer benchmark results; what's the point of having a powerful GPU if the OEM isn't pushing developers to create enhanced-effect games for their chip?
Hamdir is correct about the GPUs: if Tegra 3 was around 50-80% faster than Tegra 2 with just four more cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's 28nm (at around 80mm2, just a bit bigger than Tegra 3 and smaller than the A6's 90mm2), along with dual-channel memory rather than the single channel on Tegra 2/3.
Turbotab said:
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC, it showed that both the CPU and GPU had a TDP of 4W, making a theoretical SoC TDP of 8W. However when the GPU was being stressed by running a game, they ran a CPU benchmark in the background, the SoC quickly went up to 8W, but the CPU was quickly throttled from 1.7 GHz to just 800 Mhz as the system tried to keep everything at 4W or below, this explained why the Nexus 10 didn't benchmark as well as we wished.
Back to the 5450 which should beat the A6X, trouble is it has double the CPU & GPU cores of the 5250 and is clocked higher, even on a more advanced 28nm process, which will lower power consumption I feel that system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in smartphone is going to suffer even more.
So why does Apple have an advantage?, well basically money, for a start iSheep will pay more for their devices, so they afford to design a big SoC and big batteries that may not be profitable to other companies. Tegra 4 is listed as a 80mm2 chip, iPhone 5 is 96mm2 and A6X is 123mm2, Apple can pack more transistors and reap the GPU performance lead, also they chosen graphics supplier Imagination Technologies have excellent products, Power VR Rogue will only increase Apple's GPU lead. They now have their own chip design team, the benefit for them has been their Swift core is almost as powerful as ARM A15, but seems less power hungry, anyway Apple seems to be happy running slower CPUs compared to Android. Until an Android or WP8 or somebody can achieve Apple's margins they will be able to 'buy' their way to GPU domination, as an Android fan it makes me sad:crying:
Well said, mate!
I can understand what you feel; nowadays Android players like Samsung and Nvidia are focusing more on the CPU than the GPU.
If they don't stop soon and continue to use this strategy, they will fail.
The GPU will become the bottleneck and you will not be able to use the CPU to its full potential (at least when gaming).
I have a Galaxy S2 with the Exynos 4 at 1.2 GHz and the Mali GPU overclocked to 400 MHz.
In my analysis, most modern games like MC4 and NFS: Most Wanted aren't running at 60 FPS at all; that's because the GPU always has a 100% workload while the CPU is relaxing, outputting only 50-70% of its total workload.
I know some games aren't optimized for all Android devices as opposed to Apple devices, but still, even high-end Android devices have a slower GPU (than the iPad 4, at least).
AFAIK, the Galaxy S IV is likely to pack the T-604 with some tweaks instead of the mighty T-658, which is still slower than the iPad 4.
Turbotab said:
There was a great article on Anandtech that tested the power consumption of the Nexus 10's Exynos 5250 SoC, it showed that both the CPU and GPU had a TDP of 4W, making a theoretical SoC TDP of 8W. However when the GPU was being stressed by running a game, they ran a CPU benchmark in the background, the SoC quickly went up to 8W, but the CPU was quickly throttled from 1.7 GHz to just 800 Mhz as the system tried to keep everything at 4W or below, this explained why the Nexus 10 didn't benchmark as well as we wished.
Back to the 5450 which should beat the A6X, trouble is it has double the CPU & GPU cores of the 5250 and is clocked higher, even on a more advanced 28nm process, which will lower power consumption I feel that system will often be throttled because of power and maybe heat concerns, so it looks amazing on paper but may disappoint in reality, and a 5450 in smartphone is going to suffer even more.
So why does Apple have an advantage?, well basically money, for a start iSheep will pay more for their devices, so they afford to design a big SoC and big batteries that may not be profitable to other companies. Tegra 4 is listed as a 80mm2 chip, iPhone 5 is 96mm2 and A6X is 123mm2, Apple can pack more transistors and reap the GPU performance lead, also they chosen graphics supplier Imagination Technologies have excellent products, Power VR Rogue will only increase Apple's GPU lead. They now have their own chip design team, the benefit for them has been their Swift core is almost as powerful as ARM A15, but seems less power hungry, anyway Apple seems to be happy running slower CPUs compared to Android. Until an Android or WP8 or somebody can achieve Apple's margins they will be able to 'buy' their way to GPU domination, as an Android fan it makes me sad:crying:
Typical "isheep" reference, unnecessary.
Why does apple have the advantage? Maybe because there semiconductor team is talented and can tie the A6X+PowerVR GPU efficiently. NIVIDA should have focused more on GPU in my opinion as the CPU was already good enough. With these tablets pushing excess of 250+ppi the graphics processor will play a huge role. They put 72 cores in there processor. Excellent. Will the chip ever be optimized to full potential? No. So again they demonstrated a product that sounds good on paper but real world performance might be a different story.
MrPhilo said:
For goodness sake, this isn't final hardware; anything could change. Hung2900 knows nothing, what he stated isn't true. Samsung has licensed PowerVR; it isn't just stuck to Apple, it's just that Samsung prefers using ARM's GPU solution. Another thing I dislike is how everyone is comparing a GPU in the iPad 4 (SGX554MP4) that will NEVER arrive in a phone to a Tegra 4 which will arrive in a phone. If you check the OP's link, the benchmark was posted on the 3rd of January with different results (18 fps then 33 fps), so there is a chance it'll rival the iPad 4. I love Tegra, as Nvidia is pushing developers to make more and better games for Android compared to the 'geeks' *cough* who prefer benchmark results; what's the point of having a powerful GPU if the OEM isn't pushing developers to create enhanced-effect games for their chip?
Hamdir is correct about the GPUs: if Tegra 3 was around 50-80% faster than Tegra 2 with just four more cores, I can't really imagine Tegra 4 being only 2x faster than Tegra 3. Plus it's 28nm (at around 80mm2, just a bit bigger than Tegra 3 and smaller than the A6's 90mm2), along with dual-channel memory rather than the single channel on Tegra 2/3.
Firstly, please keep it civil; don't go around saying that people know nothing, people's posts always speak volumes. Also, calling people geeks, on XDA, is that even an insult? Next you'll be asking what I deadlift:laugh:
My OP was done in the spirit of technical curiosity, and to counter the typical unrealistic expectations of a new product on mainstream sites, e.g. Nvidia will use Kepler tech (which was false), OMG Kepler is like a GTX 680, Tegra 4 will own the world. People forget that we are still talking about a device that can only use a few watts and must be passively cooled, not a 200+ watt, dual-fan GPU, even though they both now have to power similar resolutions, which is mental.
I both agree and disagree with your view on Nvidia's developer relationship. THD games do look nice; I compared Infinity Blade 2 on iOS vs Dead Trigger 2 on YouTube, and Dead Trigger 2 just looked richer, with more particle & physics effects, although Infinity Blade looked sharper at the iPad 4's native resolution, one of the few titles to use the A6X's GPU fully. The downside to this relationship is the further fragmentation of the Android ecosystem, as Chainfire's app showed most of the extra effects can run on non-Tegra devices.
Now, a 6-times increase in shaders does not automatically mean that games / benchmarks will scale in linear fashion, as other factors such as TMU/ROP throughput can bottleneck performance (see the sketch below). Nvidia's Technical Marketing Manager, when interviewed at CES, said that the overall improvement in games / benchmarks will be around 3 to 4 times T3. Ultimately I hope to see Tegra 4 in a new Nexus 7, and if these benchmarks prove accurate, it wouldn't stop me buying. Overall, including the CPU, it would be a massive upgrade over the current N7, all in the space of a year.
At 50 seconds onwards.
https://www.youtube.com/watch?v=iC7A5AmTPi0
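To illustrate that point about non-shader bottlenecks, here is a toy Amdahl-style estimate; the shader-bound fractions are made up purely for illustration and are not measurements of any real game.

```python
# Only the shader-bound fraction of frame time benefits from more shader cores.
def speedup(shader_fraction: float, shader_speedup: float) -> float:
    """Amdahl-style estimate of overall frame-rate gain."""
    return 1.0 / ((1.0 - shader_fraction) + shader_fraction / shader_speedup)

for frac in (0.9, 0.7, 0.5):
    print(f"{int(frac * 100)}% shader-bound, 6x shaders -> {speedup(frac, 6.0):.1f}x overall")
# Prints roughly 4.0x, 2.4x and 1.7x, which is why a "3 to 4 times T3" figure
# is plausible despite the 6x jump in shader count.
```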
iOSecure said:
Typical "isheep" reference, unnecessary.
Why does apple have the advantage? Maybe because there semiconductor team is talented and can tie the A6X+PowerVR GPU efficiently. NIVIDA should have focused more on GPU in my opinion as the CPU was already good enough. With these tablets pushing excess of 250+ppi the graphics processor will play a huge role. They put 72 cores in there processor. Excellent. Will the chip ever be optimized to full potential? No. So again they demonstrated a product that sounds good on paper but real world performance might be a different story.
Sorry Steve, this is an Android forum, or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers & marketing department, many of its users less so.
hamdir said:
Tegra optimization porting no longer works using Chainfire; this is now a myth.
Did you manage to try Shadowgun THD, Zombie Driver or Horn? The answer is no; games that use the T3 SDK for PhysX and other CPU-side graphics work cannot be forced to work on other devices. Equally, Chainfire is now outdated and no longer updated.
Looks like they haven't updated Chainfire3D for a while; as a result only T3 games don't work, but others do work: Riptide GP, Dead Trigger, etc. It's not a myth, but it is outdated and only works with ICS and Tegra 2 compatible games. I think I might have been unfortunate too, but some Gameloft games lagged on the Tegra device that I had, though root solved it to an extent.
I am not saying one thing is superior to another, just that this is my personal experience; I might be wrong, I may not be.
Tbh, I think benchmarks don't matter much unless you see some difference in real-world usage, and I had that problem with Tegra in my experience.
But we will have to see if the final version is able to push it above the Mali-T604 and, more importantly, the SGX544.
Turbotab said:
Sorry Steve, this is an Android forum, or were you too busy buffing the scratches out of your iPhone 5 to notice? I have full respect for the talents of Apple's engineers & marketing department, many of its users less so.
No, I actually own a Nexus 4 and an iPad mini, so I'm pretty neutral in Google's and Apple's ecosystems, and I'm not wiping any scratches off my devices.
