Dear guys and gals,
Found a registry key for touch prediction that, when edited, showed a marked improvement in keyboard responsiveness and small-item manipulation, i.e. classic desktop, File Explorer, etc.
The key is: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\TouchPrediction
Edit the Latency value from 8 to 2.
Edit the SampleTime value from 8 to 2.
Restart
See attached for an edited registry key to inject. Tested on two Surfaces with no ill effects.
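If you'd rather make the change by hand than import the attachment, the equivalent from an elevated command prompt should look roughly like this (a sketch; it assumes the values are the Latency and SampleTime DWORDs described above):
Code:
:: back up the key first so you can restore it later
reg export "HKLM\SOFTWARE\Microsoft\TouchPrediction" TouchPrediction-backup.reg
reg add "HKLM\SOFTWARE\Microsoft\TouchPrediction" /v Latency /t REG_DWORD /d 2 /f
reg add "HKLM\SOFTWARE\Microsoft\TouchPrediction" /v SampleTime /t REG_DWORD /d 2 /f
:: restart afterwards for the change to take effect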
Edit: To answer a few questions: this increases performance in all touch aspects of the device.
The most likely ill effect would be a decrease in battery life as the system must poll the touchscreen more often... just be aware. Otherwise, cool find.
Keyboard does seem faster... Does this also affect swiping? It seems like I can swipe in any way and get the full length of the page / app in one swipe.
Haven't noted a marked increase in battery consumption but I will monitor.
Could this improve the home key button? When my Surface is on standby it takes about six taps to get it to wake up.
Possibly; I have not tested the mod for that per se.
Dane Reynolds said:
Could this improve the home key button? When my Surface is on standby it takes about six taps to get it to wake up.
I also have not noticed an increase in battery usage on my Asus VivoTab. Not a Surface, but it runs RT.
What would decreasing the values to 1 do? I am assuming the lower the value, the better. Or did it not test well on the Surface?
Originally I chose 2 to test the battery draw. However, now that I haven't seen any significant increase in battery usage, the drop to 1 can be done.
Dadstar said:
What would decreasing the values to 1 do? I am assuming the lower the value, the better. Or did it not test well on the Surface?
Is it that easy for all the values? In other words, can all of the registry values be set to 1 to improve performance? Or are all the values a certain number for a reason? Because if a latency of 1 works better than the original 8, I don't know why Microsoft would put it at 8 in the first place. Sorry for all the questions. This stuff is interesting to me!
First off, not all values are "safe values". Some screens might be of worse quality than others (different manufacturers of parts). Setting that value to a happy medium means all screens act the same. When you lower the value, you demand that the screen read inputs faster and more often. That might not be a good idea on some devices.
I'm not only talking about the Surface. Remember, Win8 (especially Pro) will go on many different devices.
Also, if you set sampling and refresh too low, you might start getting ghost touches from minimal input that would normally not register (oversensitivity).
Best to experiment and find what's perfect for you and your device.
ruscik said:
First off, not all values are "safe values". Some screens might be of worse quality than others (different manufacturers of parts). Setting that value to a happy medium means all screens act the same. When you lower the value, you demand that the screen read inputs faster and more often. That might not be a good idea on some devices.
I'm not only talking about the Surface. Remember, Win8 (especially Pro) will go on many different devices.
Also, if you set sampling and refresh too low, you might start getting ghost touches from minimal input that would normally not register (oversensitivity).
Best to experiment and find what's perfect for you and your device.
How about the other registry values that don't really have a highest/lowest rate? For example, Disable Hotmail defaults to 2. What would changing that to 1 do?
Dadstar said:
How about the other registry values that don't really have a highest/lowest rate? For example, Disable Hotmail defaults to 2. What would changing that to 1 do?
No, there is no general rule that a lower value is better. Some of the values displayed are "face values", where 2 means 2, like refreshing 2 times a second. Sometimes 2 and 1 mean off or on (like your Hotmail setting). Remember, the PC just reads numbers. What's more, sometimes the numbers, text, or mix you see, like 8 or 4, are actually representations of some kind of code, for example hex or binary.
If you do not know what the number represents, then changing it is a guess and nothing more. Just keep a backup copy, as fiddling in the registry with drivers can have funny side effects. I once made my HD7 think I was touching it everywhere all the time, so it hung seconds after boot.
Are we sure this does anything at all? In order to test if the differences were psychological, I set the number to a ridiculously high value and it didn't seem to behave any differently.
Yup, I found noticeable differences in fine touch control, including in Registry Editor, window controls, etc.
Wupideedoo said:
Are we sure this does anything at all? In order to test if the differences were psychological, I set the number to a ridiculously high value and it didn't seem to behave any differently.
Thanks a lot!
Is 2 a good value in the case of the Surface Pro 1?
"touch prediction" did prediction, not pooling!
"Latency" = how much milisecond to look ahead
"SampleTime" = the period in milisecond to average your finger's motion
The effect is thus:
Larger "latency" make the pointer overshoot, smaller "latency" make the pointer lag behind (1 - 100 milisecond depending on your system performance).
There's no penalty on your tablet's battery or digitizer's life for turning TouchPrediction off, and you don't need to restart to see the effect. (try finger drawing in MS Paint to see effect)
If your Surface missed touch, then try to cool the back of your tablet. It might be thermal throttling.
Related
I am having an issue whereby my touch screen is overly sensitive. For instance, when using the calculator, I "push" a button and it rapidly enters the number I pressed three times. Or, I'm adding a city to the weather tab and push United States and the next thing I know, it's added Abilene, TX (the first city on the list). Another example is I tap the programs tab icon (it's tab #5 on my Fuze) and end up with it trying to decide if I've pushed music or photos (tabs 7 & 8 that end up near the same position the programs tab was in before I tapped it).
It is not consistent but happens with enough frequency that I thought I had over tweaked the settings somehow. I went into Diamond Tweak and set the TF3D sensitivity and finger pressure settings back to normal. It's still doing it. I checked Advanced Config and it's showing a pressure threshold of 18866, finger pressure of 2908, high scroll speed of 0 and low scroll speed of 14. These all show they are custom settings but I don't recall changing them. TF3D Config has TouchFLO performance 2 enabled but none of the others.
After restarting (I changed something in TF3D config), advanced config is now showing pressure threshold of 34, finger pressure of 2908, high scroll speed of 25 (default) and low scroll speed of 14.
Am I correct that it's overtweaked? What should those settings be in Advanced Config? Has anyone else experienced this? Do I need to consider exchanging my Fuze?
Thanks,
Joe
I can't tell you what the setting should be, but I would definitely try a hard reset before returning the device. If that doesn't work, it's probably hardware.
I have a high scroll speed of 25 (which is default), low scroll speed of 70 (also default), and my pressure threshold and finger pressure are set extremely high... dunno if that will help, but I don't have any screen problems.
Is there any speed-governor app for the Xoom that can be configured to lock the CPU to 1000MHz whenever the soft input area is active (or better yet, whenever Graffiti input is active), and/or a way to increase the digitizer sample rate?
Historically, Graffiti has been totally unusable on my Xoom. Literally, so low of a sample rate, and so many errors, that I just couldn't use it. I finally got around to unlocking and reflashing my Xoom to CM10 last night, and locking the CPU to 1000MHz makes it work a lot better... but the accuracy is still a cruel joke compared to even my creaky, old Hero overclocked to 711MHz.
It's pretty sad, actually. On the Hero, the digitizer seems to be reporting samples at least 4-16 times as often, and I can get nearly 100% accuracy without even trying. On the Xoom locked to max speed, it seems to do a tiny bit better than my S3 gets with stock, but the sample rate still appears to be absurdly low compared to what it was on the Hero, and feedback seems to lag the actual touch by at least 100-200ms. On the Hero, feedback was literally instant... stroke, and see the pixels turn white INSTANTLY under my fingertip. On the Xoom (locked to max), they start turning white a fraction of a second after I touch the screen, and I can see the last bit of the stroke render a fraction of a second after I lift my finger away. With the stock Xoom rom, it was more like, "draw the character, and see a jagged impression of it sputter into existence about a half-second later... maybe, MAYBE even getting recognized correctly about 70% of the time".
I'm guessing that either the Xoom's digitizer has a limited sample rate, or something in the kernel or driver is limiting the sample rate... but I'm still trying to find a straight answer somewhere about whether/how you can build a custom kernel without losing the ability to run paid Market apps. Or whether it's even necessary to go to that extreme, as opposed to something like a setting that tells Android to increase the sample rate, or not to throttle the CPU when an input area is active, or maybe a way to let something like SetCPU identify "soft input area active" as a profile-triggering condition. I'm also pretty sure that the Xoom's kernel (if not recent versions of Android itself) tries to treat the existence of a soft input area as an excuse to massively throttle the CPU, on the theory that it's just displaying a picture of a keyboard and waiting for a blunt press. HOWEVER, I'm SURE there HAS to be an equally official way of defeating that behavior, if only because it would also screw up Android's ability to handle East Asian input methods.
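For anyone who wants to experiment with the CPU-lock half of this without a dedicated app, here is a minimal sketch over a root shell. It assumes a kernel exposing the standard cpufreq sysfs nodes; the paths and the 1000000 kHz step may differ per device/kernel.
Code:
# pin cpu0's minimum frequency to 1 GHz (cpufreq values are in kHz)
echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
# or just force the performance governor while testing
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor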
I customized the luminosity curve for my Surface RT by editing registry keys.
Download, extract, and run the attached .reg file, confirm the insertion of the keys, then reboot the system.
Be sure to make a backup of the old keys if you want to be able to restore the previous state.
This change should be compatible with all Windows 8 devices, but I have only tested it on the Surface.
Adaptive brightness varies in proportion to the user's manual brightness. I suggest manually setting the brightness bar to about 20%; if you place the bar at 0, the brightness will stay minimal with no automatic adaptation.
Appreciated, thanks!
New version v2
New improved version.
Try it and tell me if it's okay.
In the zip you will find the normal version and one with more brightness in the dark.
I suggest manually setting the brightness bar to about 25%.
So you have a fix, that's great. But to what? You didn't state what the problem is, nor did you say what you are doing differently from the default values. Why would I, or anybody else for that matter, want to download this?
Amax said:
So you have a fix, that's great. But to what? You didn't state what the problem is, nor did you say what you are doing differently from the default values. Why would I, or anybody else for that matter, want to download this?
On my Surface the stock luminosity curve does not satisfy me.
The display seemed to have only three levels of brightness. With an average value set (i.e., on the desk in the room in the morning), the adaptation did not fit low-light (night) or very bright conditions. That is to say, the brightness in the dark was not at its minimum, which was annoying to look at and an unnecessary drain on the battery, while at moderately shaded light levels the brightness easily shot to maximum, again wasting battery.
This forced me to move the brightness bar manually all the time, but now with my calibration I no longer touch it, because it adapts automatically to any light condition.
Also, the brightness adaptation used to occur 3 seconds after the change in light, whereas now it changes almost instantly, in 0.1 seconds (100 ms).
I like it a lot, just what I was looking for.
I mostly use my Surface in low-light conditions, so it is very useful.
Just one remark: it is too sensitive, so it changes screen brightness very quickly even when I'm just touching the upper part of the screen and casting a little shade on the light sensor...
So I think 3 msec would be better than 1 msec.
Would you please make a third version of the settings with 3 msec?
Alapar said:
I like it a lot, just what I was looking for.
I mostly use my Surface in low-light conditions, so it is very useful.
Just one remark: it is too sensitive, so it changes screen brightness very quickly even when I'm just touching the upper part of the screen and casting a little shade on the light sensor...
So I think 3 msec would be better than 1 msec.
Would you please make a third version of the settings with 3 msec?
100 ms, not 1 ms! Anyway, this file changes only the time, to 300 ms.
antys86 said:
100 ms, not 1 ms! Anyway, this file changes only the time, to 300 ms.
You are right, 100 ms. I was in a hurry. Thanks for the fast response and update.
Your v2 seems to be working fine for me. Before applying it I could really tell when my Surface was adjusting the screen; after applying it, the transitions seem smoother and less abrupt.
I have been looking for documentation about how all of these things really work, and the closest thing I could find was this link: http://superuser.com/questions/644538/customize-adaptative-brightness-in-windows-8
but they seem to be using a different registry location and different value names than you are.
Regardless, your settings seem to take effect immediately after restarting the sensor service.
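In case it helps anyone else, this is roughly how to restart it without a reboot, from an elevated prompt (a sketch; it assumes the adaptive-brightness logic is hosted by the Sensor Monitoring Service, SensrSvc, as on stock Windows 8):
Code:
:: restart the sensor service so the new curve is picked up
net stop SensrSvc
net start SensrSvc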
I believe there is a great deal of confusion, or a lack of technical explanation, here in the community when we discuss the hows, whys, and whats behind the things we choose to modify in the Android OS in an attempt to squeeze better performance from a very complex operating system. Many of the things I see presented to users are focused on very ineffective and ancient mentalities, pertinent to older versions of the operating system. Much of this is attempted through modifying build properties, and that's usually about where it stops. My objective here is to describe some of the ins and outs of tuning a mobile operating system such as Android, and to look at it in a different light - not the skin you lay on top of it, but as advanced hardware and software, with many adjustable knobs you can turn for a desired result.
The key players here are usually, without fail, a couple of things alone:
Debloating – which, I suppose, is an effective way to reduce the operating system’s memory footprint. But I would then ask, why not also improve the operating system’s memory management functions?
“Build prop tweaks” – which is a file where you can apply very effective changes like the ones presented in my post_boot file (the only difference being when they are executed, and how they are written out), but most of the “tuning” done here focuses on principles that were only once true and are therefore mostly irrelevant in today’s latest versions of Android. There are many things within the build.prop that can (and sometimes should) be altered to directly impact the performance of the DVM/JVM. However, this is almost always untouched. Every now and then, somebody will throw a kernel together with some added schedulers, or some merged sound drivers, etc., but there is really little to no change that would affect real-time performance.
So, what about the virtual machine? What about the core operating system? – what Android actually is – Linux.
Many of you have been pretty blown away by how effective some simple modifications to just 1 shell file on your system have been at improving your experience as a user. Your PMs, posts, and comments in my inbox/thread are telling enough about the direct impact on battery life. These are differences you can feel, see, and quantify, because the changes made within that file directly impact functional aspects of the hardware, throughput/latency, and most importantly, the device’s memory management (which is so complex you could literally write a book about it… and books about it do exist – they are very long books).
So, how did we manage to make your device feel like it was reborn with just 1 file and not an entire ROM? That ROM you were on suddenly was not so stock-feeling, right? Not to say those ROMs were stock – they were, indeed, modified. But the core operating system was, for the most part, largely untouched. Maybe you had a little more free RAM because of the debloating but, really, that was about all of the effect you saw/felt.
My aim here is to talk about, at a medium to in-depth level, what exactly went into that 1 file that turned the performance corner for your device. For the sake of keeping it to the important points, I’ll cover the 3 most important (as titled in my main thread): your CPU, IO, and RAM (VM). Part 2 will cover IO, and Part 3 will cover the nuts and bolts of the RAM (VM).
Let’s look at a snippet of some code from the portion of the file where most of the CPU tuning is achieved, we’ll use cluster two’s example (bear in mind, the methodology here was used for cluster 1 as well [your smaller cores were treated the same]):
Code:
# configure governor settings for big cluster
echo "interactive" > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo 1 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/use_sched_load
echo "10000 1536000:40000" > /sys/devices/system/cpu/cpu4/cpufreq/interactive/above_hispeed_delay
echo 20 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/go_hispeed_load
echo 10000 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/timer_rate
echo 633600 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/hispeed_freq
echo 1 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/io_is_busy
echo "40 864000:60 1248000:80 1536000:90" > /sys/devices/system/cpu/cpu4/cpufreq/interactive/target_loads
echo 30000 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/min_sample_time
echo 0 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/max_freq_hysteresis
echo 70 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/gpu_target_load
So what did I do here? Well, let’s start by explaining the governor, and then its modules.
Interactive: the interactive governor, in short, works based on timers and load (or tasks). When the timers tick and the CPU is polled, the governor decides how to respond to the load it sees, with consideration taken from its tunables. Because of this, interactive can be extremely exact when handling CPU load effectively. If these tunables are dialed in properly, according to usage and hardware capability, what you achieve is maximum throughput for an operation, at a nominal frequency for that specific task, with no effective delay experienced in the UI. Most of the activity seen in an Android ecosystem is short, bursty usage, with the occasional sustained load-intensive operation (gaming, web browsing, HD video playback and recording, etc.). Because of this unique user interaction with the device, the default settings for interactive are usually a little too aggressive for a nominal experience – nominal meaning not “over-performing” to complete the task and wasting precious CPU cycles on a system that is not always near an outlet. The interactive tunables:
use_sched_load: when this value is set to 1, the timer windows (polling intervals) for all cores are synchronized. The default is 0. I set this to 1 because it allows evaluation of current system-wide load, rather than core-specific load. A small but very important change for the GTS (global task scheduler).
above_hispeed_delay: when the CPU is at or above hispeed_freq, wait this long before increasing frequency (the value is in microseconds). The values called out here will always take priority, no matter how busy the system is. Notice how I tuned this particular setting to allow an unbiased ramp up until 1.53 GHz, which then calls for a 40 ms delay before allowing an increase. I did this to handle the short bursts quickly and efficiently as needed, without impacting target_loads (the module, in this way, allows the governor free range to roam according to load, then forces it to wait if it wants to utilize those super-fast but power-costly speeds up top). However, sustained load (like gaming, or loading web pages) would likely tax the CPU for more than 40 ms. The default setting here was 20000 (20 ms). You can represent this expression as a single value, followed by a CPU speed and a delay for that speed, which is what I did at the 1.53 GHz range. I usually design this around differences in voltage usage per frequency when my objective is more to save power while sacrificing more performance.
go_hispeed_load: when the CPU is polled and overall load is determined to be above this value (which represents a percentage), immediately increase CPU speed to the speed set in hispeed_freq. The default value here was 99. I changed it to 20. You’ll understand why in a second.
timer_rate: the interval (in microseconds) at which to check CPU load across the system (keep in mind use_sched_load). The default was 20000 (20 ms). I changed it to 10000 (10 ms) to check more often, and to reduce the stacked-up delay the timer rate causes with other tunables.
hispeed_freq: counterpart to go_hispeed_load. Immediately jump to this frequency when that load is reached. The default here, in Linux, is whatever the max frequency is for the core. So, it would have been 1.8 GHz when load hit 99%. I changed this value to the next speed above minimum for both the A53 and A57 clusters. The reason I did this was to respond appropriately to tiny bits of thread usage here and there, which minimizes the probability that the CPU will start overstepping. There are a lot of small tasks constantly running, which could allow the 384 MHz frequency to be overwhelmed by some consistent low-taxing operation. The trick with this method of approach is to stay just ahead of the activity, ever so slightly, to increase efficiency while removing latency for those smaller tasks. There is no hit in power by doing this. This principle of approach (on a broad and overall scale, even) is how I use interactive to our advantage. I remove its subjective behavior by telling it exactly where to be for a set amount of time based on activity alone. There are no other variables. “When CPU load is xxxx, you will operate within these windows (speeds) alone.”
Keep in mind, with some of this, I am just giving you default values and examples… The original file that LG or Qualcomm or whoever placed in there had done their own weird crap with this stuff that didn’t make any sense whatsoever. Timer intervals were not divisible into one another; there was little logic and reason behind it.
io_is_busy: when this value is set to 1, the interactive governor evaluates IO activity, and attempts to calculate it as expected CPU load. The default value is 0. I always set this to 1, to allow the system to get a more accurate representation of anticipated CPU usage. Again, that “staying ahead of the curve” idea is stressed here in this simple but effective change.
target_loads: a general, objective tunable. The default is 90. This tells the governor to try to keep CPU load below this value by increasing frequency until <90 is achieved. This can also be represented as a dynamic expression, which is what I did. In short, mine says “do not increase CPU speeds above 864 MHz unless CPU load is over 60%... do not increase CPU speeds above 1.24 GHz unless CPU load is over 80%” and so on… So, you can see how we are starting to address the “activity vs. response” computing conundrum a little more precisely. Rather than throw some arbitrary number like 90 out there, I specifically utilize a frequency window with a percentage of system-wide usage or activity. This is ideal, but takes careful dialing in, as hardware is always different. Some processors are a little more efficient, so lower speeds are OK for a given load when compared to another processor. Understanding the capability of your hardware to handle your usage patterns appropriately is absolutely critical to get this part right – the objective is not to overwork, or underwork, but to do just the right amount of work. Turn small knobs here and there, then watch how much time your CPU spends at a given speed (a quick way to check this is sketched after this list), and compare that with the real-time performance characteristics you observe. Maybe there is a little more stuttering in that game you play after this last adjustment? OK, make it slightly more aggressive, or let the processor hang out a bit more at those high/moderately high speeds.
min_sample_time: this is an interval (in microseconds) which tells the CPU to “wait this long” before scaling back down when you are not at idle. This is to make sure the CPU doesn’t scale down too quickly, only to then have to spin right back up again for the same task. The default here was 80000 (80 ms), which is way too aggressive IMO; your processor, stock, would hang for 80 ms at each step on its way down. 30 ms is plenty of time for consistent high load, and just right for short, bursty bits of activity. The trick here is balancing response, effectiveness, and acceptable drain on power, with consideration to nominal throughput for an execution.
max_freq_hysteresis: this only comes into play when the maximum frequency is hit. It tells the governor to keep the core at the maximum speed for this long, PLUS min_sample_time, before it is allowed to scale down. The default value was 3, if I remember correctly, which meant that every time your CPU hit max, it hung there arbitrarily, regardless of load; I zeroed it out.
gpu_target_load: the GPU will scale up if CPU load is above this value. Default is 90. This module attempts to anticipate GPU activity based upon CPU activity. It works in parallel with the GPU’s own governor algorithms and each cluster of cores has its own tunable for this controller.
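As mentioned under target_loads, a quick way to sanity-check your dial-in is to compare the time-in-state counters before and after a change. A sketch: it assumes cpufreq stats are compiled into your kernel, and uses cpu4 as the big cluster’s policy owner, matching the snippet above.
Code:
# time (in 10 ms ticks) spent at each frequency since boot;
# note the values, run your workload, then read again and compare
cat /sys/devices/system/cpu/cpu4/cpufreq/stats/time_in_state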
Stand by in the near future for the write-up on IO and RAM management.
Very nice write up. While the tunables for various governors are a bit out of my range of expertise, the explanation here almost makes me want to play with them to fine tune my system to my usage.
What would you say is the maximum percentage of battery life one would expect to increase by? I read an article here a few years ago where someone had the idea that regardless of tweak, you won't increase your battery life by more than 2%, which is pretty small. I wasn't sure how accurate this statement was, but I am always up for improving my battery life, although Marshmallow has done wonders for it in comparison to Lollipop.
freeza said:
Very nice write up. While the tunables for various governors are a bit out of my range of expertise, the explanation here almost makes me want to play with them to fine tune my system to my usage.
What would you say is the maximum percentage of battery life one would expect to increase by? I read an article here a few years ago where someone had the idea that regardless of tweak, you won't increase your battery life by more than 2%, which is pretty small. I wasn't sure how accurate this statement was, but I am always up for improving my battery life, although Marshmallow has done wonders for it in comparison to Lollipop.
The maximum percentage of battery life “savable” by doing this type of thing is really going to depend on a lot of variables. In testing, it is most important to first establish a baseline and a valid method to measure your outputs. A model/example would be: I am going to charge my phone to 100%, have it in airplane mode, and only have these 3 apps installed to run the tests. After I charge it, I am going to leave the display on for an hour straight without interacting with the device, then let it sleep for 4 hours, wake it back up, open and close app #1 150 times, then let it sleep again, etc. Maybe, for the sake of merely evaluating CPU usage, you would disable location, turn off auto brightness, and set the display at minimum, all for the sake of creating a repeatable environment each time you make a change and want to measure the impact of that change… you see where I am going with this. Removing variables to quantify the impact of the changes you made would be critical.
What I am getting at is that I seriously doubt the number this individual threw out (2%) has any real merit, for several reasons.
The first and foremost reason is that he probably didn’t run a valid set of tests to come to that number. However, even if he did, the potential to save power becomes greater as hardware becomes more and more efficient through technological advances. Chips are not what they were even a year ago in that respect. The Snapdragon 820, for example, or some of the newer Exynos chips from Samsung – all of which use the 14 nm FinFET process – are extremely efficient in power management.
Displays, all of these things – the impact of “trim a little here, save a little there” becomes more noticeable over extended periods of time.
To put this into a different perspective, where the principle still applies, look at how the efficiency of car engines is affected by drivetrains. Your car has a rated HP, and the power translated to the ground is some percentage of that rating. That percentage is constant: no matter how much HP your car has, the drivetrain is only (e.g.) 80% efficient at translating that power directly to the pavement. So, your car has 200 HP, 80% is translated to the pavement, meaning its wheel HP is 160. If you increase your car’s HP to 1000 (engine rated), it is transferring 800 HP to the pavement. Now, make that drivetrain just 5% more efficient… The difference at 200 (engine-rated) HP is only 10 HP, but the difference at 1000 is 50. You are getting 5 times more bang for your buck.
It is important to note that while the percentage is the same in that example, it is merely an example. Horsepower there could be translated into “extra time off the charger” – not a percentage of battery life, but a percentage increase in screen-on time. If it is proportionate, you are talking about maybe an extra 15 or 20 minutes of physical interaction with the device before it needs to be plugged in. Again, these are just examples, but the overall impact can be dramatic on a system that is already doing very well at providing the user a long duration of screen-on time before it needs to be connected to a wall.
Another example would be gas mileage... this might be more relevant to what we are talking about. Imagine you have a car with a big, mean V8 that literally gets 7 MPG. It is way overdue for an oil change, and the old oil is now causing the engine to run just slightly less efficiently. Well, the car would likely run out of gas before you even noticed the loss in gas mileage, because it is already an inefficient mechanism when it comes to saving gas.
Take another car that has a rated 60 MPG. Now imagine it is also overdue for an oil change. With the smallest bit of inefficiency in the engine, added weight to the vehicle, or worse aerodynamics… you will certainly see the effects far more, since its expected distance on a single tank of gas is 650 miles, as opposed to the big V8 that can only go 80 total.
Imagine these two cars as older and newer processor technology. The newer technology has greater potential for power saving simply because its baseline is already a fairly efficient platform. A small change will take it a greater distance. That car going 650 miles on one tank of gas… well, you’ll notice if its MPG drops by 3%, because you’ll be filling up around 630 miles.
In summary, if my phone is going to die after 2 hours off the charger anyway, because it has a small battery or its display is chewing up 95% of the overall power draw, then yes, you are pretty much wasting your time playing with hardware settings. But that is not the case anymore. Mobile devices are the opposite – very efficient. Which means there is greater potential to minimize their power consumption by tuning, say, a CPU governor to not overreact to activity initiated by the user.
Again, that 2% statement is very subjective… it means nothing. 2% could be 20 extra minutes of a phone call, 15 minutes of screen-on time, etc. You see my point.
Here's what I've found related to slow scrolling jitter and the touchscreen. When you first open an app, the very first couple slow scrolling swipes produce very smooth screen animation. It will then get jittery but if you exit the app, then reopen, the smoothness will return. Do this experiment in Contacts app to see what I mean.
Now I found this app called "Touch MultiTest" which reads out the touchscreen sample rate as you move your finger on the screen. When you first open it and do a swipe, you see smooth tracking and a solid sample rate reported greater than 120 Hz. However after a couple swipes the dot response becomes jittery and sample rate drops to something around 100 Hz. Closing and reopening the app gets you back to 120 Hz.
So I think this proves the hardware and software touch loop can produce smooth motion, and it's really sampling at 120 Hz. The big question is what exactly degrades after a couple swipes. In the best case it's some driver or software buffer / interrupt handling that degrades. In the worst case it's related to low level hardware issues. I'm hopeful it's software related. By the way somehow Chrome browser always scrolls smoothly with slow swipes. What is Chrome doing differently than all other apps? Just filtering?
Scrappy1 said:
Here's what I've found related to slow scrolling jitter and the touchscreen. When you first open an app, the very first couple slow scrolling swipes produce very smooth screen animation. It will then get jittery but if you exit the app, then reopen, the smoothness will return. Do this experiment in Contacts app to see what I mean.
Now I found this app called "Touch MultiTest" which reads out the touchscreen sample rate as you move your finger on the screen. When you first open it and do a swipe, you see smooth tracking and a solid sample rate reported greater than 120 Hz. However after a couple swipes the dot response becomes jittery and sample rate drops to something around 100 Hz. Closing and reopening the app gets you back to 120 Hz.
So I think this proves the hardware and software touch loop can produce smooth motion, and it's really sampling at 120 Hz. The big question is what exactly degrades after a couple swipes. In the best case it's some driver or software buffer / interrupt handling that degrades. In the worst case it's related to low level hardware issues. I'm hopeful it's software related. By the way somehow Chrome browser always scrolls smoothly with slow swipes. What is Chrome doing differently than all other apps? Just filtering?
Have you tried contacting Essential or possibly using their beta feedback form to tell them about your theory/findings?
Our screens sample at 60Hz. We already know this from the AMAs on Reddit. The test app you're using is inaccurate if it reads 120Hz or even 100Hz.
60Hz sampling in and of itself shouldn't be a problem either, since iPhones (except for the newest ones) sample at 60Hz and everyone knows how smooth they are.
Hopefully there's not some other hardware flaw and it's just Essential's software.
ChronoReverse said:
Our screens sample at 60Hz. We already know this from the AMAs on Reddit. The test app you're using is inaccurate if it reads 120Hz or even 100Hz.
60Hz sampling in and of itself shouldn't be a problem either, since iPhones (except for the newest ones) sample at 60Hz and everyone knows how smooth they are.
Hopefully there's not some other hardware flaw and it's just Essential's software.
I don't put much stock in the AMA response since it's so vague and nonspecific; it could be referring to the screen refresh rate (60 Hz) either intentionally or accidentally.
If new iPads and iPhones sample at 120 Hz, it's entirely possible the Essential panel is sampling at 120 Hz.
Try using Touchscreen Benchmark to test and you'll be able to verify the actual samples per second. As a point of comparison, the Galaxy S4 samples at 90Hz and the Shield tablet does a whopping 180Hz!
In any case, it's easy to see that it's not sampling at 100Hz or 120Hz simply by looking at the number of touch samples that actually appear on the screen. Try it on a faster phone and you can see the higher density of touch responses.
Furthermore, you can't reliably discern the sample rate in the first second so trusting the app saying it's 120Hz and dips to 100Hz is even less reliable than the AMA.
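If you want to take the apps out of the equation entirely, you can also read the raw input events over adb and eyeball the timestamp spacing yourself (a sketch; event2 is just a placeholder for whatever node your touchscreen turns out to be):
Code:
# list input devices to find the touchscreen node
adb shell getevent -p
# dump labeled, timestamped events; during a slow swipe, the gaps between
# successive ABS_MT_POSITION_X reports give the real sample period
adb shell getevent -lt /dev/input/event2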
ChronoReverse said:
Try using Touchscreen Benchmark to test and you'll be able to verify the actual samples per second. As a point of comparison, the Galaxy S4 samples at 90Hz and the Shield tablet does a whopping 180Hz!
In any case, it's easy to see that it's not sampling at 100Hz or 120Hz simply by looking at the number of touch samples that actually appear on the screen. Try it on a faster phone and you can see the higher density of touch responses.
Furthermore, you can't reliably discern the sample rate in the first second so trusting the app saying it's 120Hz and dips to 100Hz is even less reliable than the AMA.
I invite anyone to do my test and decide for themselves or measure and produce new data. That's what I'm going for here. Not regurgitation of bland statements.
Scrappy1 said:
I invite anyone to do my test and decide for themselves or measure and produce new data. That's what I'm going for here. Not regurgitation of bland statements.
I just invited you to use a different test instead of relying on one that doesn't spit out reasonable numbers.
Does it make more sense that Essential is potentially using a 120Hz touchscreen which it won't confirm despite that being a feather in its cap (since even iPhones only got 120Hz recently), or does it make more sense that Essential is using a slower-than-average (for Android) panel whose input its software isn't filtering as well as Apple's does? Which is more likely to cause jitter and touch latency?
ChronoReverse said:
I just invited you to use a different test instead of relying on one that doesn't spit out reasonable numbers.
Does it make more sense that Essential is potentially using a 120Hz touchscreen which it won't confirm despite that being a feather in its cap (since even iPhones only got 120Hz recently), or does it make more sense that Essential is using a slower-than-average (for Android) panel whose input its software isn't filtering as well as Apple's does? Which is more likely to cause jitter and touch latency?
It's actually that you're misunderstanding terminology...
You're mistaking sample rate for refresh rate...
Refresh rate is how many times per second the screen is redrawn...
Sample rate is how many times per second the screen reads touches...
No way you can tell the difference between 120 Hz and 100 Hz.
rignfool said:
It's actually that you're misunderstanding terminology...
You're mistaking sample rate for refresh rate...
Refresh rate is how many times per second the screen is redrawn...
Sample rate is how many times per second the screen reads touches...
No, I'm referring to the touchscreen. Obviously the Essential LCD only refreshes at 60Hz (only the Razer and iPad Pro refresh at 120Hz), but the touchscreen also samples at 60Hz, which is common for lower-end Androids (90Hz and 120Hz are the other common sampling rates found in Android devices).
The new iPhone X's OLED still refreshes at 60Hz but has a 120Hz-sampling touchscreen, which is higher than the 60Hz it used to be in other iOS devices (except for the iPad Pro). I also mentioned the Shield tablet sampling at 180Hz, and there's no mobile device with a screen refresh that fast either.
LNJ said:
No way you can tell the difference between 120 Hz and 100 Hz.
The drop to 100 Hz after a couple of seconds is "indicative of the problem"; it's not that a 100 Hz rate would not be smooth in a properly designed device. Something comes unhinged at the point we see the drop to 100 Hz. It could be that the touch buffer / event queue is not being serviced fast enough due to a low-level driver or hardware. It could also be some piece of software in the critical path starting to consume more time than allowed, leading to non-uniform response. It could even be actual stuttering of the hardware.
When you exit and then restart an app, the touch event pipeline is flushed, so things are fixed again for a couple of seconds.
YouTube app
Scrappy1 said:
Here's what I've found related to slow scrolling jitter and the touchscreen. When you first open an app, the very first couple slow scrolling swipes produce very smooth screen animation. It will then get jittery but if you exit the app, then reopen, the smoothness will return. Do this experiment in Contacts app to see what I mean.
Now I found this app called "Touch MultiTest" which reads out the touchscreen sample rate as you move your finger on the screen. When you first open it and do a swipe, you see smooth tracking and a solid sample rate reported greater than 120 Hz. However after a couple swipes the dot response becomes jittery and sample rate drops to something around 100 Hz. Closing and reopening the app gets you back to 120 Hz.
So I think this proves the hardware and software touch loop can produce smooth motion, and it's really sampling at 120 Hz. The big question is what exactly degrades after a couple swipes. In the best case it's some driver or software buffer / interrupt handling that degrades. In the worst case it's related to low level hardware issues. I'm hopeful it's software related. By the way somehow Chrome browser always scrolls smoothly with slow swipes. What is Chrome doing differently than all other apps? Just filtering?
I have noticed that if you launch the camera and then open the YouTube app or whatever you're using where you can see the touch scrolling jitters, the touch scrolling is nice and smooth. Then after some time it comes back. The touch scrolling in Chrome is perfect and I wish it was the same everywhere. For some reason the YouTube app performs the worst for me. Chrome must have received an update a while back since I used to get bad touch scrolling on that too. The thing that worries me is some claim touch scrolling is perfectly smooth on their device. Hopefully that's a case of them not noticing it and not a case of actual hardware differences.
mhajii210 said:
I have noticed that if you launch the camera and then open the YouTube app or whatever you're using where you can see the touch scrolling jitters, the touch scrolling is nice and smooth. Then after some time it comes back. The touch scrolling in Chrome is perfect and I wish it was the same everywhere. For some reason the YouTube app performs the worst for me. Chrome must have received an update a while back since I used to get bad touch scrolling on that too. The thing that worries me is some claim touch scrolling is perfectly smooth on their device. Hopefully that's a case of them not noticing it and not a case of actual hardware differences.
Cool tip! I hadn't noticed that. Opening the camera then switching to Contacts had me scrolling smoothly for many minutes. However, after a few rounds of tests it lost the magic; I could no longer use opening the camera first to produce the smooth scrolling. So there are several factors at play here, and this could use more investigation. Most of all, though, this gives me hope the issue can be totally fixed in software.
I'm starting to think the thing that goes bad and causes choppiness is the rendering pipeline. I enabled "Profile GPU Rendering" and then did a screen capture after scrolling my battery stats in settings for both 1) good condition just after launching settings when scrolling is smooth and 2) bad condition that kicks in after a few seconds when things get choppy. The bad condition shows vastly inflated rendering time which blows the 60 FPS (green line) budget. The largest increase is in red (command issue), but EVERYTHING is inflated in the bad condition. What could cause this?
The captures of the good and bad conditions are attached.
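For anyone who wants numbers instead of the on-screen bars, the same data can be pulled over adb (a sketch, assuming the stock Settings package name and Android 6.0+ for framestats):
Code:
# per-frame render timings; compare the totals against the 16.7 ms/frame budget
adb shell dumpsys gfxinfo com.android.settings framestats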
Turns out the rendering pipeline starts taking so long because the application thread moves from the high-performance CPU cluster to the low-performance CPU cluster. Using the paid version of System Monitor I opened a floating window of CPU load and frequency. I then again opened battery settings and scrolled around in the good and bad states. I can see the CPU load is on the high-performance cluster right away (cores 5-8), and those guys are running at 2.4 GHz; hence everything is smooth. When the jitters set in, the load has moved to the low-performance cluster (cores 1-4), and they are running at a much lower clock rate (< 1 GHz). I do believe this is probably fairly normal Android behavior, but it's obviously tied to the slow scrolling jitters for us. It could be a subtle governor or big.LITTLE thread scheduling issue somehow playing into the touchscreen weirdness, I suppose.
The two captures attached show the issue. One was captured right after launching battery settings when things are smooth and CPUs 5-8 are screaming. The other was captured after things went jittery, and here you can see the CPU load that was on 5-8 has moved to 1-4, and the clock frequency is much lower (hovering between 300 and 1000 MHz).
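You can watch the same migration without the paid app by polling the per-core clocks while you scroll (a sketch, assuming the standard cpufreq sysfs layout; System Monitor's cores 1-8 correspond to cpu0-cpu7 here):
Code:
# print each core's current frequency (kHz) once a second
adb shell 'while true; do
  for f in /sys/devices/system/cpu/cpu[0-7]/cpufreq/scaling_cur_freq; do
    echo -n "$f: "; cat "$f"
  done; echo ---; sleep 1
done'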
Scrappy1 said:
Turns out the rendering pipeline starts taking so long because the application thread moves from the high-performance CPU cluster to the low-performance CPU cluster. Using the paid version of System Monitor I opened a floating window of CPU load and frequency. I then again opened battery settings and scrolled around in the good and bad states. I can see the CPU load is on the high-performance cluster right away (cores 5-8), and those guys are running at 2.4 GHz; hence everything is smooth. When the jitters set in, the load has moved to the low-performance cluster (cores 1-4), and they are running at a much lower clock rate (< 1 GHz). I do believe this is probably fairly normal Android behavior, but it's obviously tied to the slow scrolling jitters for us. It could be a subtle governor or big.LITTLE thread scheduling issue somehow playing into the touchscreen weirdness, I suppose.
The two captures attached show the issue. One was captured right after launching battery settings when things are smooth and CPUs 5-8 are screaming. The other was captured after things went jittery, and here you can see the CPU load that was on 5-8 has moved to 1-4, and the clock frequency is much lower (hovering between 300 and 1000 MHz).
Let's try this
@DespairFactor
GPU governor
rignfool said:
Let's try this
@DespairFactor
Well, I can tell you it's not all because of CPU performance, since setting the GPU governor to performance on Oreo beta 2 completely gets rid of the touchscreen jitters for me. I'm running Oreo beta 2, the Rey.R3 kernel, and Magisk 15.2. Using EX Kernel Manager to set the GPU governor to performance, I have eliminated the touch scrolling microstutters. Try it out for yourself and see! I also set the CPU governor to conservative to compensate for the slightly increased battery usage. Phone is blazing now. https://forum.xda-developers.com/essential-phone/development/kernel-rey-kernel-t3723601 is the link to the kernel.
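For anyone on a different kernel manager, the by-hand equivalent looks roughly like this from a root shell (a sketch; the kgsl path applies to Adreno GPUs, so it may differ on other SoCs):
Code:
# list the GPU devfreq governors on offer, then pin to performance
cat /sys/class/kgsl/kgsl-3d0/devfreq/available_governors
echo performance > /sys/class/kgsl/kgsl-3d0/devfreq/governor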
mhajii210 said:
Well, I can tell you it's not all because of CPU performance, since setting the GPU governor to performance on Oreo beta 2 completely gets rid of the touchscreen jitters for me. I'm running Oreo beta 2, the Rey.R3 kernel, and Magisk 15.2. Using EX Kernel Manager to set the GPU governor to performance, I have eliminated the touch scrolling microstutters. Try it out for yourself and see! I also set the CPU governor to conservative to compensate for the slightly increased battery usage. Phone is blazing now. https://forum.xda-developers.com/essential-phone/development/kernel-rey-kernel-t3723601 is the link to the kernel.
Thanks for your input! I would go down the root-and-tweaks path if I didn't have to use my phone for work, with the Google device policy and all. Hoping for some jitter improvement in the next official stock update.
rignfool said:
Let's try this
@DespairFactor
I think we can move the touchscreen to its own workqueue, but I'm not sure if it'll handle this.
mhajii210 said:
Well, I can tell you it's not all because of CPU performance, since setting the GPU governor to performance on Oreo beta 2 completely gets rid of the touchscreen jitters for me. I'm running Oreo beta 2, the Rey.R3 kernel, and Magisk 15.2. Using EX Kernel Manager to set the GPU governor to performance, I have eliminated the touch scrolling microstutters. Try it out for yourself and see! I also set the CPU governor to conservative to compensate for the slightly increased battery usage. Phone is blazing now. https://forum.xda-developers.com/essential-phone/development/kernel-rey-kernel-t3723601 is the link to the kernel.
Post a video. In all likelihood, it's just the placebo effect. I've heard time and time again people claiming that the slow-scrolling stutter is gone. It's never once been proven. Here's a side-by-side comparison vs. the Pixel XL.