Thermal problem - Galaxy Camera Android Development

Hello all,
I'm using a Galaxy Camera for a commercial application, as a remote camera controlled via a TCP socket by a remote computer.
Everything is working quite well, but I face a problem.
What I do is collect camera preview frames (the frame you can also see on the camera screen), compress them to JPEG format (they are given to you as Bitmaps) and then ship them over the network.
The compression algorithm is quite heavy, even using the internal Android APIs.
For this reason, the camera becomes hot in a very short time, and then thermal throttling kicks in.
With thermal throttling active, the CPU frequency drops to about 800 MHz, so my application slows down and I can no longer sustain the needed compression/streaming rate.
Is there any way I can disable, modify, or soften the thermal throttling on this device?
Is there any file I can edit to raise the threshold, or to make the throttling less aggressive?
Thermal throttling kicks in at around 48°C (calculated), so I still have a large margin of safety.
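In case it helps to know where I've been looking: on a rooted shell, Linux exposes the thermal sensors and trip points under /sys/class/thermal/. This is only a sketch of what I can read; the zone index, the units, and whether the trip points are writable all vary by device:
Code:
# Sketch only: watch the temperature and current CPU frequency from a root shell.
# The thermal_zone index and units (milli-degrees C vs. degrees C) vary by device.
while true; do
  cat /sys/class/thermal/thermal_zone0/temp
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
  sleep 1
done
# The trip points that trigger throttling (if exposed) sit alongside the zone:
cat /sys/class/thermal/thermal_zone0/trip_point_0_temp
cat /sys/class/thermal/thermal_zone0/trip_point_0_type
I don't know whether the throttling on this device is enforced by the kernel or by a userspace daemon, which is part of what I'm asking.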
Bear in mind also that I'm not using the battery anymore; it has been replaced by a fake battery: an aluminum block with the right dimensions, with a PCB where the battery contacts are, fed by an external stabilized power supply.
Please help me!
Ciao,
Giovanni

Related

conservative cpu governor up/down thresholds, and their defaults

I did an experiment with some interesting results. It started out as my beginner's attempt to compare two kernels.
It evolved into providing insight (I think) regarding the up/down threshold parameters for the "conservative" cpu governor.
If you don't want to read the whole thing, jump to the conclusion section posted at the end.
Phone configuration – installed qkster’s UCLB3 with AT&T bloatware removed, added custom kernel, rooted.
To remove variable cpu loads, I turned wifi/data off and turned off the continuously-running programs that I have installed myself (Power Tutor and Tasker).
To create a steady cpu load, I started the program "relax and sleep" (a calm background-noise program, available for free). I checked one audio channel in the program and pushed the back button to place the program in the background, still creating noise. (I think relax and sleep is a good choice of program for cpu testing in general because it lets you check a variable number of channels, which puts a variable cpu load on the system... although in this case I used only one channel. Note that you cannot recreate this experiment with an mp3 music app instead, because it uses much less cpu than one relax and sleep channel.)
Then I started setcpu for monitoring and experimentation. Repeated with several different kernels.
Results with Entropy’s daily driver kernel.
I set the test setup in setcpu to conservative governor, Fmax = 1200, Fmin = 100, i/o scheduler = noop.
The cpu frequency in Mhz now has the following pattern:
800, 1000, 800, 1000, repeat ... (frequency changing approx. once per second)
Results with the Zen Infusion-Z A/1600 kernel.
I set the test setup in setcpu to conservative governor, Fmax = 1200, Fmin = 100, i/o scheduler = noop (same as before, intending to compare the performance of the kernels).
The cpu frequency in Mhz now has the following pattern:
1200 (constant).
OK. On the surface one might conclude Entropy's kernel is somehow handling the load better without ratcheting up the frequency. But the story gets more interesting than that...
Next I tried Zen's same Infusion Z/1600 kernel with everything the same except changing the cpu governor from "conservative" to "on-demand".
The cpu frequency in Mhz now has the following pattern:
200, 400, 200, 400, 200, 400, 200, 400, 200, 400, 200, 400, 200, 400, 200, 400, 1200 repeat
(changing about once per second, mostly 200, 400, pops into 1200 only very infrequently).
But wait! The "conservative" governor is supposed to be better on the battery than the on-demand governor, and yet for the exact same conditions, we're getting a higher cpu frequency (1200 constant) with conservative than with on-demand (mostly 200/400 with occasional 1200). It's the exact opposite of how it's supposed to be. Surprising, don't you think?!
So now let’s look at some other governor settings that don't seem to get much attention.
Go to the "governor" page of setcpu with "conservative" selected on the main page. The following values appear repeatably after kernel installation for each kernel, so I am ASSUMING these are the default values provided in the kernels themselves (open to comment if I have somehow come to the wrong conclusion).
For Zen's Infusion-Z (A or B, 1600 or 1400)
up threshold = 80
down threshold = 20
(also freq step = 5, sampling rate = 78124, although I don't think these are important for this post)
For Entropy's DD
up threshold = 50
down threshold = 35
(also freq step = 20, sampling rate = 40000, although I don't think these are important for this post)
(both kernels have sampling down factor = 1, ignore nice load = 0).
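Side note: if you want to double-check these values without setcpu, the conservative governor's tunables are just files in sysfs. On kernels of this era they usually live at the global path below (rooted shell; the exact path can vary by kernel):
Code:
# Read the conservative governor tunables straight from sysfs (rooted shell).
cat /sys/devices/system/cpu/cpufreq/conservative/up_threshold
cat /sys/devices/system/cpu/cpufreq/conservative/down_threshold
cat /sys/devices/system/cpu/cpufreq/conservative/freq_step
cat /sys/devices/system/cpu/cpufreq/conservative/sampling_rate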
I think we can explain my "experimental" results by examining the above up and down thresholds and making some assumptions about the nature of the load (my assumptions are admittedly contrived in an attempt to explain these observations, but they seem reasonable to me).
I ASSUME the steady cpu load I have created in my setup varies in the range 350-400 MHz quasi-steady-state (not perfectly constant, due to other processes jumping up in the background).
I ASSUME that before the steady cpu load is reached, there is a temporary increase in cpu loading to 700 MHz or more, associated with me flipping screens around to get from the relax and sleep application to the setcpu application. Within several seconds, this temporary increase is gone and only the quasi-steady portion of 350-400 MHz remains.
First look at the performance of Zen's Infusion-Z A/1600 in conservative with default settings in the above experiment. That initial spike of 700 MHz load was enough to get us above the up threshold of the 800 MHz level (80%*800 MHz = 640 MHz) and push us to 1200 MHz (1200 comes after 800 in the progression for Zen A, which has no 1000). Once we got to 1200 MHz, we are NEVER going to get down from there until we reach a load corresponding to the down threshold of that level, which is 240 MHz (20%*1200 MHz = 240 MHz). And with my relax and sleep application running at 350-400 MHz, that won't happen. That is quite a depressing thing to think about – I could put my relax and sleep on for an hour as background noise, and my cpu would be buzzing at 1200 MHz even though the load is only 350-400 MHz.
This seems very undesirable for battery life.
Now let's look at the performance of Entropy's daily driver in conservative with default settings in the above experiment. The postulated 350-400 MHz cpu load occasionally exceeds the up threshold of the 800 MHz level (50%*800 MHz = 400 MHz) and, once at 1000 MHz, occasionally drops below the down threshold of the 1000 MHz level (35%*1000 MHz = 350 MHz). (And now you know why I postulated 350-400.) I have two comments about these Entropy results. The first is minor/tangential, the second more important.
1 – First comment (minor/detour) has to do with cycling between different cpu frequencies, which is created by the governor (not the load). I don't think it's any problem at all, but this type of cycling is more likely to occur when the differences between adjacent frequencies are large. For example, let's say the cpu load was rock solid, pure steady state (not varying) at 250 MHz. The up threshold for the 400 MHz setting is 200 MHz (50%*400 MHz) while the down threshold for 800 MHz is 280 MHz (35%*800 MHz). So we have postulated a situation where the cpu demand is pure steady state (250 MHz), yet the governor will never find a steady-state solution... if it's at 400 MHz it wants to upshift, and if it's at 800 MHz it wants to downshift. Again, I don't think it's a problem (it's probably OK to let the two frequencies time-share back and forth), but there is a strategy to avoid it if we want to, as follows. Considering the highest possible ratio between adjacent frequencies (for these kernels) is 2.0, we should set things so that the ratio (UpThreshold / DownThreshold) > 2.0 in order to avoid this cycling (which is probably not a problem, more later).
2 – Second comment is more important because it relates to battery usage (as I perceive it). The postulated load that explains these experimental results is 350-400 MHz. Yet the cpu is running at 800-1000 MHz. Twice as high. That's wasting some battery, I think.
To summarize the results so far: it seems to me that Zen's kernel default thresholds have the potential to waste battery due to the low down threshold (20%), which can keep the cpu at a high rate forever, even though the load has decreased substantially. In theory we could be running the cpu at almost 5 times the speed it needs in the situation where the steady load decreases to just above 20% of the higher level. Entropy's kernel default thresholds have the potential to waste battery due to the low up threshold (50%). In theory we could be running the cpu almost twice as fast as it needs to be in the situation where the steady load increases to just above 50% of the lower level. Entropy's kernel defaults also create the potential for continuous cycling between frequencies even in the presence of a perfectly steady cpu load, since the up/down ratio is less than 2 (I don't think that's a problem; the only reason I mention cycling is because it feeds into my strategy for selecting the down threshold - see below).
So what settings should we use for up/down thresholds? Actually, I haven't done my complete due diligence in searching before posting this thread; if someone has a good link with recommendations and/or discussion on this subject I'd be interested. I have seen the generic xda thread on governors and I don't think it was covered there in terms of specific recommended values. Here's my thought process, fwiw. Higher is better for battery on both numbers (at some possible expense of performance). I remember seeing on other sites a default up threshold of 95% (listed, but not discussed). That makes sense to me for battery saving... shift up at the last minute. Perhaps this high value slightly slows the response to a demand increase, but I don't think it's much slower (especially for a rapid cpu load increase, which is the most critical case for responsiveness... a rapid increase means a short time to get from 50% to 95%, and a short time means not a big response penalty), and it certainly seems worthwhile to strive for an efficient operating point in long-term steady state. Additionally, we're talking about the "conservative" governor, which is supposed to favor battery (we can set up a setcpu profile to invoke on-demand or interactiveX in situations where we want more responsiveness and don't care as much about battery; at least these are available in Zen's). I don't recall seeing any number for the down threshold, but it should be as high as possible, again to save battery. How high? I don't know. The only way to put a limit on it that I can think of is to impose an arbitrary (maybe unnecessary) requirement that we don't want any cycling in pure steady state, as discussed above. This means we need the down threshold at least a factor of 2 below up. So I pick 95/2, rounded down to the nearest round number: 45%. There may be further improvements if we drop that requirement to avoid cycling and allow an even higher down threshold, but at least we know a down threshold of 45% has moved in the right direction for battery from the defaults. So up/down 95/45 is my pick for now.
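For anyone who prefers a shell to setcpu, applying the 95/45 pick should look roughly like this on a rooted phone (same sysfs caveat as above; this is a sketch, not a tested script):
Code:
# Select the conservative governor and apply the 95/45 thresholds (rooted shell).
echo "conservative" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo 95 > /sys/devices/system/cpu/cpufreq/conservative/up_threshold
echo 45 > /sys/devices/system/cpu/cpufreq/conservative/down_threshold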
Using the conservative governor with 95% up threshold and 45% down threshold (still noop i/o) under the above conditions on Zen's kernel, I'm seeing the frequency pattern
400, 400, 400, 400, 400, 400, 400, 400, 800, repeat
in other words mostly 400, intermittent jump to 800.
Certainly the up/down 95/45 settings for the conservative governor perform better batterywise than the default settings for the conservative governor given in both kernels, for this one experiment. To me it seems very reasonable to expect them to also be better batterywise across a wide range of expected operations, but that's open to comment.
Small detour - why did we do better batterywise on Zen's on-demand default settings than on Zen's conservative default settings for this particular cpu loading? The settings for Zen's on-demand default include an up threshold of 95% and no down threshold. So the on-demand governor apparently finds some other way to shift down. Since the 20% down threshold that was causing the problem in Zen's conservative default settings is not present in the on-demand governor... that probably explains why on-demand didn't get hung up at the higher level and performed better afterwards. Another thing to note: if a 95% up threshold is responsive enough for on-demand, it should surely be responsive enough for conservative... which supports the previous suggestion to increase the conservative up threshold to 95%.
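You can verify the "no down threshold" part yourself: listing the ondemand tunables in sysfs shows an up_threshold file but nothing named down_threshold (again assuming the common global path and a rooted shell):
Code:
# List the ondemand governor's tunables -- note there is no down_threshold.
ls /sys/devices/system/cpu/cpufreq/ondemand/
cat /sys/devices/system/cpu/cpufreq/ondemand/up_threshold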
CONCLUSIONS:
There is only one thing in this entire thread that I am completely, 100% positive about, and it is that Zen and Entropy know lightyears more about this stuff than me. In fact, that is the very reason I was extremely careful to record the as-found default settings, in order to preserve any intelligence that went into those defaults before I started tweaking.
So I can reach one of two conclusions:
#1 – I am completely misunderstanding how this conservative cpu governor works
or...?
#2 – The developers never intended for the “default” values to be used, instead they envisioned the users would adjust them as needed.
In the event that #2 is correct, then it would seem logical for battery-conscious users to tweak these up/down threshold settings of the conservative governor. My thought would be to set them to 95/45 by the logic above... which may or may not be considering all relevant factors. I'm open to thoughts and comments...
In the above analysis, I have assumed that power consumed by the cpu can be predicted from the cpu frequency (for a given voltage setting of course).
I now believe that assumption might be incorrect.
The reason I believe it is false is a result of another experiment I just did.
I set the cpu governor to performance to maintain a constant 1200 MHz.
Then I looked at the cpu power usage trace in "Power Tutor" program.
I expected to see power attributed to cpu usage as constant, but it was varying up and down.
And by moving the homescreens around, I could create a dramatic and predictable increase in power usage of cpu (as indicated on Power Tutor).
All of this change in power consumption of the cpu occurred while the cpu governor was in performance mode with the cpu frequency constant at 1200 MHz.
I didn't expect that. I can't really explain it (can anyone else?). But clearly there is more to the story than I thought (assuming that the cpu power usage reported by the Power Tutor is correct, which I'm not sure of either).
To evaluate the battery friendliness of various governor settings, it might be more useful to watch the Power Tutor results when performing the above experiments, instead of just watching the cpu frequency as I did before.
Over my head...
How does the battery cycle pan out?
It would be nice if you or someone else had a spare phone to test this battery consumption theory.
I would also wonder whether the reported consumption is correct.
The bottom line that I would be interested in seeing is how long can the phone, running a certain kernel and governor, last.
For example: Charge to 100%. Take off charger. Wifi off, Cell data off and in Airplane mode (removing signal variable).
Then run kernel with governor - record the battery duration.
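A little loop like this (run from a terminal app or adb and left going for the whole discharge) could do the recording automatically. Just a sketch, assuming the stock dumpsys battery output:
Code:
# Log the battery level once a minute to build a discharge curve.
while true; do
  echo "$(date) $(dumpsys battery | grep level)" >> /sdcard/battery_log.txt
  sleep 60
done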
e.pete - nice work! I appreciate your empirical approach to this topic.
I can add the following: you have described accurately how the conservative governor works. For OnDemand, the governor behavior results in the CPU at max under load and minimum when idle, with a smaller amount of time being spent at the steps in between based on thresholds. On my phone today with OnDemand, for example, I'm at 1600 MHz 6% of the time, 100 MHz 5% of the time, and 800 MHz 2% of the time. Deep sleep is 83% and the other CPU frequencies are all below 0.5%.
In general use, conservative should be kinder to the battery and to the hardware. The recommendations I have seen for Linux platforms are to use conservative where battery life matters and ondemand when there is a constant external source of power (i.e. a PC or server). Of course, actual use determines how the governor performs. Most smartphones have a lot going on even when the screen is off. A good indicator of average CPU use over the course of a day is CPUSpy. This app, combined with a decent battery monitor, can help tell the story from a macro/whole-system perspective over time. On the "be kind to hardware" topic, conservative should increment and decrement to adjacent frequencies based on load. This behavior might be happening too fast for SetCPU or another realtime monitor to capture... that's where CPUSpy can show what is happening over a larger period of time. These more gradual transitions may result in less wear and tear on the phone hardware, but I have not seen any significant evidence that this is a factor in the usual life span of a smartphone. (On the flip side, setting the governor to performance and OC to the max setting... that is NOT recommended and could harm the phone.)
That said, the two kernels you tested have the following default characteristics:
Entropy DD - Conservative/BFQ - Optimized for stability and battery life
Infusion (Bedwa/Zen) - OnDemand/CFQ - Optimized for performance
The Infusion kernels do not include optimization settings for conservative. As you surmised, the expectation is that if you are going to change these settings you have some idea of what you are aiming for and will adjust accordingly.
If battery life is your aim, I've found that the best savings are realized in optimizing transition to sleep when the phone is not being used, minimizing the number of apps that attempt to keep the phone awake, and being selective in your use of wifi and data radios (although too much mucking around with this last option can lead to triggering some of the known bugs in these kernels which manifest as a higher than normal Android OS or Dialer/RILD drains - as seen on the standard battery usage screen in Settings).
On this last topic, there's another thread (which I see you have visited) which covers discussion and work on these known battery drain anomalies: http://forum.xda-developers.com/showthread.php?t=1408433
Here is some additional information on governors courtesy of Big Blue... http://publib.boulder.ibm.com/infoc...?topic=/liaai/cpufreq/TheOndemandGovernor.htm
And a bit more info ... https://wiki.archlinux.org/index.php/CPU_Frequency_Scaling
Truckerglenn said:
Over my head...
:what: I'm so glad there are people much smarter than myself here. Great work electricpete :thumbup: even if I only followed about half of it
Sent from a de-FUNKt Infuse
Pete - Here's a link to a thread that has a lot of information about governors, i/o schedulers, tweaks, scripts, and kernel objects:
http://forum.xda-developers.com/showthread.php?t=1369817
Thanks everyone, a lot of good info.
Especially Zen, very useful info and links.
It was a very interesting comment about certain governor strategies being hardware-unfriendly if they jump the cpu straight from min to max.
I never realized that was a factor (I only thought the max speed was important).
But it definitely sounds plausible. Maybe (?) the rapid temperature increase causes uneven temperature (the cpu gets hot before the attached plate gets hot) and therefore uneven thermal expansion, which causes mechanical stresses. I can imagine there are other subtle aspects of the cycling up/down that can be important. If you have any more info or links on the effects of cpu governor strategy upon hardware life readily available, I'd be very interested to hear it. (If not readily available, that's fine too, I will do some googling.)
I did find this link, which suggests it's better for hardware life to run at 100% (I guess for us that's 1200 MHz) than it is to cycle up/down. It's not written about phones but about PCs. There might be some differences in the technical aspects. There are of course big differences in priorities for PCs... they don't care about power usage as much as phones do, and PC users probably expect a longer life than phone users do.
http://www.overclockers.com/overclockings-impact-on-cpu-life/
Thanks again.
I will report back if I get some free time to continue experimenting... maybe this weekend.
Primary consideration (as you've noted) with smartphones is battery conservation. ARM processors are engineered to operate at multiple frequency steps, and to turn off where possible. Without this capability phones would need a much higher capacity battery. As for PCs, current processors include frequency stepping technology to reduce power consumption and heat, and perhaps extend life by keeping temps lower.
The main conclusion of the article you referenced (which is 13 years old, btw, but does contain a wealth of good foundational information) is that heat is the primary enemy. This is a major factor with smartphones as they have limited ability to dissipate heat. An Infuse running at 1200mhz (or 1600mhz OCed) confined to a purse or pocket, or (as reported in these forums a while back) under a pillow gets hot very quickly. This will lead to conditions that will harm the phone. At one time, my phone had an error condition causing the Dialer to go crazy (rild process) and peg the CPU at 100% for an extended period of time, while the phone was also plugged into a charger (thus heat from the charging process too). The end result of this was a temperature that tripped a heat sensor threshold, causing the phone to shut itself off. So there are, at least, limited protections against extreme events.
As I noted above, I've not seen any evidence that the normal (or even OCed) frequency stepping that occurs with smartphones leads to failures within the normal in service period for these devices - 2 to 4 years in most cases. Running at 100% all the time may put your phone's health at risk and will definitely impair your battery life.
Zen - Good points. One thing I do take away from the article (along with your comments) is the cumulative effect of cycling. So when I settle on up/down thresholds, I may try to avoid putting them too close together in order to avoid extra cycling (keep the Max/Min threshold ratio > 2), although I do realize these particular cycles between two adjacent frequencies are not as bad as the cycle between min and max frequency.
more test results
I have completed some testing using Power Tutor; the results are reported in the attached spreadsheet.
I would say the results only muddy things further. Don’t read any further if you don’t have a tolerance for ambiguity.
SETUP (common to all tests)
In all tests, I had a setup similar to the original post: 1 channel of "relax and sleep" running to create a constant cpu load, and all other continuous-run programs turned off except Power Tutor.
Some other details common to all tests: Tasker off, Wifi off, data off, Power Tutor on
No undervolting.
noop I/O scheduler used throughout.
WHAT CHANGES BETWEEN TESTS:
See the spreadsheet, tab labeled “summary”.
The things that changed between tests are in rows 3 thru 8, labeled “Tested Configuration”
As you can see, between tests I varied the up threshold and the down threshold. I varied the kernel. I varied the governor (mostly conservative, but performance was also used).
WHAT WAS RECORDED DURING EACH TEST:
1 – Recorded the power usage of CPU, LCD, and Audio as reported in Power Tutor over the course of one minute. I converted them to battery %/hr (conversions shown in the "Notes" tab) and listed them in rows 12-14.
2 – Recorded the actual cpu frequencies seen on the setcpu "home" screen, similar to the original post, and listed these in row 15. I attempted to estimate the average frequency over time and put this in row 16.
3 – Rows 18-23 are the Quadrant results for the six categories that Quadrant reports (yes, I know people don't like Quadrant; it's just recorded as a datapoint).
WHAT PATTERNS EMERGE:
1 – Entropy’s DD and Zen’s Infusion-A use comparable power (as reported by Power Tutor) in this particular experiment.
2 – Zen's Infusion does better on Quadrant score in this particular experiment, when both are set to the same governor configuration (100-1200, conservative). Not surprising, since Zen said he has optimized for performance.
3 – How does the governor frequency affect power usage? This is the muddy part. There is no doubt that if we blindly take the data at face value, there IS a correlation between cpu frequency and the power attributed to the cpu by Power Tutor. However, the correlation that emerges from the data is in the opposite direction from what anyone in the world would expect: this data suggests that increasing CPU frequency causes a decrease in the power consumption reported by Power Tutor. See the tab labeled "chart 1" for a graphical depiction of this result.
As you can see in the graph, there is not a random spread of results (as would be the case if random unaccounted-for errors were at work). There is a definite correlation. What it perhaps suggests is that there can be a systematic error in the way Power Tutor measures power that depends on cpu frequency... in other words, the error itself (between measured and actual) somehow depends on cpu frequency.
So, I am just reporting some results. I am definitely not suggesting anyone overclock to save power (that would be truly bizarre and I’d probably be kicked out of xda for suggesting something so silly).
On the other hand, as stated in the 2nd post of this thread, I'm still very leery of using cpu frequency as an indicator of the power the cpu is drawing... because there is just too much going on inside that black box that I don't know about. For one thing, the cpu itself may draw different amounts of power at a given frequency depending on its loading, because the registers may not be doing anything at low loads. For another, there are a lot of other things in the phone (like RAM and the bus) that may draw some power but probably get lumped in with the reported cpu power in Power Tutor and others. Perhaps the cpu is somehow more efficient at interfacing with these other parts of the system when the cpu is at high speed, enabling it to reduce the power they draw. The point is, it's a lot more complicated than I assumed in my first post.
I have heard Entropy mention that before changing kernels we should always reset UV settings (and reboot) and reset other cpu-related settings (Fmin and Fmax, I assume).
I would like to add another item: always uninstall setcpu before changing kernels and reinstall it after you change.
The reason: I have seen some very weird results from setcpu when I left it installed between swapping kernels. Like, for example, the cpu running at 1600 MHz even though Fmax is 1200 MHz and there are no profiles allowing 1600. Those weird results are not included in the above data (I observed the frequencies during each trial as reported in the spreadsheet).
Power Tutor has a great interface and very detailed stats available. Seems to have great credentials based on their website.
But I can only conclude we can't trust it for our particular phone, because of the results above (power draw goes down as cpu speed goes up) and some other results I have seen (it seems to suggest that the power used by my display does not change depending on a dark/light background, and also that the power used by the phone does not change when I change the volume of music playing).
So, I’m looking for another way to be able to track power usage closely.
I kind of like qkster’s idea to just watch the battery go down.
I’d like to try to automate that using Tasker. I can write a program which will help me build a log of power usage.
The interface will be:
push a start icon and it prompts me to enter description of conditions that will be tested
wait some period of time (this is the constant-load test period that we're evaluating...may be listening to mp3)
push a stop icon and it prompts me for comments about anything that happened during the test.
At time of pressing the start icon, it will also record from the system:
1 - clock time (in seconds)
2 - voltage in millivolts to 4 digits of resolution (like 3784 millivolts)
3 - battery life remaining in percent to 2 digits of resolution (like 43%)
The same info will be recorded from the system upon pressing the stop icon.
All this info will be appended to a logfile and we can compute drain based on change in battery divided by change in time.
I can get these voltage and % life stats using the method suggested by Brandall’s tutorial here:
http://tasker.wikidot.com/using-linux-shell-with-tasker-for-a-technical-battery-widget
I couldn't get the grep command to work, but I can still extract the required voltage and percent-life-remaining from the battery sysdump using the Tasker variable-splitter command (I've already got that part programmed).
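For the record, on most devices the same two numbers are also exposed directly in sysfs, which may be easier than parsing the sysdump (the exact directory and file names vary by kernel, so treat these as the common case, not a guarantee):
Code:
cat /sys/class/power_supply/battery/voltage_now   # microvolts on most kernels
cat /sys/class/power_supply/battery/capacity      # percent, integer steps only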
No-load voltage has a roughly known relation to battery life, but there's also the matter of the voltage drop across the internal battery impedance, which varies with the load at the time of the measurement; we don't see the no-load voltage, we see something lower, which makes the whole thing somewhat variable.
Percent remaining is the exact thing I want. But it is only given rounded to two digits (43%). If I wanted to do a trial run listening to a 5-minute MP3 draining something like 12% per hour, the battery drop during that 5 minutes would be only around 1%... the difference between two kernels or cpu frequency settings would be only a very small fraction of that 1%, so comparing the start and stop values, which are both rounded to 1%, would introduce an enormous error compared to the thing we're interested in. I can surely reduce that error by working with longer times, but that starts to become a PITA. That may end up being the only solution, but if there's any way to avoid it, I'd like to be able to gather data in shorter chunks.
Which leads me to a QUESTION:
Does anyone know whether there is any way to retrieve or estimate "battery % remaining" with greater resolution than two digits (i.e. 43.26% instead of just 43%)?
unintended consequences from changing the up threshold
I used the following setup for almost a month:
Zen’s Infusion A Kernel (with my stock GB), conservative governor
UpThreshold = 95
DownThreshold = 45
Only twice during the month, I saw the following:
Received a phone call. I could see the name of the caller. I couldn’t hear the caller. When I finally got hold of them later, they told me they could hear me even though I couldn't hear them.
That was very tough to figure out because it only occurred on two out of probably 30 or 40 phone calls received in a month.
The two phone calls did originate from cell phones in the same geographic area (near my work, an hour away from my home).
Then I had a breakthrough when I set up my work voicemail to automatically call my Android phone. Almost every time it called, the problem appeared (I couldn’t hear the robot voice telling me I had a message).
I kept leaving myself messages to reproduce the problem and narrow it down.
I found out it only occurs when my phone is asleep at time of the call (doesn’t occur if phone is awake at time of the call).
I removed my UV and problem continued.
I adjusted my governor and could make problem go away.
I narrowed it down to the up threshhold.
Repeatably, with 95/45 up/down, the problem occurs.
Repeatably, with 80/45 up/down, the problem does not occur.
I have gone back and forth between those two settings at least four times and each time it confirms the symptom is directly related to the governor setting.
Exactly why that is, I'm not sure. Maybe the cpu is too slow to wake up to handle the call? Sounds kind of hokey, but I guess it doesn't really matter.
The bottom line for me: 80/45 is a great place for me to stay. It eliminates the "can't hear caller" problem and still does pretty well at preventing the cpu from going to a high frequency when listening to my relax and sleep program for long periods of time.
If anyone has gone to 95/45 based on my recommendation, you might rethink it, especially if you see unusual behavior.

[Q] Governor app that can set profile for "text input active"?

Is there any speed-governor app for the Xoom that can be configured to lock the CPU to 1000MHz whenever the soft input area is active (or better yet, whenever Graffiti input is active), and/or a way to increase the digitizer sample rate?
Historically, Graffiti has been totally unusable on my Xoom. Literally, so low of a sample rate, and so many errors, that I just couldn't use it. I finally got around to unlocking and reflashing my Xoom to CM10 last night, and locking the CPU to 1000MHz makes it work a lot better... but the accuracy is still a cruel joke compared to even my creaky, old Hero overclocked to 711MHz.
It's pretty sad, actually. On the Hero, the digitizer seems to be reporting samples at least 4-16 times as often, and I can get nearly 100% accuracy without even trying. On the Xoom locked to max speed, it seems to do a tiny bit better than my S3 gets with stock, but the sample rate still appears to be absurdly low compared to what it was on the Hero, and feedback seems to lag the actual touch by at least 100-200ms. On the Hero, feedback was literally instant... stroke, and see the pixels turn white INSTANTLY under my fingertip. On the Xoom (locked to max), they start turning white a fraction of a second after I touch the screen, and I can see the last bit of the stroke render a fraction of a second after I lift my finger away. With the stock Xoom rom, it was more like, "draw the character, and see a jagged impression of it sputter into existence about a half-second later... maybe, MAYBE even getting recognized correctly about 70% of the time".
I'm guessing that either the Xoom's digitizer has a limited sample rate, or something in the kernel or driver is limiting the sample rate... but I'm still trying to find a straight answer somewhere about whether/how you can build a custom kernel without losing the ability to run paid Market apps. Or whether it's even necessary to go to that extreme, as opposed to something like a setting that tells Android to increase the sample rate, or not to throttle the CPU when an input area is active, or maybe a way to let something like SetCPU identify "soft input area active" as a profile-triggering condition. I'm also pretty sure that the Xoom's kernel (if not recent versions of Android itself) tries to treat the existence of a soft input area as an excuse to massively throttle the CPU, on the theory that it's just displaying a picture of a keyboard and waiting for a blunt press. HOWEVER, I'm SURE there HAS to be an equally official way of defeating that behavior, if only because it would also screw up Android's ability to handle East Asian input methods.
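For what it's worth, the brute-force way to lock the CPU at 1000 MHz from a rooted shell is to clamp both cpufreq scaling limits. A sketch, assuming 1000000 kHz actually appears in the kernel's frequency table:
Code:
# Pin cpu0 at 1000 MHz by clamping both scaling limits (values are in kHz).
echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq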

The Linux Virtual Machine Explained – Part 1, CPU

I believe there is a great deal of confusion, or a lack of technical explanation, in the community when we discuss the hows, whys, and whats behind the things we choose to modify in the Android OS in an attempt to squeeze better performance from a very complex operating system. Many of the things I see presented to users are focused on very ineffective and ancient mentalities, pertinent to older versions of the operating system. Much of this is attempted through modifying build properties, and that's usually about where it stops. My objective here is to describe some of the ins and outs of tuning a mobile operating system such as Android, looking at it in a different light - not as the skin you lay on top of it, but as advanced hardware and software, with many adjustable knobs you can turn for a desired result.
The key players here are usually, without fail, just a couple of things:
Debloating – which, I suppose, is an effective way to reduce the operating system's memory footprint. But I would then ask: why not also improve the operating system's memory management functions?
"Build prop tweaks" – the build.prop is a file where you can apply very effective changes like the ones presented in my post_boot file (the only differences being when they are executed and how they are written out), but most of the "tuning" done here focuses on principles that were only once true and are thereby mostly irrelevant in today's latest versions of Android. There are many things within the build.prop that can (and sometimes should) be altered to directly impact the performance of the DVM/JVM. However, this is almost always left untouched. Every now and then somebody will throw a kernel together with some added schedulers, or some merged sound drivers, etc., but there is really little to no change there that would affect real-time performance.
So, what about the virtual machine? What about the core operating system – what Android actually is – Linux?
Many of you have been pretty blown away by how effective some simple modifications to just 1 shell file on your system have been at improving your experience as a user. Your PMs, posts, and comments in my inbox/thread are telling enough about the direct impact on battery life. These are differences you can feel, see, and quantify, because the changes made within that file directly impact functional aspects of the hardware, throughput/latency, and most importantly, the device's memory management (which is so complex you could literally write a book about it… and books about it do exist – they are very long books).
So, how did we manage to make your device feel like it was reborn with just 1 file and not an entire ROM? That ROM you were on suddenly was not so stock-feeling, right? Not to say those ROMs were stock; they were, indeed, modified. But the core operating system was, for the most part, largely untouched. Maybe you had a little more free RAM because of the debloating but, really, that was about all of the effect you saw/felt.
My aim here is to talk about, at a medium to in-depth level, what exactly went into that 1 file that turned the performance corner for your device. For the sake of keeping to the important points, I'll cover the 3 most important areas (as titled in my main thread): your CPU, IO, and RAM (VM). Part 2 will cover IO, and Part 3 will cover the nuts and bolts of the RAM (VM).
Let’s look at a snippet of some code from the portion of the file where most of the CPU tuning is achieved, we’ll use cluster two’s example (bear in mind, the methodology here was used for cluster 1 as well [your smaller cores were treated the same]):
Code:
# configure governor settings for big cluster
echo "interactive" > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo 1 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/use_sched_load
echo "10000 1536000:40000" > /sys/devices/system/cpu/cpu4/cpufreq/interactive/above_hispeed_delay
echo 20 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/go_hispeed_load
echo 10000 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/timer_rate
echo 633600 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/hispeed_freq
echo 1 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/io_is_busy
echo "40 864000:60 1248000:80 1536000:90" > /sys/devices/system/cpu/cpu4/cpufreq/interactive/target_loads
echo 30000 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/min_sample_time
echo 0 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/max_freq_hysteresis
echo 70 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/gpu_target_load
So what did I do here? Well, let’s start by explaining the governor, and then its modules.
Interactive: in short, the interactive governor works based on timers and load (or tasks). When the timers tick and the CPU is polled, the governor decides how to respond to that load, with consideration taken from its tunables. Because of this, interactive can be extremely exact when handling CPU load. If these tunables are dialed in properly, according to usage and hardware capability, what you achieve is maximum throughput for an operation, at a nominal frequency for that specific task, with no effective delay experienced in the UI. Most of the activity seen in an Android ecosystem is short, bursty usage, with occasional sustained load-intensive operations (gaming, web browsing, HD video playback and recording, etc.). Because of this unique user interaction with the device, the default settings for interactive are usually a little too aggressive for a nominal experience – nominal meaning not "over-performing" to complete the task and wasting precious CPU cycles on a system that is not always near an outlet. The interactive tunables:
use_sched_load: when this value is set to 1, the timer windows (polling intervals) for all cores are synchronized. The default is 0. I set this to 1 because it allows evaluation of the current system-wide load, rather than core-specific load. A small but very important change for the GTS (global task scheduler).
above_hispeed_delay: when the cpu is at or above hispeed_freq, wait this long before increasing the frequency. The values called out here will always take priority, no matter how busy the system is. Notice how I tuned this particular setting to allow an unbiased ramp-up until 1.53 GHz, which then calls for a .4 second delay before allowing an increase. I did this to handle the short bursts quickly and efficiently as needed, without impacting target_loads (the module, in this way, allows the governor free range to roam according to load, and then forces it to wait if it wants to utilize those super-fast but power-costly speeds up top). However, sustained load (like gaming or loading web pages) would likely tax the CPU for more than .4 seconds. The default setting here was 20000. You can represent this expression as a single value, or as a value followed by a CPU speed and a delay for that speed, which is what I did at the 1.53 GHz range. I usually design this around differences in voltage usage per frequency when my objective is more to save power while sacrificing some performance.
go_hispeed_load: when the CPU is polled and overall load is determined to be above this value (which represents a percentage) immediately increase CPU speed to the speed set in hispeed_freq. Default value here was 99. I changed it to 20. You’ll understand why in a second.
timer_rate: intervals to check CPU load across the system (keep in mind use_sched_load). Default was 20000. I changed it to 10000 to check more often, and reduce the stack up delay the timer rate causes with other tunables.
hispeed_freq: counterpart to go_hispeed_load. Immediately jump to this frequency when that load is reached. The default here, in Linux, is whatever the max frequency is for the core. So it would have been 1.8 GHz when load hit 99%. I changed this value to the next speed above minimum for both the A53 and A57 clusters. The reason I did this was to respond appropriately to tiny bits of thread usage here and there, which minimizes the probability that the CPU will start overstepping. There are a lot of small tasks constantly running, which could allow the 384 MHz frequency to be overwhelmed by some consistent low-taxing operation. The trick with this method of approach is to stay just ahead of the activity, ever so slightly, to increase efficiency while removing latency for those smaller tasks. There is no hit in power by doing this. This principle of approach (on a broad and overall scale, even) is how I use interactive to our advantage. I remove its subjective behavior by telling it exactly where to be for a set amount of time based on activity alone. There are no other variables. "When CPU load is xxxx, you will operate within these windows (speeds) alone."
Keep in mind, with some of this, I am just giving you default values and examples… The original file that LG or Qualcomm or whoever placed in there did its own weird crap with this stuff that didn't make any sense whatsoever. Timer intervals were not divisible into one another; there was little logic and reason behind it.
io_is_busy: when this value is set to 1, the interactive governor evaluates IO activity, and attempts to calculate it as expected CPU load. The default value is 0. I always set this to 1, to allow the system to get a more accurate representation of anticipated CPU usage. Again, that “staying ahead of the curve” idea is stressed here in this simple but effective change.
target_loads: a general, objective tunable. The default is 90. This tells the governor to try to keep the CPU load below this value by increasing frequency until <90 is achieved. This can also be represented as a dynamic expression, which is what I did. In short, mine says "do not increase CPU speeds above 864 MHz unless CPU load is over 60%... do not increase CPU speeds above 1.24 GHz unless CPU load is over 80%" and so on… So you can see how we are starting to address the "activity vs. response" computing conundrum a little more precisely. Rather than throw some arbitrary number like 90 out there, I specifically utilize a frequency window with a percentage of system-wide usage or activity. This is ideal, but takes careful dialing in, as hardware is always different. Some processors are a little more efficient, so lower speeds are OK for a given load when compared to another processor. Understanding the capability of your hardware to handle your usage patterns appropriately is absolutely critical to getting this part right – the objective is not to overwork or underwork, but to do just the right amount of work. Turn small knobs here and there, then watch how much time your CPU spends at a given speed, and compare that with the real-time performance characteristics you observe, etc… maybe there is a little more stuttering in that game you play after the last adjustment? OK, make it slightly more aggressive, or let the processor hang out a bit more at those high/moderately high speeds.
min_sample_time: this is an interval which tells the CPU to “wait this long” before scaling back down when you are not at idle. This is to make sure the CPU doesn’t scale down too quickly, only to then have to spin right back up again for the same task. The default here was 80000, which is way too aggressive IMO. Your processor, stock, would hang for nearly a second at each step on its way down. 3/10th of a second is plenty of time for consistent high load, and just right for short, bursty bits of activity. The trick here is balancing response, effectiveness, acceptable drain on power, with consideration to nominal throughput for an execution.
max_freq_hysteresis: this only comes into play when the maximum frequency is hit. This tells the governor to keep the core at the maximum speed for this long, represented in tenths of a second, PLUS min_sample_time. Default value was 3, if I remember correctly. Which means that every time your CPU hit max, it was hanging there for 1.1 seconds arbitrarily, regardless of load.
gpu_target_load: the GPU will scale up if CPU load is above this value. Default is 90. This module attempts to anticipate GPU activity based upon CPU activity. It works in parallel with the GPU’s own governor algorithms and each cluster of cores has its own tunable for this controller.
Stand by in the near future for the write-up on IO and RAM management.
Very nice write up. While the tunables for various governors are a bit out of my range of expertise, the explanation here almost makes me want to play with them to fine tune my system to my usage.
What would you say is the maximum percentage of battery life one would expect to increase by? I read an article here a few years ago where someone had the idea that regardless of tweak, you won't increase your battery life by more than 2%, which is pretty small. I wasn't sure how accurate this statement was, but I am always up for improving my battery life, although Marshmallow has done wonders for it in comparison to Lollipop.
freeza said:
Very nice write up. While the tunables for various governors are a bit out of my range of expertise, the explanation here almost makes me want to play with them to fine tune my system to my usage.
What would you say is the maximum percentage of battery life one would expect to increase by? I read an article here a few years ago where someone had the idea that regardless of tweak, you won't increase your battery life by more than 2%, which is pretty small. I wasn't sure how accurate this statement was, but I am always up for improving my battery life, although Marshmallow has done wonders for it in comparison to Lollipop.
The maximum percentage of battery life "savable" by doing this type of thing is really going to depend on a lot of variables. In testing, it is most important to first establish a baseline and a valid method to measure your outputs. A model/example would be: I am going to charge my phone to 100%, have it in airplane mode, and only have these 3 apps installed to run the tests. After I charge it, I am going to leave the display on for an hour straight without interacting with the device, then let it sleep for 4 hours, wake it back up, open and close app #1 150 times, then let it sleep again, etc… Maybe, for the sake of merely evaluating CPU usage, you would disable location and turn off auto brightness with the display set at minimum, all for the sake of creating a repeatable environment each time you make a change and want to measure the impact of that change… you see where I am going with this. Removing variables to quantify the impact of the changes you made would be critical.
What I am getting at is that I would seriously doubt the number this individual threw out (2%) has any real merit, for several reasons.
The first and foremost reason is that he probably didn't run a valid set of tests to come to that number. However, even if he did, the potential to save power becomes greater as hardware becomes more and more efficient through technological advances. Chips are not what they were even a year ago in that respect. The Snapdragon 820, for example, or some of the newer Exynos chips from Samsung – all of which use the 14nm FinFET process – are extremely efficient in power management.
Displays, all of these things – there is a more noticeable impact over extended periods of time when you are talking about "trim a little here, save a little there".
To put this into a different perspective, where the principle still applies, you can look at how the efficiency of car engines is impacted by drivetrains. Your car has a rated HP, and the power translated to the ground is some percentage of that rating. That percentage is constant: no matter how much HP your car has, the drivetrain is only (e.g.) 80% efficient at translating that power directly to the pavement. So, if your car has 200 HP and 80% is translated to the pavement, its power at the wheels is 160 HP. If you increase your car's HP to 1000 (engine rated), it is transferring 800 HP to the pavement. Now, make that drivetrain just 5% more efficient… The difference at 200 (engine rated) HP is only 10 HP… but the difference at 1000 is 50. You are getting 5 times more bang for your buck.
It is important to note that while the percentage is the same in that example, it is merely an example. Horsepower in that example could be translated to "extra time off the charger", or not a percentage of battery life but a percentage increase in screen-on time. If it is proportionate, you are talking about maybe an extra 15 or 20 minutes of physical interaction with the device before it needs to be plugged in. Again, these are just examples, but the overall impact can be dramatic on a system that is already doing very well at providing the user a long duration of screen-on time before it needs to be connected to a wall.
Another example would be gas mileage... this might be more relevant to what we are talking about. Imagine you have a car that is a big, mean V8 and literally gets 7 mpg. It is way overdue for an oil change, and the old oil is now causing the engine to run slightly less efficiently. Well, the car would likely run out of gas before you even noticed the loss in gas mileage, because it is already an inefficient mechanism when it comes to saving gas.
Take another car that has a rated 60 MPG. Now imagine it is also overdue for an oil change. I would certainly say that you will see the effects of the smallest bit of engine inefficiency, added vehicle weight, or reduced aerodynamics far more clearly, as its expected distance on a single tank of gas is 650 miles, as opposed to the big V8, which can only go 80 total.
Imagine these two cars as older and newer technology in processors. The newer technology has greater potential for power saving simply because its baseline is already a fairly efficient platform. A small change will take it a greater distance. That car going 650 miles on one tank of gas… well, you’ll notice if that MPG drops by 3%... because you’ll be filling up at 600 miles.
In summary, if my phone is going to die after 2 hours off the charger anyway, because it has a small battery or its display is chewing up 95% of the overall power draw, then yes, you are pretty much wasting your time playing with hardware settings otherwise. But that is not the case anymore. Mobile devices are the opposite – very efficient. Which means there is greater potential to minimize their power consumption by tuning, say, a CPU governor to not overreact to activity initiated by the user.
Again, that 2% is a very subjective statement… It means nothing… 2% could be 20 extra minutes of a phone call, or 15 minutes of screen-on time… etc. You see my point.

Touchscreen sample rate and jitter findings

Here's what I've found related to slow scrolling jitter and the touchscreen. When you first open an app, the very first couple slow scrolling swipes produce very smooth screen animation. It will then get jittery but if you exit the app, then reopen, the smoothness will return. Do this experiment in Contacts app to see what I mean.
Now I found this app called "Touch MultiTest" which reads out the touchscreen sample rate as you move your finger on the screen. When you first open it and do a swipe, you see smooth tracking and a solid sample rate reported greater than 120 Hz. However after a couple swipes the dot response becomes jittery and sample rate drops to something around 100 Hz. Closing and reopening the app gets you back to 120 Hz.
So I think this proves the hardware and software touch loop can produce smooth motion, and it's really sampling at 120 Hz. The big question is what exactly degrades after a couple swipes. In the best case it's some driver or software buffer / interrupt handling that degrades. In the worst case it's related to low level hardware issues. I'm hopeful it's software related. By the way somehow Chrome browser always scrolls smoothly with slow swipes. What is Chrome doing differently than all other apps? Just filtering?
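If anyone wants a measurement that doesn't depend on a particular test app: the kernel's raw event timestamps can be dumped with getevent over a rooted adb shell, and the spacing of the position updates gives the real sample rate. A sketch only; you'd substitute your own touchscreen's input node, which getevent -p lists:
Code:
# Timestamped raw touch events; the time between ABS_MT_POSITION_X updates
# is the touch sampling interval. event2 is a placeholder for your panel's node.
getevent -lt /dev/input/event2 | grep ABS_MT_POSITION_X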
Scrappy1 said:
Here's what I've found related to slow scrolling jitter and the touchscreen. When you first open an app, the very first couple slow scrolling swipes produce very smooth screen animation. It will then get jittery but if you exit the app, then reopen, the smoothness will return. Do this experiment in Contacts app to see what I mean.
Now I found this app called "Touch MultiTest" which reads out the touchscreen sample rate as you move your finger on the screen. When you first open it and do a swipe, you see smooth tracking and a solid sample rate reported greater than 120 Hz. However after a couple swipes the dot response becomes jittery and sample rate drops to something around 100 Hz. Closing and reopening the app gets you back to 120 Hz.
So I think this proves the hardware and software touch loop can produce smooth motion, and it's really sampling at 120 Hz. The big question is what exactly degrades after a couple swipes. In the best case it's some driver or software buffer / interrupt handling that degrades. In the worst case it's related to low level hardware issues. I'm hopeful it's software related. By the way somehow Chrome browser always scrolls smoothly with slow swipes. What is Chrome doing differently than all other apps? Just filtering?
Have you tried contacting Essential or possibly using their beta feedback form to tell them about your theory/findings?
Our screens sample at 60Hz. We already know this from the AMA's on Reddit. The test app you're using is inaccurate if it reads 120Hz or even 100Hz.
60Hz sampling in and of itself shouldn't be a problem either, since iPhones (except for the newest ones) sample at 60Hz and everyone knows how smooth they are.
Hopefully there's not some other hardware flaw and it's just Essential's software.
ChronoReverse said:
Our screens sample at 60Hz. We already know this from the AMA's on Reddit. [...]
I don't put much stock in the AMA response since it's so vague and nonspecific, and could be referring to the screen refresh rate (60 Hz), either intentionally or accidentally.
If new iPads and iPhones sample at 120 Hz, it's entirely possible the Essential panel is sampling at 120 Hz.
Try using Touchscreen Benchmark to test and you'll be able to verify the actual samples per second. As a point of comparison, the Galaxy S4 samples at 90Hz and the Shield tablet does a whopping 180Hz!
In any case, it's easy to see that it's not refreshing at 100Hz or 120Hz simply by looking at the number of touch samples that actually appear on the screen. Try it on a faster phone and you can see the higher density of touch responses.
Furthermore, you can't reliably discern the sample rate in the first second so trusting the app saying it's 120Hz and dips to 100Hz is even less reliable than the AMA.
ChronoReverse said:
Try using Touchscreen Benchmark to test and you'll be able to verify the actual samples per second. [...]
I invite anyone to do my test and decide for themselves or measure and produce new data. That's what I'm going for here. Not regurgitation of bland statements.
Scrappy1 said:
I invite anyone to do my test and decide for themselves or measure and produce new data. That's what I'm going for here. Not regurgitation of bland statements.
I just invited you to use a different test instead of relying on one that doesn't spit out reasonable numbers.
Does it make more sense that the Essential is potentially using a 120Hz touchscreen which Essential won't confirm despite it being a feather in their cap (since even iPhones only got 120Hz recently), or does it make more sense that Essential is using a slower-than-average (for Android) panel whose input their software isn't filtering as well as Apple's does? Which is more likely to cause jitter and touch latency?
ChronoReverse said:
I just invited you to use a different test instead of relying on one that doesn't spit out reasonable numbers. [...]
It's actually that you're misunderstanding terminology...
You're mistaking sample rate for refresh rate...
Refresh rate is how many times per second the screen is redrawn...
Sample rate is how many times per second the screen reads touches...
No way you can tell the difference between 120Hz vs 100Hz.
Sent from my PH-1 using Tapatalk
rignfool said:
It's actually that you're misunderstanding terminology... [...]
No, I'm referring to the touchscreen. Obviously the Essential LCD only refreshes at 60Hz (only the Razer and iPad Pro refresh at 120Hz), but the touchscreen also samples at 60Hz, which is common for lower-end Androids (90Hz and 120Hz are the other common sampling rates found in Android devices).
The new iPhone X's OLED still refreshes at 60Hz but has a 120Hz-sampling touchscreen, up from the 60Hz of earlier iOS devices (except for the iPad Pro). I also mentioned the Shield tablet sampling at 180Hz, and there's no mobile device with a screen refresh that fast either.
LNJ said:
No way you can tell the difference between 120hz vs 100hz.
The drop to 100 Hz after a couple of seconds is "indicative of the problem", not proof that a 100 Hz rate couldn't be smooth in a properly designed device. Something comes unhinged at the point we see the drop to 100 Hz. It could be that the touch buffer / event queue is not being serviced fast enough due to a low-level driver or hardware issue. It could also be that some piece of software in the critical path starts consuming more time than allowed, leading to non-uniform response. Or it could be actual stuttering of the hardware.
When you exit and then restart an app, the touch event pipeline is flushed, so things are fixed again for a couple of seconds.
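One way to catch the exact moment things come unhinged, without root: log frame-to-frame intervals from Choreographer. A minimal sketch (class name and the 20 ms threshold are my choices, not anything official):

Code:
import android.view.Choreographer

// Sketch: log frame-to-frame intervals to catch the transition from
// smooth to jittery. On a 60 Hz panel frames should land ~16.7 ms apart;
// intervals well above that mark late/dropped frames.
class FrameJitterLogger : Choreographer.FrameCallback {
    private var lastFrameNanos = 0L

    override fun doFrame(frameTimeNanos: Long) {
        if (lastFrameNanos != 0L) {
            val deltaMs = (frameTimeNanos - lastFrameNanos) / 1e6
            if (deltaMs > 20.0) {  // noticeably over the 16.7 ms budget
                android.util.Log.w("Jitter", "late frame: %.1f ms".format(deltaMs))
            }
        }
        lastFrameNanos = frameTimeNanos
        Choreographer.getInstance().postFrameCallback(this)
    }
}

// Start from the UI thread, e.g. in onResume():
//   Choreographer.getInstance().postFrameCallback(FrameJitterLogger())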
YouTube app
Scrappy1 said:
Here's what I've found related to slow scrolling jitter and the touchscreen. [...]
I have noticed that if you launch the camera and then open the YouTube app (or whatever app shows the touch scrolling jitters for you), the touch scrolling is nice and smooth. Then after some time the jitter comes back. The touch scrolling in Chrome is perfect and I wish it was the same everywhere. For some reason the YouTube app performs the worst for me. Chrome must have received an update a while back, since I used to get bad touch scrolling on that too. The thing that worries me is that some people claim touch scrolling is perfectly smooth on their device. Hopefully that's a case of them not noticing it and not a case of actual hardware differences.
mhajii210 said:
I have noticed that if you launch the camera and then open the YouTube app (or whatever app shows the touch scrolling jitters for you), the touch scrolling is nice and smooth. [...]
Cool tip! I hadn't noticed that. Opening camera then switching to contacts had me scrolling smooth for many minutes. However after a few rounds of tests it lost the magic. I could no longer use camera open first to produce the smooth scrolling. So there are several factors at play here and this could use more investigation. Most of all though this gives me hope the issue can be totally fixed in software.
I'm starting to think the thing that goes bad and causes choppiness is the rendering pipeline. I enabled "Profile GPU Rendering" and then did a screen capture after scrolling my battery stats in settings for both 1) good condition just after launching settings when scrolling is smooth and 2) bad condition that kicks in after a few seconds when things get choppy. The bad condition shows vastly inflated rendering time which blows the 60 FPS (green line) budget. The largest increase is in red (command issue), but EVERYTHING is inflated in the bad condition. What could cause this?
The captures of the good and bad conditions are attached.
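For anyone who wants numbers instead of eyeballing bar charts: the FrameMetrics API (API 24+) should expose the same per-stage timings that "Profile GPU Rendering" draws, including the red "command issue" bar. A rough sketch:

Code:
import android.app.Activity
import android.os.Handler
import android.os.HandlerThread
import android.view.FrameMetrics
import android.view.Window

// Sketch (API 24+): log per-frame render-stage timings so the good and
// bad states can be compared numerically.
fun attachFrameMetrics(activity: Activity) {
    val thread = HandlerThread("frame-metrics").apply { start() }
    activity.window.addOnFrameMetricsAvailableListener(
        { _: Window, metrics: FrameMetrics, dropped: Int ->
            val toMs = { key: Int -> metrics.getMetric(key) / 1e6 }
            android.util.Log.d(
                "FrameMetrics",
                "total=%.1fms draw=%.1fms commandIssue=%.1fms dropped=%d".format(
                    toMs(FrameMetrics.TOTAL_DURATION),
                    toMs(FrameMetrics.DRAW_DURATION),
                    toMs(FrameMetrics.COMMAND_ISSUE_DURATION),  // the red bar
                    dropped
                )
            )
        },
        Handler(thread.looper)
    )
}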
Turns out the reason the rendering pipeline starts taking so long is that the application thread moves from the high-performance CPU cluster to the low-performance CPU cluster. Using the paid version of System Monitor, I opened a floating window of CPU load and frequency. I then again opened battery settings and scrolled around in the good and bad states. I can see the load is on the high-performance cluster (5-8) right away, and those cores are running at 2.4 GHz; hence everything is smooth. When the jitters set in, the load has moved to the low-performance cluster (1-4), running at a much lower clock rate, under 1 GHz. I do believe this is probably fairly normal Android behavior, but it's obviously tied to the slow scrolling jitters for us. It could be a subtle governor or big.LITTLE thread-scheduling issue somehow playing into the touchscreen weirdness, I suppose.
The two captures attached show the issue. One was taken right after launching battery settings, when things are smooth and CPUs 5-8 are screaming. The other was taken after things went jittery; here you can see the load that was on 5-8 has moved to 1-4, and the clock frequency is much lower (hovering between 300 and 1000 MHz).
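You can watch the migration yourself by polling sysfs; scaling_cur_freq is world-readable on most kernels, no root needed. A quick sketch (note that sysfs numbers the cores 0-7, while System Monitor labels them 1-8; on the PH-1's Snapdragon 835, cpu0-3 is the little cluster and cpu4-7 the big one):

Code:
import java.io.File

// Sketch: snapshot per-core frequencies to watch load move between the
// little cluster (cpu0-3) and the big cluster (cpu4-7).
fun snapshotFreqs(): String =
    (0..7).joinToString(" ") { cpu ->
        val f = File("/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_cur_freq")
        val khz = f.takeIf { it.canRead() }?.readText()?.trim()?.toLongOrNull()
        "cpu$cpu=${khz?.let { it / 1000 } ?: "?"}MHz"  // "?" = core offline/hidden
    }

fun main() {
    repeat(30) {
        println(snapshotFreqs())
        Thread.sleep(500)
    }
}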
Scrappy1 said:
Turns out the reason the rendering pipeline starts taking so long is that the application thread moves from the high-performance CPU cluster to the low-performance CPU cluster. [...]
Let's try this
@DespairFactor
GPU governor
rignfool said:
Let's try this
@DespairFactor
Well, I can tell you it's not all down to CPU performance, since setting the GPU governor to performance on Oreo beta 2 completely gets rid of the touchscreen jitters for me. I'm running Oreo beta 2, the Rey.R3 kernel, and Magisk 15.2. Using EX Kernel Manager to set the GPU governor to performance, I have eliminated the touch scrolling microstutters. Try it out for yourself and see! I also set the CPU governor to conservative to compensate for the slightly increased battery usage. The phone is blazing now. Here's the link to the kernel: https://forum.xda-developers.com/essential-phone/development/kernel-rey-kernel-t3723601
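For the curious, what EX Kernel Manager is presumably doing under the hood is a one-line sysfs write. A hedged sketch, assuming root and an Adreno GPU (the kgsl path below is the usual one on Snapdragon devices, but it isn't guaranteed on every kernel):

Code:
// Sketch: set the GPU devfreq governor on an Adreno/Snapdragon device.
// Requires root; path and available governors vary by device and kernel.
fun setGpuGovernor(governor: String) {
    val path = "/sys/class/kgsl/kgsl-3d0/devfreq/governor"
    val p = ProcessBuilder("su", "-c", "echo $governor > $path").start()
    if (p.waitFor() != 0) {
        android.util.Log.e("GpuGov", "write failed; rooted? path correct for this device?")
    }
}

// e.g. setGpuGovernor("performance")
// List the choices first with:
//   cat /sys/class/kgsl/kgsl-3d0/devfreq/available_governors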
mhajii210 said:
Well, I can tell you it's not all down to CPU performance, since setting the GPU governor to performance on Oreo beta 2 completely gets rid of the touchscreen jitters for me. [...]
Thanks for your input! I would go down the root and tweaks path if I didn't have to use my phone for work with the Google device policy and all. Hoping for some jitter improvement in next official stock update.
rignfool said:
Let's try this
@DespairFactor
I think we can move the touchscreen to its own workqueue, but I'm not sure if it'll handle this.
mhajii210 said:
Well, I can tell you it's not all down to CPU performance, since setting the GPU governor to performance on Oreo beta 2 completely gets rid of the touchscreen jitters for me. [...]
Post a video. In all likelihood, it's just a placebo effect. I've heard time and time again people claiming that the slow-scrolling stutter is gone. It's never once been proven. Here's a side-by-side comparison vs the Pixel XL.

Question: Configuration for CODM, optimization, applications, some problems

I have the Snapdragon version. The first thing I did was root it and install these apps:
+ Package Disabler Pro (to disable all GOS services, and all Bixby services too, since I don't use them)
+ FDE.AI (to increase performance; highly recommended, although its option to force a high screen refresh rate isn't compatible with this device and gives errors)
+ GLTools (to force 90 fps in CODM, although here I have problems with FPS drops and heating; I use an external fan cooler to keep the heat down as much as possible, but I still get low-fps stutters)
+ KillApps Pro (to close any application running in the background, including system ones, although they reactivate themselves right away)
I also lower the resolution to FHD+ and set the colors to Natural. My reasoning is that this should load the processor less and generate less heat, and that part did work for me: the temperature stays stable and the battery lasts much longer.
I've had the phone for 15 days, and in my limited experience with it, it is not a phone for playing CODM for many hours without external cooling, since it gets very hot and loses a lot of performance.
In general I expected more from it. I thought that by forcing 90 fps, adding external cooling, and lowering the resolution I would get high and consistent performance in CODM, so it disappointed me.
Has anyone found other configurations that keep the game stable and crash-free?
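Before forcing 90 fps with GLTools, it might be worth confirming the panel actually exposes a 90 Hz mode to Android; if it doesn't, the "forced" frame rate is fighting the display rather than using it. A quick Kotlin check:

Code:
import android.app.Activity
import android.os.Build

// Sketch: list the display modes Android reports, to see whether a
// 90/120 Hz mode is actually exposed on this device.
fun logDisplayModes(activity: Activity) {
    val display = if (Build.VERSION.SDK_INT >= 30) activity.display
                  else @Suppress("DEPRECATION") activity.windowManager.defaultDisplay
    display?.supportedModes?.forEach { mode ->
        android.util.Log.d(
            "DisplayModes",
            "${mode.physicalWidth}x${mode.physicalHeight} @ ${mode.refreshRate} Hz"
        )
    }
}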
CoDM just sucks on this phone. My OnePlus 8 Pro with a Snapdragon 865 was at least playable. I'm getting a lot of freezing even when not at max settings.
