I did an experiment with some interesting results. It started out as my beginner's attempt to compare two kernels.
It evolved into providing insight (I think) regarding the up/down threshold parameters for the "conservative" cpu governor.
If you don't want to read the whole thing, jump to the conclusion section posted at the end.
Phone configuration – installed qkster’s UCLB3 with AT&T bloatware removed, added custom kernel, rooted.
To remove variable cpu loads, I turned wifi/data off and turned off the continuously-running programs that I have installed myself (Power Tutor and Tasker).
To create a steady cpu load, I started the program "relax and sleep" (a calm background-noise program, available for free). I checked one audio channel in the program and pressed the back button to put the program in the background, still creating noise. (I think relax and sleep is a good choice of program for cpu testing in general because it lets you enable a variable number of channels, which creates a variable cpu load, although in this case I used only one channel. Also note that you cannot recreate this experiment with an mp3 music app instead, because that uses much less cpu than one relax and sleep channel.)
Then I started setcpu for monitoring and experimentation. Repeated with several different kernels.
Results with Entropy’s daily driver kernel.
I set the test setup in setcpu to conservative governor, Fmax = 1200, Fmin = 100, i/o scheduler = noop.
The cpu frequency in Mhz now has the following pattern:
800, 1000, 800, 1000, repeat ... (frequency changing approximately once per second)
Results with Zen Infusion-Z A/1600 kernel.
I set the test setup in setcpu to conservative governor, Fmax = 1200, Fmin = 100, i/o scheduler = noop (same as before, intending to compare the performance of the kernels).
The cpu frequency in Mhz now has the following pattern:
1200 (constant).
Ok. On the surface one might conclude Entropy's kernel is somehow handling the load better without ratcheting up the frequency. But the story gets more interesting than that...
Next I tried Zen's same Infusion Z/1600 kernel with everything the same except changing the cpu governor from "conservative" to "on-demand".
The cpu frequency in Mhz now has the following pattern:
200, 400, 200, 400, 200, 400, 200, 400, 200, 400, 200, 400, 200, 400, 200, 400, 1200 repeat
(changing about once per second, mostly 200, 400, pops into 1200 only very infrequently).
But wait! The "conservative" governor is supposed to be better on the battery than the on-demand governor, and yet for the exact same conditions, we're getting a higher cpu frequency (1200 constant) with conservative than with on-demand (mostly 200/400 with occasional 1200). It's the exact opposite of how it's supposed to be. Surprising, don't you think?!
So now let’s look at some other governor settings that don't seem to get much attention.
Go to the "governor" page of setcpu with "conservative" selected on the main page. The following values appear consistently after installing each kernel, so I am ASSUMING these are the default values provided by the kernels themselves (open to comment if I have somehow come to the wrong conclusion).
For Zen’s Infusion-Z (A or B, 1600 or 1400)
up threshold = 80
down threshold = 20
(also freq step = 5, sampling rate = 78124 although I don’t think these are important for this post)
For Entropy’s DD
up threshold = 50
down threshold = 35
(also freq step = 20, sampling rate = 40000 although I don’t think these are important for this post)
(both kernels have sampling down factor = 1, ignore nice load = 0).
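Side note: if you want to double-check these values outside of setcpu, the same tunables can be read straight from sysfs. Here is a rough sketch assuming the standard Linux location for the conservative governor's tunables; some kernels put them under each cpuN/cpufreq directory instead.
Code:
# read the conservative governor's current tunables (standard sysfs location;
# adjust the path if your kernel exposes them elsewhere)
cd /sys/devices/system/cpu/cpufreq/conservative
for f in up_threshold down_threshold freq_step sampling_rate sampling_down_factor ignore_nice_load; do
echo -n "$f = "; cat $f
done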
I think we can explain my "experimental" results by examining the above up and down thresholds and making some assumptions about the nature of the load (my assumptions are admittedly contrived in an attempt to explain these observations, but they seem reasonable to me).
I ASSUME the steady cpu load I have created in my setup varies in the range of 350-400 Mhz, quasi-steady state (not perfectly constant due to other processes jumping up in the background).
I ASSUME that before the steady cpu load is reached, there is a temporary increase in cpu loading to 700Mhz or more, associated with me flipping screens around to get from the relax and sleep application to the setcpu application. Within several seconds, this temporary increase is gone and only the quasi-steady 350-400 Mhz portion remains.
First look at the performance of Zen's Infusion-Z A/1600 in conservative with default settings in the above experiment. That initial 700Mhz load spike was enough to get us above the up threshold of the 800Mhz level (80%*800Mhz = 640Mhz) and push us to 1200Mhz (1200 comes after 800 in the progression for Zen A, which has no 1000). Once we get to 1200Mhz, we are NEVER going to come down from there until the load drops below the down threshold of that level, which is 240Mhz (20%*1200Mhz = 240Mhz). And with my relax and sleep application running at 350-400Mhz, that won't happen. That is quite a depressing thing to think about – I could leave relax and sleep on for an hour as background noise, and my cpu would be buzzing at 1200Mhz even though the load is only 350-400Mhz.
This seems very undesirable for battery life.
Now let's look at the performance of Entropy's daily driver in conservative with default settings in the above experiment. The postulated 350-400Mhz cpu load occasionally exceeds the up threshold of the 800Mhz level (50%*800Mhz = 400Mhz) and, once at 1000Mhz, occasionally drops below the down threshold of the 1000Mhz level (35%*1000Mhz = 350Mhz). (And now you know why I postulated 350-400.) I have two comments about these Entropy results. The first is minor/tangential, the second more important.
1 - First comment (minor/detour) has to do with cycling between different cpu frequencies that is created by the governor (not the load). I don't think it's any problem at all, but this type of cycling is more likely to occur when the differences between adjacent frequencies are large. For example, let's say the cpu load were rock solid, pure steady state (not varying) at 250Mhz. The up threshold of the 400Mhz setting is 200Mhz (50%*400Mhz) while the down threshold of the 800Mhz setting is 280Mhz (35%*800Mhz). So we have postulated a situation where the cpu demand is pure steady state (250Mhz), yet the governor will never find a steady-state solution... if it's at 400Mhz it wants to upshift, and if it's at 800Mhz it wants to downshift. Again, I don't think it's a problem (it's probably ok to let the two frequencies time-share back and forth), but there is a strategy to avoid it if we want to: considering the highest ratio between adjacent frequencies (for these kernels) is 2.0, we should set things so the ratio (UpThreshold / DownThreshold) > 2.0 in order to avoid this cycling (which, again, is probably not a problem, more later).
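If you want to check which adjacent frequency pairs can get stuck in this kind of ping-pong for a given pair of thresholds, the test is just the arithmetic above applied to every pair. A quick sketch (my own, not from any kernel source); the frequency list is what I believe these kernels offer and may need adjusting:
Code:
# steady-state cycling between adjacent frequencies f_lo and f_hi is possible
# whenever up_threshold * f_lo < down_threshold * f_hi
awk -v up=50 -v down=35 'BEGIN {
n = split("100 200 400 800 1000 1200", f, " ");
for (i = 1; i < n; i++)
if (up * f[i] < down * f[i+1])
printf "possible cycling between %d and %d MHz (loads %.0f-%.0f MHz)\n", f[i], f[i+1], up*f[i]/100, down*f[i+1]/100;
}'
With Entropy's 50/35 defaults this flags the 400/800 pair (loads of 200-280 MHz), which is exactly the example above.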
2 – Second comment is more important because it relates to battery usage (as I perceive it). The originally postulated load that explains these experimental results is 350-400Mhz. Yet the cpu is running at 800-1000Mhz. Twice as high. That's wasting some battery, I think.
To summarize the results so far, it seems to me that Zen's kernel default thresholds have the potential to waste battery due to the low down threshold (20%), which can keep the cpu at a high rate forever even though the load has decreased substantially. In theory we could be running the cpu almost 5 times as fast as it needs to be, in the situation where the steady load decreases to just above 20% of the higher level. Entropy's kernel default thresholds have the potential to waste battery due to the low up threshold (50%). In theory we can be running the cpu almost twice as fast as it needs to be, in the situation where the steady load increases to just above 50% of the lower level. Entropy's kernel defaults also create the potential for continuous cycling between frequencies even in the presence of a perfectly steady cpu load, since the up/down ratio is less than 2 (I don't think that's a problem; the only reason I mention cycling is that it feeds into my strategy for selecting the down threshold - see below).
So what settings should we use for the up/down thresholds? Actually, I haven't done my complete due diligence in searching before posting this thread; if someone has a good link with recommendations and/or discussion on this subject I'd be interested. I have seen the generic xda thread on governors, and I don't think it was covered there in terms of specific recommended values. Here's my thought process fwiw. Higher is better for battery on both numbers (at some possible expense of performance). I remember seeing on other sites a default up threshold of 95% (listed, but not discussed). That makes sense to me for battery saving... shift up at the last minute. Perhaps this high value slightly slows the response to a demand increase, but I don't think it's much slower (especially for a rapid cpu load increase, which is the most critical case for response... a rapid increase means a short time to get from 50% to 95%, and a short time means not much of a response penalty), and it certainly seems worthwhile to strive for an efficient operating point in long-term steady state. Additionally, we're talking about the "conservative" governor, which is supposed to favor battery (we can set up a setcpu profile to invoke on-demand or interactiveX in situations when we want more responsiveness and don't care as much about battery; at least these are available in Zen's). I don't recall seeing any number for the down threshold, but it should be as high as possible, again to save battery. How high? I don't know. The only way I can think of to put a limit on it is to impose an arbitrary (maybe unnecessary) requirement that we don't want any cycling in pure steady state, as discussed above. This means we need the down threshold at least a factor of 2 below the up threshold. So I pick 95/2, rounded down to the nearest round number: 45%. There may be further improvements if we drop the requirement to avoid cycling and allow an even higher down threshold, but at least we know a down threshold of 45% has moved in the right direction for battery compared to the defaults. So up/down 95/45 is my pick for now.
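For reference, changing those two sliders in setcpu should amount to something like this under the hood (a sketch assuming the standard sysfs path and root; I haven't traced what setcpu actually writes):
Code:
# switch to conservative and set 95/45 by hand (root required; the kernel
# expects down_threshold to be lower than up_threshold)
echo conservative > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo 95 > /sys/devices/system/cpu/cpufreq/conservative/up_threshold
echo 45 > /sys/devices/system/cpu/cpufreq/conservative/down_threshold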
Using the conservative governor with a 95% up threshold and 45% down threshold (still noop i/o) in the above conditions on Zen's kernel, I'm seeing the frequency pattern
400, 400, 400, 400, 400, 400, 400, 400, 800, repeat
in other words mostly 400, intermittent jump to 800.
Certainly the up/down 95/45 settings for the conservative governor perform better batterywise than the default settings for the conservative governor in both kernels, for this one experiment. To me, it seems very reasonable to expect them to also be better batterywise across a wide range of expected operation, but it's open to comment.
Small detour - why did we do better batterywise on Zen's on-demand default settings than on Zen's conservative default settings for this particular cpu loading? The settings for Zen's on-demand default include an up threshold of 95% and no down threshold, so the on-demand governor apparently finds some other way to shift down. Since the 20% down threshold that was causing the problem in Zen's conservative default settings is not present in the on-demand governor, that probably explains why on-demand didn't get hung up at the higher level and performed better. Another thing to note: if a 95% up threshold is responsive enough for on-demand, it should surely be responsive enough for conservative... which supports the previous suggestion to increase the conservative up threshold to 95%.
CONCLUSIONS:
There is only one thing in this entire thread that I am completely 100% positive about, and it is that Zen and Entropy know light-years more about this stuff than me. In fact, that is the very reason I was extremely careful to record the as-found default settings, in order to preserve any intelligence that went into those defaults before I started tweaking.
So I can reach one of two conclusions:
#1 – I am completely misunderstanding how this conservative cpu governor works
or...?
#2 – The developers never intended for the “default” values to be used, instead they envisioned the users would adjust them as needed.
In the event that #2 is correct, then it would seem logical for battery-conscious users to tweak these up/down threshold settings of the conservative governor. My thought would be to set them to 95/45 by the logic above... which may or may not be considering all relevant factors. I'm open to thoughts and comments...
In the above analysis, I have assumed that power consumed by the cpu can be predicted from the cpu frequency (for a given voltage setting of course).
I now believe that assumption might be incorrect.
The reason I believe it is false is a result of another experiment I just did.
I set the cpu governor to performance to maintain constant 1200Mhz.
Then I looked at the cpu power usage trace in "Power Tutor" program.
I expected to see power attributed to cpu usage as constant, but it was varying up and down.
And by moving the homescreens around, I could create a dramatic and predictable increase in power usage of cpu (as indicated on Power Tutor).
All of this change in power consumption of the cpu occurred while the cpu governor was in performance mode with cpu frequency constant at 1200Mhz.
I didn't expect that. I can't really explain it (can anyone else?). But clearly there is more to the story than I thought (assuming that the cpu power usage reported by the Power Tutor is correct, which I'm not sure of either).
To evaluate the battery friendliness of various governor settings, it might be more useful to watch the Power Tutor results when performing the above experiments, instead of just watching the cpu frequency as I did before.
Over my head...
How does the battery cycle pan out?
It would be nice if you or someone else had a spare phone to test this battery consumption theory.
I would wonder if the report of consumption is also correct.
The bottom line that I would be interested in seeing is how long can the phone, running a certain kernel and governor, last.
For example: Charge to 100%. Take off charger. Wifi off, Cell data off and in Airplane mode (removing signal variable).
Then run kernel with governor - record the battery duration.
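Even something crude run from a terminal would collect the raw data for that. A rough sketch, assuming busybox and the common /sys/class/power_supply/battery/capacity node (which may be named differently on some phones):
Code:
# log a timestamp and battery percentage once a minute; the drain rate is then
# (change in %) / (change in time)
LOG=/sdcard/drain_log.txt
while true; do
echo "$(busybox date +%s) $(cat /sys/class/power_supply/battery/capacity)" >> $LOG
sleep 60
done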
e.pete - nice work! I appreciate your empirical approach to this topic.
I can add the following: you have described accurately how the conservative governor works. For OnDemand, the governor behavior results in the CPU at max under load and minimum when idle, with a smaller amount of time being spent at the steps in between based on thresholds. On my phone today with OnDemand, for example, I'm at 1600mhz 6% of the time, 100mhz 5% of the time, and 800mhz 2% of the time. Deep sleep is 83% and the other CPU frequencies are all below 0.5%.
In general use, conservative should be kinder to the battery and to the hardware. The recommendations I have seen for Linux platforms are to use conservative where battery life matters and ondemand when there is a constant external source of power (i.e. a PC or server). Of course, actual use determines how the governor performs. Most smartphones have a lot going on even when the screen is off.
A good indicator of average CPU use over the course of a day is CPUSpy. This app, combined with a decent battery monitor, can help tell the story from a macro/whole-system perspective over time.
On the "be kind to hardware" topic, conservative should increment and decrement to adjacent frequencies based on load. This behavior might be happening too fast for SetCPU or another realtime monitor to capture... that's where CPUSpy can show what is happening over a larger period of time. These more gradual transitions may result in less wear and tear on the phone hardware, but I have not seen any significant evidence that this is a factor in the usual life span of a smartphone. (On the flip side, setting the governor to performance and OC to the max setting... that is NOT recommended and could harm the phone.)
That said, the two kernels you tested have the following default characteristics:
Entropy DD - Conservative/BFQ - Optimized for stability and battery life
Infusion (Bedwa/Zen) - OnDemand/CFQ - Optimized for performance
The Infusion kernels do not include optimization settings for conservative. As you surmised, the expectation is that if you are going to change these settings you have some idea of what you are aiming for and will adjust accordingly.
If battery life is your aim, I've found that the best savings are realized in optimizing transition to sleep when the phone is not being used, minimizing the number of apps that attempt to keep the phone awake, and being selective in your use of wifi and data radios (although too much mucking around with this last option can lead to triggering some of the known bugs in these kernels which manifest as a higher than normal Android OS or Dialer/RILD drains - as seen on the standard battery usage screen in Settings).
On this last topic, there's another thread (which I see you have visited) which covers discussion and work on these known battery drain anomalies: http://forum.xda-developers.com/showthread.php?t=1408433
Here is some additional information on governors courtesy of Big Blue... http://publib.boulder.ibm.com/infoc...?topic=/liaai/cpufreq/TheOndemandGovernor.htm
And a bit more info ... https://wiki.archlinux.org/index.php/CPU_Frequency_Scaling
Truckerglenn said:
Over my head...
:what: I'm so glad there are people much smarter than myself here. Great work electricpete :thumbup: even if I only followed about half of it
Sent from a de-FUNKt Infuse
Pete - Here's a link to a thread that has a lot of information about governors, i/o schedulers, tweaks, scripts, and kernel objects:
http://forum.xda-developers.com/showthread.php?t=1369817
Thanks everyone, a lot of good info.
Especially Zen, very useful info and links.
It was a very interesting comment about certain governor strategies being hardware-unfriendly if they jump the cpu straight from min to max.
I never realized that was a factor (only thought the max speed was important).
But it definitely sounds plausible. Maybe (?) the rapid temperature increase causes uneven temperature (the cpu gets hot before the attached plate gets hot) and therefore uneven thermal expansion, which causes mechanical stresses. I can imagine there are other subtle aspects of the cycling up/down that can be important. If you have any more info or links on the effects of cpu governor strategy upon hardware life readily available, I'd be very interested to hear it. (If not readily available, that's fine too, I will do some googling.)
I did find this link which suggested it's better for the hardware life to run at 100% (I guess for us that's 1200Mhz) than it is to cycle up/down. It's not written about phones but about PCs. There might be some differences in the technical aspects. There are of course big differences in the priorities for PCs... they don't care about power usage as much as phones do, and pc users probably expect a longer life than phone users do.
http://www.overclockers.com/overclockings-impact-on-cpu-life/
Thanks again.
I will report back if I get some free time to continue experimenting... maybe this weekend.
Primary consideration (as you've noted) with smartphones is battery conservation. ARM processors are engineered to operate at multiple frequency steps, and to turn off where possible. Without this capability phones would need a much higher capacity battery. As for PCs, current processors include frequency stepping technology to reduce power consumption and heat, and perhaps extend life by keeping temps lower.
The main conclusion of the article you referenced (which is 13 years old, btw, but does contain a wealth of good foundational information) is that heat is the primary enemy. This is a major factor with smartphones as they have limited ability to dissipate heat. An Infuse running at 1200mhz (or 1600mhz OCed) confined to a purse or pocket, or (as reported in these forums a while back) under a pillow gets hot very quickly. This will lead to conditions that will harm the phone. At one time, my phone had an error condition causing the Dialer to go crazy (rild process) and peg the CPU at 100% for an extended period of time, while the phone was also plugged into a charger (thus heat from the charging process too). The end result of this was a temperature that tripped a heat sensor threshold, causing the phone to shut itself off. So there are, at least, limited protections against extreme events.
As I noted above, I've not seen any evidence that the normal (or even OCed) frequency stepping that occurs with smartphones leads to failures within the normal in service period for these devices - 2 to 4 years in most cases. Running at 100% all the time may put your phone's health at risk and will definitely impair your battery life.
Zen - Good points. One thing I do take away from the article (along with your comments) is the cumulative effect of cycling. So when I settle on up/down thresholds, I may try to avoid putting them too close together in order to avoid extra cycling (keep the Max/Min threshold ratio > 2), although I do realize these particular cycles between two adjacent frequencies are not as bad as a cycle between the min and max frequency.
more test results
I have completed some testing using Power Tutor and the results are reported in the attached spreadsheet.
I would say the results only muddy things further. Don’t read any further if you don’t have a tolerance for ambiguity.
SETUP (common to all tests)
In all tests, I had similar setup as in the original post: 1 channel of “relax and sleep” running to create a constant cpu load, and all other continuous-run programs turned off except power tutor.
Some other details common to all tests: Tasker off, Wifi off, data off, Power Tutor on
No uv
Noop i/o scheduler used throughout.
WHAT CHANGES BETWEEN TESTS:
See the spreadsheet, tab labeled “summary”.
The things that changed between tests are in rows 3 thru 8, labeled “Tested Configuration”
As you can see, between tests, I varied the Up threshhold and Down threshhold. I varied the kernel. I varied the governor (mostly conservative, but performance used).
WHAT WAS RECORDED DURING EACH TEST:
1 – Recorded power usage of CPU, LCD, Audio as reported in Power Tutor over the course of one minute. I converted them to battery %/hr (conversions shown in tab “Notes”) and listed them in rows 12-14
2 – Recorded the actual cpu frequencies seen in setcpu “home” screen, similar to original post and listed these in row 15. I attempted to guess the average frequency over time and put this in row 16.
3 – Rows 18-23 are the Quadrant results for the six categories that Quadrant reports (yes, I know people don't like Quadrant, just recorded as a datapoint)
WHAT PATTERNs EMERGE:
1 – Entropy’s DD and Zen’s Infusion-A use comparable power (as reported by Power Tutor) in this particular experiment.
2 – Zen's Infusion does better on Quadrant score in this particular experiment, when both are set at the same governor configuration (100-1200, conservative). Not surprising, since Zen said he has optimized for performance.
3 – How does the governor frequency affect power usage? This is the muddy part. There is no doubt that if we blindly take the data at face value, then there IS a correlation between cpu frequency and power attributed to the cpu by Power Tutor. However the correlation that emerges from the data is in the opposite direction from what anyone in the world would think: this data suggests that increasing CPU frequency causes decrease in power consumption reported by power tutor. See tab labeled “chart 1” for graphical depiction of this result.
As you can see in the graph, there is not a random spread of results (as would be the case if random unaccounted-for errors were at work). There is a definite correlation. What it suggests, perhaps, is that there can be a systematic error in the way Power Tutor measures power that depends on cpu frequency... in other words, the error itself (between measured and actual) somehow depends on cpu frequency.
So, I am just reporting some results. I am definitely not suggesting anyone overclock to save power (that would be truly bizarre and I’d probably be kicked out of xda for suggesting something so silly).
On the other hand, as stated in the 2nd post of this thread, I'm still very leery of using cpu frequency as an indicator of the power the cpu is drawing... because there is just too much going on inside that black box that I don't know about. For one thing, the cpu itself may draw different amounts of power at a given frequency depending on its loading, because the registers may not be doing anything at low loads. For another thing, there are a lot of other things in the phone (like RAM and the bus) that may draw some power but probably get lumped in with the reported cpu power in Power Tutor and others. Perhaps the cpu is somehow more efficient at interfacing with these other parts of the system when the cpu is at high speed, enabling it to reduce the power they draw. The point is, it's a lot more complicated than I assumed in my first post.
I have heard Entropy mention that before changing kernels we should always reset UV settings (and reboot) and reset other cpu related settings (Fmin and Fmax I assume).
I would like to add another item: always uninstall setcpu before changing kernels and reinstall it after you change.
The reason: I have seen some very weird results of setcpu when I left it installed in between swapping kernels. Like for example cpu running at 1600Mhz even though Fmax is 1200Mhz and there are no profiles allowing 1600. Those weird results are not included in the above data (I observed the frequencies during each trial as reported in the spreadsheet).
Power Tutor has a great interface and very detailed stats available. Seems to have great credentials based on their website.
But I can only conclude we can't trust it for our particular phone, because of the results above (power draw goes down as cpu speed goes up) and some other results I have seen (it seems to suggest that the power used by my display does not change depending on dark/light background, and also that the power used by the phone does not change when I change the volume of music playing).
So, I’m looking for another way to be able to track power usage closely.
I kind of like qkster’s idea to just watch the battery go down.
I’d like to try to automate that using Tasker. I can write a program which will help me build a log of power usage.
The interface will be:
push a start icon and it prompts me to enter description of conditions that will be tested
wait some period of time (this is the constant-load test period that we're evaluating...may be listening to mp3)
push a stop icon and it prompts me for comments about anything that happened during the test.
At time of pressing the start icon, it will also record from the system:
1 - clock time (in seconds)
2 - voltage in millivolts to 4 digits of resolution (like 3784 millivolts)
3 - battery life remaining in percent to 2 digits of resolution (like 43%)
The same info will be recorded from the system upon pressing the stop icon.
All this info will be appended to a logfile and we can compute drain based on change in battery divided by change in time.
I can get these voltage and % life stats using the method suggested by Brandall’s tutorial here:
http://tasker.wikidot.com/using-linux-shell-with-tasker-for-a-technical-battery-widget
I couldn’t get the grep command to work, but I can still extract the required voltage and percent-life-remaining from the Battery sysdump using the tasker variable splitter command (I’ve already got that part programmed).
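For what it's worth, the same two numbers also show up in the battery service dump, so if busybox is installed this works from adb or a terminal app as well (a sketch; the field names may differ slightly between Android versions):
Code:
# the battery service dump contains both the percentage and the voltage
dumpsys battery | busybox grep -E "level|voltage"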
No-load voltage has a roughly known relation to battery life, but there's also the matter of the voltage drop across the internal battery impedance, which varies with load at the time of the measurement. So we don't see the no-load voltage, we see something lower, which makes the whole thing somewhat variable.
Percent Remaining is the exact thing I want. But it is only given rounded to two digits (43%). If I wanted to do a trial run listening to a 5-minute MP3 draining something like 12% per hour, the battery drop during that 5 minutes would be only around 1%... the difference between two kernels or cpu frequency settings would be only a very small fraction of that 1%, so comparing start and stop values that are both rounded to 1% would introduce an enormous error compared to the thing we're interested in. I can surely reduce that error by working with longer times, but that starts to become a PITA. That may end up being the only solution, but if there's any way to avoid it, I'd like to be able to gather data in shorter chunks.
Which leads me to a QUESTION:
Does anyone know whether there is any way to retrieve or estimate "battery % remaining" with greater resolution than two digits (i.e. 43.26% instead of just 43%)?
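(One lead I haven't chased down yet: some kernels expose the battery's coulomb counter in sysfs, which would give much finer resolution than the rounded percentage if our phone happens to have it.)
Code:
# charge_counter, where present, reports accumulated charge in microamp-hours;
# two readings divided by the elapsed time give an average drain rate
cat /sys/class/power_supply/battery/charge_counter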
unintended consequences from changing the up threshold
I used the following setup for almost a month:
Zen’s Infusion A Kernel (with my stock GB), conservative governor
UpThreshold = 95
DownThreshold = 45
Only twice during the month, I saw the following:
Received a phone call. I could see the name of the caller. I couldn’t hear the caller. When I finally got hold of them later, they told me they could hear me even though I couldn't hear them.
That was very tough to figure out because it only occurred on two out of probably 30 or 40 phone calls received in a month.
The two phone calls did originate from cell phones in the same geographic area (near my work, an hour away from my home).
Then I had a breakthrough when I set up my work voicemail to automatically call my Android phone. Almost every time it called, the problem appeared (I couldn’t hear the robot voice telling me I had a message).
I kept leaving myself messages to reproduce the problem and narrow it down.
I found out it only occurs when my phone is asleep at time of the call (doesn’t occur if phone is awake at time of the call).
I removed my UV and problem continued.
I adjusted my governor and could make problem go away.
I narrowed it down to the up threshhold.
Repeatable with 95/45 up/down, the problem occurs.
Repeatably with 80/45 up/down, the problem does not occur.
I have gone back and forth between those two settings at least four times and each time it confirms the symptom is directly related to the governor setting.
Exactly why that is, I'm not sure. Maybe the cpu is too slow to wake up to handle the call? Sounds kind of hokey, but I guess it doesn't really matter.
The bottom line for me: 80/45 is a great place for me to stay. It eliminates the "can't hear caller" problem and still does a pretty good job of keeping the cpu from going to high frequency when listening to my relax and sleep program for long periods of time.
If anyone has gone to 95/45 based on my recommendation, you might rethink it, especially if you see unusual behavior.
Related
Hi devs
I found something interesting about Android power management..maybe it will help us
http://developer.android.com/reference/android/os/PowerManager.html
http://www.netmite.com/android/mydr...s/power_management.html#androidPowerWakeLocks
and here is an app for users http://forum.xda-developers.com/showthread.php?t=1179809
I found some more things for power management
devs check pls
Enabling system for hitting OFF
#echo 1 > /debug/pm_debug/enable_off_mode
By default sleep_while_idle is set to false and enable_off_mode is set to true
CPU Dynamic Voltage Frequency Scaling settings
Enabling ondemand frequency governor
The ondemand governor enables DVFS(frequency/OPP) transitions based on CPU load.
#echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
Enabling performance frequency governor
The performance governor keeps the CPU always at the highest frequency.
#echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
Enabling powersave frequency governor
The powersave governor keeps the CPU always at the lowest frequency.
#echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
Enabling userspace frequency governor
Once this governor is enabled, DVFS( frequency) transitions will be manually triggered by a userspace application by using the CPUfreq sysfs interface
#echo userspace > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
See all the available operating points
#cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
Application can select any of the available frequencies from the above
#echo <Desired Frequency> > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
Checking CPU IDLE states usage
There are seven power states introduced by CPU Idle
The usage and time count for these different states can be checked via
#cat /sys/devices/system/cpu/cpu0/cpuidle/state*/time
#cat /sys/devices/system/cpu/cpu0/cpuidle/state*/usage
source: http://processors.wiki.ti.com/index.php/Android_Devkit_Power_Management_Porting_Guide
this is very interesting also:
Saving battery time for mobile devices has been a goal for the industry for many years. With the advent of smartphones, reduction of energy consumption is even more important since they consume a lot more energy than the generation of mobile phones before them. Consumers are demanding longer battery life and greener electronics. One way to meet these demands is to reduce energy consumption.
In order to make the mobile operating system utilize the Central Processing Unit (CPU) more efficiently, applications should have different reservations based on how much they need to use the CPU. A challenge the industry is facing is its lack of knowledge of the behavior of third party applications. Especially since they are an increasing portion of the applications run on smartphones. Without knowledge of how third party applications behave, it is hard to make good reservations for them. If there was a way to dynamically make reservations for the applications with adequate performance while they are running, the system could use this information to reduce battery consumption by e.g. clocking down the CPU when a high clock frequency is not needed. In this master thesis project, an open source resource manager called ACTORS Resource Manager (ACTORS RM) [5][6] for desktop Linux [57] is ported to the Android [37] operating system. The resource manager is also optimized for the applications being run there. A power management patch to the Linux kernel was also used to get greater control over the CPU's frequency changes.
source: https://rapidshare.com/files/3398178110/Resource_reservation_and_power_management_in_Android.pdf
let's spy on the HD2 kernel?
features:
AB: Audio Boost
AXI: AXI frequency tweak
BFQ: BFQ IO scheduler (default CFQ)
BFS: BFS cpu scheduler (default CFS)
HAVS: Hybrid Adaptive Voltage Scaling (Static Voltage Scaling - SVS is default)
OC: OverClock
UV: UnderVolt
OC, UV and AXI are standard features for EVO-based kernels.
EBAT: Extended battery
http://forum.xda-developers.com/showthread.php?t=777921
Edit: after some more research I found out that we are in BIG $h|t until f****** HTC unlocks the bootloader and/or updates the Radio for us
What REALLY improves Android battery life on the HD2
So after all that rambling, the answer is: radio ROM version. When I installed Android, I installed the latest radio ROM available at the time (still the latest I think); i.e. 2.15.50.14, from http://forum.xda-developers.com/showthread.php?t=611787. After pulling my hair out trying all the above, I flashed the radio ROM with 2.12.50.02_2, and as if by magic, current draw under similar conditions to above is about 7mW; i.e. 10% of what it was, and an overnight period as above goes from 100% to 96%. Much better
source:http://forum.xda-developers.com/showpost.php?p=13397376&postcount=1
currently our phones use 150-200 mA (even in standby,and with setCPU on :O)...measured with Current Widget available on Market.
Edit2: Another thing that REALLY improves Android battery life on the HD2 is dumping your girlfriend.
Before, I needed to charge it almost twice a day. Lots of calls and messages.
Now, I can easily get two days of standby. LOL
The radio version on the HD2 is a bit tricky. It varies a lot from person to person. Some people say that it's related to your region too.
I don't know if it's the same on the HD Mini.
But some people here say they don't have battery drain. It would be nice to know what radio version they use and what region they are in.
tzacapaca said:
currently our phones use 150-200 mA (even in standby,and with setCPU on :O)...measured with Current Widget available on Market.
Click to expand...
Click to collapse
But if I turn off wifi, gps and phone, consumption almost does not decrease.
Maybe this consumption comes from the sdcard, because its slot is always hot.
ROM-Version (Vodafone)Switzerland German: 1.41.166.1, (10904) Radio:0.63.05.41
Strong battery drain happens only after the first boot. After the third boot the battery drain is the same as in WinMo... same experience with cm6 (derefas), 134++ (schlund)
I don't really agree.
Under Android, the maximum we can expect is to get as much battery life as under WinMo.
Today, the phone consumes too much battery when asleep, because something prevents it from going to sleep.
schlund has a fix for this battery drain; I tested it, and it is really efficient.
It will be released in the next release, be patient ;-)
Regarding the Android apps that tell you how much current you are drawing:
they won't work if the phone is really sleeping, because all the apps would be put to sleep as well.
So you will never know how much your phone consumes when it's asleep ;-)
I should say: after the third boot the battery drain is almost the same as in WinMo, but the truth is that there is a big difference between the first and third boot in battery drain.
A new battery fix, I'm glad to hear.
I understand that it takes time to create something, and I have patience, but I think it is unfair to announce a new release for the end of the week and then change your mind and not give any explanation. I hope you'll accept this criticism. Thank you.
codiak said:
The radio version on the HD2 is a bit tricky. It varies a lot from person to person. Some people say that it's related to your region too.
I don't know if it's the same on the HD Mini.
But some people here say they don't have battery drain. It would be nice to know what radio version they use and what region they are in.
Well, telling the region and radio version won't help with anything; I will not move from my city to get better signal, and HTC won't make a new radio just for me either.
btw, it's impossible not to have battery drain when the phone uses 200mA.
I guess people were talking about derefas' CM6, but his version is based on the r146 kernel, which still has battery issues...
p.s. since u own an HD2 too, do u mind testing with Current Widget for me and telling me the values in standby and on? I read some guys had 6-7 mA in standby and I think around 60 while it was on
DmK75 said:
But if I turn off wifi, gps and phone, consumption almost does not decrease.
Maybe this consumption comes from the sdcard, because its slot is always hot.
I'm not an expert, but I really think it's impossible that the sdcard uses 150-200mA; if that were so, we would have had 5-6 hours of battery life in WM.
Edit: after a little research I found this ->
Metric: NAND / SD
Idle power (mW): 0.4 / 1.4
Read throughput (MiB/s): 4.85 / 2.36
Read efficiency (MiB/J): 65.0 / 31.0
Write throughput (KiB/s): 927.1 / 298.1
Write efficiency (MiB/J): 10.0 / 5.2
so the SD card uses around 1.4 mW when idle, and while reading from it (our case) roughly 2.36 MiB/s ÷ 31 MiB/J ≈ 76 mW
and to convert mW to mA-> http://www.ehow.com/how_8627497_convert-mw-ma.html
source: http://www.usenix.org/events/usenix10/tech/full_papers/Carroll.pdf
-r0bin- said:
I don't really agree.
Under Android, the maximum we can expect is to get as much battery life as under WinMo.
Today, the phone consumes too much battery when asleep, because something prevents it from going to sleep.
schlund has a fix for this battery drain; I tested it, and it is really efficient.
It will be released in the next release, be patient ;-)
Regarding the Android apps that tell you how much current you are drawing:
they won't work if the phone is really sleeping, because all the apps would be put to sleep as well.
So you will never know how much your phone consumes when it's asleep ;-)
I don't really agree with you either.
Current Widget runs as a process, and processes stay on even when Android is suspended, no? For example the clock, alarms, etc.
15MA1L said:
ROM-Version (Vodafone)Switzerland German: 1.41.166.1, (10904) Radio:0.63.05.41
Strong battery drain happens only after the first boot. After the third boot the battery drain is the same as in WinMo... same experience with cm6 (derefas), 134++ (schlund)
lol, I think that's placebo, or else why would the number of boots/reboots improve the battery life?
tzacapaca said:
Well, telling the region and radio version won't help with anything; I will not move from my city to get better signal, and HTC won't make a new radio just for me either.
btw, it's impossible not to have battery drain when the phone uses 200mA.
I guess people were talking about derefas' CM6, but his version is based on the r146 kernel, which still has battery issues...
p.s. since u own an HD2 too, do u mind testing with Current Widget for me and telling me the values in standby and on? I read some guys had 6-7 mA in standby and I think around 60 while it was on
I get about 3-7 mA with everything on (GPS, BT, 3G etc). Sometimes there are peaks to around 60 mA that are related to mail checks etc. It's roughly 1-2% per hour, which is fine for me.
codiak said:
I get about 3-7 mA with everything on (GPS, BT, 3G etc). Sometimes there are peaks to around 60 mA that are related to mail checks etc. It's roughly 1-2% per hour, which is fine for me.
u see?
This is what I'm talking about, u can't compare 3-7 mA to 150-200 mA... so I can't understand the guys who said they have the same power usage as on WM...
btw, was that in suspend or while the display was on?
Thats with display off. When using it the value is very variable depending on what you are doing. From ~120 to ~350 mA.
tzacapaca said:
u see?
This is what I'm talking about, u can't compare 3-7 mA to 150-200 mA... so I can't understand the guys who said they have the same power usage as on WM...
btw, was that in suspend or while the display was on?
lol ok
I read somewhere that the sdcard was using 10 to 50mA max; I don't think it uses that much. Maybe someone using the HD2 with Haret (on sdcard) could enlighten us?
Which application are they using to get those values, and how do they read those values if the screen is off?
codiak said:
Thats with display off. When using it the value is very variable depending on what you are doing. From ~120 to ~350 mA.
ok,thanks
what about when with display on and doing nothing?
I used an SD build on my HD2 before using the NAND Rom. The values were nearly the same. So I don't think the sdcard has a big impact on battery.
I use this App from the Market. It logs to a file and you can view the history
tzacapaca said:
what about when with display on and doing nothing?
Then its around 120 mA.
But remember, HD2 has a BIG display
I don't know, maybe it will be useful for developers. I tested the CM6 r146 release from derefas.
Overnight in sleep mode it takes 10-15% of the battery. Then I use it for maybe 4-5 hours, and Android says a charge is needed (it was near 15%). Putting it on charge doesn't bring any result; I waited for half an hour and the percentage didn't move.
Then I rebooted the device into WinMo and it showed me 70%; after that I used WinMo for 2 days without charging...
It seems to me that the problem is with the indicator... in my situation the battery was fine, but Android didn't see it...
P.S. Sorry if I am saying silly things
Hi,
I have Xperia Neo V, GB 2.3.4, rooted, NightElf 10, codename_ei8ght, OC: 245-1400 MHz. Usually I use SmartassV2 + SIO (I don't know why SIO - I've been told to choose this one, so I did). I've also briefly tested many other governors, but to tell the truth, I don't see much real-life difference between them (apart from the CPU Spy logs).
The problem that bothers me is that I need to manually change the OC settings each time I want to use an app or a game that doesn't need 1400 MHz. For example - when I want to play Angry Birds. The game works perfectly at 1000 MHz, so why waste the battery power and generate lots of heat? But any governor I know will "give" 1400 MHz to this game. That's why I need to switch down manually before playing. The same thing happens with many other apps, like, for example, navigation. It's absolutely enough to navigate at 1000 MHz, but any governor will set the CPU to 1400 MHz when navigation is running.
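What I do manually boils down to capping the maximum frequency before launching the app and restoring it afterwards; roughly this (root required, and the kHz values are just examples for my phone):
Code:
# cap the CPU at 1.0 GHz for the game, then restore the 1.4 GHz OC ceiling
echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
# ... run the game ...
echo 1400000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq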
I'm looking for a truly wise governor that would give as many MHz as needed for an app/game to run smoothly, but not more. Is such a governor even possible to create?
Thank you.
From my observation SmartassV2 tends to change frequency too much: up, down, up, down. As I mostly want to get the best battery life, I decided to switch to Conservative - the phone is still responsive (I can't notice a difference) but when I don't do anything it seems to keep the frequency lower.
I still have the problem of what governor to choose for the sleep state - right now I'm testing PowerSave (so it keeps the minimal frequency) and so far it seems to work (it even worked while playing music).
IMO writing too-complicated governors could only slow down the system, so it is a hard task to decide in real time which frequency is still sufficient for smooth play and at the same time as low as possible.
Do all governors have "deep sleep" mode? Is it governor-dependent at all?
I have some problems with refreshing news widget - I discovered that it is never refreshed in deep sleep mode.
Can I disable deep sleep mode?
Thanks.
Hello, I want to start a discussion and hear your experiences of how much energy you can save using cpu scaling. I have read threads that explain governors and I/O schedulers, like this one: http://forum.xda-developers.com/showthread.php?t=1950084 and my Samsung Galaxy Young (GT-S5360) with the JellyBlast 3.0.4 rom installed works best with:
min: 150 Mhz,
max: 832 Mhz,
governor: performance,
IO scheduler: SIO.
Here's the problem: I want the OS and web pages on my phone to scroll fluently, and that only happens (close to 100%) when using the "performance" governor, which is not recommended. But I don't see any effect on "miles per gallon", because when I check the cpu statistics, the cpu is in "deep sleep" all the time the display is off and the phone is locked. My phone usage is sometimes just one or two phone calls a day.
But there is another situation: display brightness at 25% or 50%, wifi or 3G turned on, browsing the web and listening to music while travelling.
So my question is whether there is some way to measure how many watts each hardware component in the phone eats, and therefore whether it makes any sense to dynamically scale down the CPU when the display and wireless are active. How much electricity can that save compared to the consumption of the display and wireless? Let's say +5 or +10 percent time on battery makes no sense to me.
I believe there is a great deal of confusion or lack of technical explanation available here in the community, when we discuss the how’s, why’s and what’s behind the things we choose to modify in the Android OS in an attempt to squeeze better performance from a very complex operating system. Many of the things I tend to see presented to users are focused on very ineffective and ancient mentalities, pertinent to an older version of the operating system. Much of this is attempted through modifying build properties, and that’s usually about where it stops. My objective here is to describe some of the ins and outs of tuning a mobile operating system such as Android, and looking at it in a different light - not the skin you lay on top of it, but as advanced hardware and software, with many adjustable knobs you can turn for a desired result.
The key players here are, usually, without fail a couple of things alone:
Debloating – which, I suppose, is an effective way to reduce the operating system’s memory footprint. But I would then ask, why not also improve the operating system’s memory management functions?
"Build prop tweaks" – build.prop is a file where you can apply very effective changes like the ones presented in my post_boot file (the only difference being when they are executed, and how they are written out), but most of the "tuning" done here focuses on principles that were only once true and are thereby mostly irrelevant in today's latest versions of Android. There are many things within the build.prop that can (and sometimes should) be altered to directly impact the performance of the DVM/JVM. However, this is almost always untouched. Every now and then, somebody will throw a kernel together with some added schedulers, or some merged sound drivers, etc., but there is really little to no change that would affect real-time performance.
So, what about the virtual machine? What about the core operating system? – what Android actually is – Linux.
Many of you have been pretty blown back by how effective some simple modifications to just 1 shell file on your system have been at improving your experience as a user. Your PM’s, posts, and comments in my inbox/thread are telling enough about the direct impact on battery life. These are differences you can feel and see, quantify. Because the changes made within that file are directly impacting functional aspects of the hardware, throughput/latency, and most importantly, the device’s memory management (which is so complex, you could literally write a book about it… and books about it do exist – they are very long books).
So, how did we manage to make your device feel like it was reborn with just 1 file and not an entire ROM? That ROM you were on, suddenly was not so stock feeling, right? Not to say those ROMs were stock, they were, indeed, modified. But the core operating system was, for the most part, largely untouched. Maybe you had a little more free RAM because of the debloating but, really, that was about all of the effect that you saw/felt.
My aim here is to talk about, at a medium to in-depth level, what exactly went into that 1 file that turned the performance corner for your device. For the sake of keeping it to the important points, I'll cover the 3 most important (as titled in my main thread): Your CPU, IO, and RAM (VM). Part 2 will cover IO, and Part 3 will cover the nuts and bolts of the RAM (VM).
Let’s look at a snippet of some code from the portion of the file where most of the CPU tuning is achieved, we’ll use cluster two’s example (bear in mind, the methodology here was used for cluster 1 as well [your smaller cores were treated the same]):
Code:
# configure governor settings for big cluster
echo "interactive" > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo 1 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/use_sched_load
echo "10000 1536000:40000" > /sys/devices/system/cpu/cpu4/cpufreq/interactive/above_hispeed_delay
echo 20 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/go_hispeed_load
echo 10000 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/timer_rate
echo 633600 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/hispeed_freq
echo 1 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/io_is_busy
echo "40 864000:60 1248000:80 1536000:90" > /sys/devices/system/cpu/cpu4/cpufreq/interactive/target_loads
echo 30000 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/min_sample_time
echo 0 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/max_freq_hysteresis
echo 70 > /sys/devices/system/cpu/cpu4/cpufreq/interactive/gpu_target_load
So what did I do here? Well, let’s start by explaining the governor, and then its modules.
Interactive: the interactive governor, in short, it works based on timers and load (or tasks). Based on load when the timers are ticked and the CPU is polled, the governor decides how to respond to that load, with consideration taken from its tunables. Because of this, interactive can be extremely exact when handling CPU load effectively. If these tunables are dialed in properly, according to usage and hardware capability, what you achieve is maximum throughput for an operation, at a nominal frequency for that specific task, with no effective delay experienced in the UI. Most of the activity seen in an Android ecosystem is short, bursty usage, with the occasional sustained load intensive operations (gaming, web browsing, HD video playback and recording, etc.). Because of this unique user-interaction with the device, the default settings for interactive are, usually, a little too aggressive for a nominal experience – nominal meaning not “over-performing” to complete the task and wasting precious CPU cycles on a system that is not always near an outlet. The interactive tunables:
use_sched_load: when this value is set to 1, the timer windows (polling intervals) for all cores are synchronized. The default is 0. I set this to 1 because it allows evaluation of current system-wide load, rather than core specific. A small, but very important change for the GTS (global task scheduler).
above_hispeed_delay: when the cpu is at or above hispeed_freq, wait this long before increasing frequency. The values called out here will always take priority, no matter how busy the system is. Notice how I tuned this particular setting to allow an unbiased ramp up until 1.53 GHz, which then calls for .4 seconds delay before allowing an increase. I did this to handle the short bursts quickly and efficiently as needed, without impacting target_load (the module, in this way, allows the governor free range and roam according to load, then, is forced to wait if it wants to utilize those super-fast but power-costly speeds up top). However, sustained load (like gaming, or loading web pages) would likely tax the CPU for more than .4 seconds. The default setting here was 20000. You can represent this expression as a single value, followed by a CPU speed and delay for that speed, which is what I did at the 1.53 GHz range. I usually design this around differences in voltage usage per frequency when my objective is more to save power, while sacrificing more performance.
go_hispeed_load: when the CPU is polled and overall load is determined to be above this value (which represents a percentage) immediately increase CPU speed to the speed set in hispeed_freq. Default value here was 99. I changed it to 20. You’ll understand why in a second.
timer_rate: intervals to check CPU load across the system (keep in mind use_sched_load). Default was 20000. I changed it to 10000 to check more often, and reduce the stack up delay the timer rate causes with other tunables.
hispeed_freq: counterpart to go_hispeed_load. Immediately jump to this frequency when that load is achieved. Default here, in Linux, is whatever the max frequency is for the core. So, it would have been 1.8 GHz when load is 99%. I changed this value to the next speed above minimum for both a53 and a57 clusters. The reason I did this was to respond appropriately to tiny bits of thread usage here and there, which minimizes the probability that the CPU will start overstepping. There are a lot of small tasks constantly running, which could allow the 384 MHz frequency to be overwhelmed by some consistent low taxing operation. The trick with this method of approach is to stay just ahead of the activity, ever so slightly, to increase efficiency, while removing latency for those smaller tasks. There is no hit in power by doing this. This principle of approach (on broad and overall scale, even) is how I use interactive to our advantage. I remove its subjective behavior by telling it exactly where to be for a set amount of time based on activity alone. There are no other variables. “When CPU load is xxxx, you will operate within these windows (speeds) alone.”
Keep in mind, with some of this, I am just giving you default values and examples… The original file that LG or Qualcomm or whoever placed in there had done their own weird crap with this stuff that didn’t make any sense whatsoever. Timer intervals were not multiples of one another; there was little logic or reason behind it.
io_is_busy: when this value is set to 1, the interactive governor evaluates IO activity, and attempts to calculate it as expected CPU load. The default value is 0. I always set this to 1, to allow the system to get a more accurate representation of anticipated CPU usage. Again, that “staying ahead of the curve” idea is stressed here in this simple but effective change.
target_loads: a general, objective tunable. Default is 90. This tells the governor to try to keep the CPU load below this value by increasing frequency until <90 is achieved. This can also be represented as a dynamic expression, which is what I did. In short, mine says “do not increase CPU speeds above 864 MHz unless CPU load is over 60%... do not increase CPU speeds above 1.24 GHz unless CPU load is over 80%” and so on… So, you can see how we are starting to address the “activity vs. response” computing conundrum a little more precisely. Rather than throw some arbitrary number like 90 out there, I specifically tie a frequency window to a percentage of system-wide usage or activity. This is ideal, but takes careful dialing in, as hardware is always different. Some processors are a little more efficient, so lower speeds are OK for a given load when compared to another processor. Understanding the capability of your hardware to handle your usage patterns appropriately is absolutely critical to getting this part right – the objective is not to overwork or underwork, but to do just the right amount of work. Turn small knobs here and there, then watch how much time your CPU spends at a given speed and compare that with the real-time performance characteristics you observe… maybe there is a little more stuttering in that game you play after this last adjustment? OK, make it slightly more aggressive, or let the processor hang out a bit more at those high/moderately high speeds.
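The dynamic expression itself is a string of a default target load followed by frequency:load pairs. The exact string used here isn’t shown, so the numbers below are placeholders that only mirror the description above; this small helper just shows how such an expression maps a frequency to the load the governor tries to stay under:

# Illustrative target_loads parser/resolver (placeholder values, not the actual spec).
def parse_target_loads(spec):
    parts = spec.split()
    table = [(0, int(parts[0]))]                 # default target load
    for entry in parts[1:]:
        freq, load = entry.split(":")
        table.append((int(freq), int(load)))     # (boundary in kHz, target load %)
    return table

def target_for(freq_khz, table):
    target = table[0][1]
    for boundary, load in table:
        if freq_khz >= boundary:
            target = load                        # highest boundary at or below freq wins
    return target

table = parse_target_loads("60 864000:80 1248000:90")
print(target_for(600000, table), target_for(1000000, table), target_for(1300000, table))
# -> 60 80 90: the governor demands more load before it will sit above each boundary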
min_sample_time: this is an interval which tells the CPU to “wait this long” before scaling back down when you are not at idle. This is to make sure the CPU doesn’t scale down too quickly, only to then have to spin right back up again for the same task. The default here was 80000, which is way too aggressive IMO. Your processor, stock, would hang for nearly a second at each step on its way down. Three-tenths of a second is plenty of time for consistent high load, and just right for short, bursty bits of activity. The trick here is balancing response, effectiveness, and acceptable drain on power, with consideration for nominal throughput for an execution.
max_freq_hysteresis: this only comes into play when the maximum frequency is hit. It tells the governor to keep the core at the maximum speed for this long, represented in tenths of a second, PLUS min_sample_time. The default value was 3, if I remember correctly, which means that every time your CPU hit max, it hung there for 1.1 seconds arbitrarily, regardless of load.
gpu_target_load: the GPU will scale up if CPU load is above this value. Default is 90. This module attempts to anticipate GPU activity based upon CPU activity. It works in parallel with the GPU’s own governor algorithms and each cluster of cores has its own tunable for this controller.
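For anyone who wants to experiment with these, a minimal sketch of applying a few of them from Python on a rooted device follows. The values are hypothetical examples pulled from the discussion above, and the sysfs location (global interactive directory vs. per-policy) varies by kernel:

import os

BASE = "/sys/devices/system/cpu/cpufreq/interactive"   # assumed location
settings = {
    "use_sched_load": "1",        # evaluate system-wide load
    "io_is_busy": "1",            # count IO wait as anticipated load
    "timer_rate": "10000",        # poll every 10 ms
    "go_hispeed_load": "20",      # jump to hispeed_freq on light activity
    "min_sample_time": "300000",  # hold 0.3 s before scaling back down
}

for name, value in settings.items():
    try:
        with open(os.path.join(BASE, name), "w") as f:   # requires root
            f.write(value)
    except OSError as err:
        print(f"could not write {name}: {err}")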
Stand by in the near future for the write-up on IO and RAM management.
Very nice write up. While the tunables for various governors are a bit out of my range of expertise, the explanation here almost makes me want to play with them to fine tune my system to my usage.
What would you say is the maximum percentage of battery life one would expect to increase by? I read an article here a few years ago where someone had the idea that regardless of tweak, you won't increase your battery life by more than 2%, which is pretty small. I wasn't sure how accurate this statement was, but I am always up for improving my battery life, although Marshmallow has done wonders for it in comparison to Lollipop.
freeza said:
Very nice write up. While the tunables for various governors are a bit out of my range of expertise, the explanation here almost makes me want to play with them to fine tune my system to my usage.
What would you say is the maximum percentage of battery life one would expect to increase by? I read an article here a few years ago where someone had the idea that regardless of tweak, you won't increase your battery life by more than 2%, which is pretty small. I wasn't sure how accurate this statement was, but I am always up for improving my battery life, although Marshmallow has done wonders for it in comparison to Lollipop.
The maximum percentage of battery life “savable” by doing this type of thing is really going to depend on a lot of variables. In testing, it is most important to first establish a baseline and a valid method to measure your outputs. A model/example would be: I am going to charge my phone to 100%, have it in airplane mode, and only have these 3 apps installed to run the tests. After I charge it, I am going to leave the display on for an hour straight without interacting with the device, then let it sleep for 4 hours, wake it back up, open and close app #1 150 times, then let it sleep again, etc… Maybe, for the sake of merely evaluating CPU usage, you would disable location, turn off auto brightness, and set the display at minimum – all for the sake of creating a repeatable environment each time you make a change and want to measure the impact of that change. You see where I am going with this. Removing variables to quantify the impact of the changes you made would be critical.
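As one example of how you might log that baseline, the short sketch below polls the battery percentage at fixed intervals and writes it to a file. The /sys/class/power_supply/battery/capacity node is an assumption – the exact path and available attributes differ between devices:

import time

CAPACITY = "/sys/class/power_supply/battery/capacity"   # assumed node name

with open("battery_log.csv", "w") as log:
    log.write("timestamp,percent\n")
    for _ in range(48):                # e.g. 4 hours at 5-minute intervals
        with open(CAPACITY) as f:
            log.write(f"{time.time()},{f.read().strip()}\n")
        log.flush()
        time.sleep(300)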
What I am getting at is that I seriously doubt the number this individual threw out (2%) has any real merit, for several reasons.
The first and foremost reason is that he probably didn’t run a valid set of tests to come to that number. However, even if he did, the potential to save power becomes greater as hardware becomes more and more efficient through technological advances. Chips are not what they were even a year ago in that respect. The Snapdragon 820, for example, or some of the newer Exynos chips from Samsung – all of which use the 14nm FinFET process – are extremely efficient in power management.
Displays, all of these things – the impact of “trim a little here, save a little there” becomes much more noticeable over extended periods of time.
To put this into a different perspective, where the principle still applies, you can look at how the efficiency of a car’s engine is affected by its drivetrain. Your car has a rated HP, and the power translated to the ground is some percentage of that rating. That percentage is roughly constant: no matter how much HP your car has, the drivetrain is only (e.g.) 80% efficient at translating that power directly to the pavement. So, if your car has 200 HP and 80% is translated to the pavement, the power at the wheels is 160 HP. If you increase your car’s HP to 1000 (engine rated), it is transferring 800 HP to the pavement. Now, make that drivetrain just 5% more efficient… The difference at 200 (engine-rated) HP is only 10 HP, but the difference at 1000 is 50. You are getting 5 times more bang for your buck.
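The arithmetic behind that, spelled out:

# Worked version of the drivetrain example (illustrative numbers only).
for engine_hp in (200, 1000):
    at_80 = engine_hp * 0.80              # 80% efficient drivetrain
    at_85 = engine_hp * 0.85              # 5 points more efficient
    print(engine_hp, round(at_85 - at_80))  # gain: 10 HP at 200, 50 HP at 1000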
It is important to note that while the percentage is the same in that example, it is merely an example. Horsepower there could be translated to “extra time off the charger” – not a percentage of battery life, but a percentage increase in screen-on time. If it is proportionate, you are talking about maybe an extra 15 or 20 minutes of physical interaction with the device before it needs to be plugged in. Again, these are just examples, but the overall impact can be dramatic on a system that is already doing very well at providing the user a long duration of screen-on time before it needs to be connected to a wall.
Another example would be gas mileage... this might be more relevant for what we are talking about. Imagine you have a car that is a big, mean V8 and literally gets 7 MPG. It is way overdue for an oil change, and the old oil is now causing the engine to run just slightly less efficiently. Well, the car would likely run out of gas before you even noticed the loss in gas mileage, because it is already an inefficient mechanism when it comes to saving gas.
Take another car that has a rated MPG of 60. Now imagine it is also overdue for an oil change. The smallest bit of inefficiency in the engine, added weight to the vehicle, a less aerodynamic shape… you will certainly see the effects of those much more, because its expected distance on a single tank of gas is 650 miles, as opposed to the big V8, which can only go 80 total.
Imagine these two cars as older and newer technology in processors. The newer technology has greater potential for power saving simply because its baseline is already a fairly efficient platform. A small change will take it a greater distance. That car going 650 miles on one tank of gas… well, you’ll notice if its MPG drops by 3%, because you’ll be filling up at around 630 miles instead.
In summary, if my phone is going to die after 2 hours off the charger anyway, because it has a small battery or its display is chewing up 95% of the overall power draw, then yes, you are pretty much wasting your time playing with hardware settings otherwise. But that is not the case anymore. Mobile devices are the opposite – very efficient. Which means there is greater potential to minimize their power consumption by tuning, say, a CPU governor to not overreact to activity initiated by the user.
Again, the 2% statement is very subjective… it means nothing on its own. 2% could be 20 extra minutes of a phone call, 15 minutes of screen-on time, etc. You see my point.
Why do I have to use governors that save power? This is not a phone, and the heatsink on this tablet is finally made the way it is supposed to be (actually present now, not absent), so throttling and heat are not an issue. Conditions have changed, and we should use the performance governor! After overclocking my desktop, I simply keep it at max frequency all the time, knowing that gives the best performance (though it auto-changes the power profile when inactive),
so I decided: why not do the same with the tablet?
I changed to the performance governor and set the up/down thresholds 10 points lower/higher in EX kernel for the big cluster, and everything runs smoother. I didn’t find any measurable battery toll, not that I would care about it, though I can see it pulls significantly more current when reloading browser pages – around 2000 mA. I automated this kernel profile change with Franco Kernel Manager, because I am not sure yet whether to make it permanent, in case it causes standby drain – it shouldn’t, but it can. I find it noticeably faster with heavy tasks; not a huge difference, but I feel it, and the higher amperage and seeing the larger cores loaded more confirm it.
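If anyone wants to watch that extra current draw while reloading a page, a quick sketch is below. The /sys/class/power_supply/battery/current_now node and its microamp units are assumptions – the path, units, and sign convention differ between devices:

import time

CURRENT = "/sys/class/power_supply/battery/current_now"   # assumed node

for _ in range(10):                     # sample once a second for ten seconds
    with open(CURRENT) as f:
        microamps = int(f.read().strip())
    print(f"{microamps / 1000:.0f} mA")
    time.sleep(1)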
You can try it and share how it went. Does anybody have other advice on how to tune this kernel for performance?