[Q] RAM management - Swap, Compcache, Memory Manager - Android Software/Hacking General [Developers Only]

I did a cursory search of the forum, but did not really find anything that answered my questions to my satisfaction. Anywho, I would like to gauge other users' opinions of how they run their particular memory management on their phones. From what I've gathered so far between the xda forums and cyanogenmod wiki, the hierarchy of how memory space is managed by the OS is as follows:
1. Compcache (if using compcache & backing swap)
2. Swap
3. Memory Manager (MM)
In any use-case scenario, I believe that once Compcache and/or Swap are exhausted, MM steps in and kills off processes with high OOM values. How often Compcache/Swap is used is regulated by "swappiness", which can be set to a value [0-100] (lower values correspond to less paging out by the OS, and vice versa). What I am still concerned/confused about is the following:
A. To what extent does the scaling of swappiness affect paging? Is there a ratio between swappiness and the number of pages stored/retrieved per tick?
B. If the above hierarchy is valid, then when is the MM activated to kill processes? Does this only occur if the worst-case is reached (RAM and Compcache/Swap partition are filled)?
C. My handset (HTC Hero CDMA) has 195204 KiB of RAM. A quick browse of that phone's forum has shown me that most users keep their swap partitions between 32-64 MiB. A number of users within that forum have also mentioned that raising swap size beyond those values can lead to performance degradation. Is that due to how the Android kernel functions? I know that in a desktop OS environment (Windows/OS X/Linux), swap partitions can be as large as users want them to be, and that there is no performance degradation (with respect to swappiness, that is).
Well, I guess that's all of my lingering concerns/questions. Any input will be greatly appreciated. Thanks.
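For anyone who wants to poke at this on their own device, here is a minimal sketch of how these knobs can be inspected from a root shell (assuming a rooted phone with a terminal emulator or adb; the paths are the standard Linux ones and should be present on most Android kernels):
Code:
# current swappiness (0-100)
cat /proc/sys/vm/swappiness
# try a lower value, e.g. 30, so the kernel pages out less aggressively
echo 30 > /proc/sys/vm/swappiness
# see whether compcache/swap is attached and how full it is
cat /proc/swaps
Values written this way last until reboot, which makes them handy for experimenting before baking anything into a userinit script.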

Related

WM6 and Performance Tweaks?

Most of the recent WM6 rom chefs have been advocating making NO performance tweaks, in favor of keeping as large a RAM pool as possible. As I rarely need 30mb to run a program, I am happy to give up what I don't need if it will help get data back and forth to the SD card and so on faster. Has anyone got thoughts or data about this? I don't own a benchmarking program so I can't check it out directly. I have been making all the tweaks anyway, but does it matter?
Thanks for your thoughts!
Ed
X-Plore 1.1
IPL/SPL 3.08
GSM 2.69.11
edhaas said:
Most of the recent WM6 rom chefs have been advocating making NO performance tweaks, in favor of keeping as large a RAM pool as possible. As I rarely need 30mb to run a program, I am happy to give up what I don't need if it will help get data back and forth to the SD card and so on faster. Has anyone got thoughts or data about this? I don't own a benchmarking program so I can't check it out directly. I have been making all the tweaks anyway, but does it matter?
Thanks for your thoughts!
I agree completely! I'd like to see a WM6 ROM with all the performance tweaks and an 8 MB page pool. I know jwzg is working on an 8 MB pp ROM based on Faria's upcoming Vanilla WM6 ROM.
Check out this thread for more info http://forum.xda-developers.com/showthread.php?t=299584&page=10
Thanks for the link. I really don't understand the drive for smaller and smaller page pools either...
Some Answers!
OK, here is my contribution to the WM6 literature...
I am running BatteryStatus 1.04 beta 3 with the following settings in all tests: CPU speed 247, CPU scalar min 143, boost 278; set on wakeup, remember last speed. My base setup is as per my signature. I ran SK Tools v3.1.1.0 in demo mode. I also removed the HKLM\init launch100 key in both cases.
Benchmark (SK Tools): All tweaks / No tweaks
Integer (moves/25us): 134.0864 / 134.4001
Floating point MWIPS: 3.490 / 3.489
RAM access speed index: 345 / 328
Draw bitmaps speed index: 503 / 522
Main storage (w) KB/sec: 607.78 / 612.14
Main storage (r) KB/sec: 3670.25 / 3469.23
Storage card (w) KB/sec: 412.76 / 423.11
Storage card (r) KB/sec: 3353.71 / 1119.13 (!)
As you can see, the major difference is in the storage card read speed. This led me to retest using only the SD card speed tweak, and no others. Surprisingly, the result was unchanged from using no tweaks! So, likely there is some interaction with the other file system tweaks that is involved. (See the wiki-WM5 performance tweaks). At some point maybe I'll try to pin it down further.
Regards,
Ed
BTW: Sorry for the poor formatting, for some reason the extra white space between columns is being suppressed in the post.
When I was using NotTooSmart's ROM, it had some performance tweaks. I don't have a benchmark prog but it was definitely much faster. I would say it's comparable to when I had it overclocked to 234-247MHz...
I believe what made the most difference was the System Cache... I lost ~10MB of RAM but the ROM was flying... Start up was scary though... I think it went <2MB w/ the progs I had...
edhaas said:
Thanks for the link. I really don't understand the drive for smaller and smaller page pools either...
A lot of people tend to be RAM fanatics... that's probably what drove cooks to have smaller and smaller page pools... Another thing is people and numbers.. many tend to feel the bigger, the better.. High IPL/SPL, High Radio, High OS, High Storage, High RAM.. I think you get the picture.. =P
Update on tweaks
I think I'm near the max. I maxed out the file cache and filter cache, kept the SD cache at 256, and re-ran the benchmarks. Slightly higher numbers all round, but a dramatic increase in SD card read rate, now up to 6.5 MB/sec! I would expect this to speed up loading big programs and files from the SD card, and it is 6 times the "stock" speed!
Regards,
There was a post a few weeks ago (I think) where someone compared performance with different page pools. They compared 4MB, 6MB, 8MB, and 12MB page pools. As I recall there was very little difference between 12MB and 8MB performance. I think 6MB was the worst of the 4.
Again, this is all from memory, but I remember that after reading it I was no longer that concerned about the difference in performance versus the extra memory made available by dropping to 8MB.
Performance tweaks
Actually, in thinking about the issue, it occurs to me that the standard benchmarks we are using (SPB Tools) don't measure things that would likely be changed by a change in page pool. CPU calculations and memory access speeds would not change with the page pool or buffer sizes. The only measurements that would change are the speed of swapping programs and data in and out of memory (by suppressing the actual need to do so) and of accessing the memory card. However, these things *would* impact the "real life" apparent speed of the device in launching programs and quick response times.
Thoughts?
Forgive my obvious ignorance... This is the closest thread I have found for my search, "SD card speed tweak", so can you please help me and point me to the tweak to speed up my SD card?
thanx in advance!
Re: Speed tweaks
Sure, If you want awesome numbers on SK Tools SD read benchmark, (particularly when combined with overclocking) make these registry changes:
HKLM>Drivers>SDCARD>ClientDrivers>Class>MMC_Class:
Change BlockTransferSize to 256 decimal
HKLM>Drivers>SDCARD>ClientDrivers>Class>SDMemory_Class:
Change BlockTransferSize to 256 decimal
HKLM>System>StorageManager>FATFS:
Change CacheSize to 4096, 8192, or 16384 decimal
HKLM>System>StorageManager>Filters>freplxfilt:
Change ReplStoreCacheSize to 4096, 8192, or 16384 decimal
The larger the numbers, the faster the benchmark. However, some of the other benchmarks run slightly slower, and I'm not sure I see significant "real life" improvements in responsiveness. I'd be interested in your impressions. One thing to watch out for, particularly when using the 16384 settings, is that available memory can drop to "dangerously" low levels on start-up from a soft reboot. If you're using BatteryStatus you can monitor this. As long as you stay above 2 MB or so at the minimum you're OK, as the situation resolves itself after the start-up routines finish. If you do go below that, I've had the screen blank temporarily and hang for a moment, but it eventually booted fine anyway.
Have fun!
Thank you for your prompt and courteous answer!! I am still learning this PocketPC stuff. Someday I hope to be able to contribute. It already seems faster!
email tweaks
is there anyway to make my pics in emails auto download?
(instead of having to click "download pics" every time...)
and to create shortcuts to my text messages and other applications, how can i do that?
b.mann said:
is there anyway to make my pics in emails auto download?
(instead of having to click "download pics" every time...)
and to create shortcuts to my text messages and other applications, how can i do that?
This question is slightly offtopic, but I'll answer you anyways.
Go to the email account you want to change:
Menu/Tools/Options/Choose The Account (it will take you into email setup):
Next/Next/Next/Options/Next/Next/Download size limit (drop down menu - choose what you want)/Finish
Hi,
I saw the benchmarking results that you guys posted and the difference between "with tweaks" and "without tweaks". The numbers sure show a difference in the benchmark results, but what I'd really like to know is: have you noticed a significant difference in actual, real-life performance on your Wizard? Was it obviously faster?
I mean, IMHO I'm not much of a fan of "benchmark" results and all that unless I actually see a "real" difference in speed when I use my PPC. I don't think I'll go for the performance tweaks if I'll lose 10+ MB of RAM and only see better "benchmark" numbers instead of better overall performance. That's why I'd like to get your input on this whole performance tweaks thing... is there a noticeable difference in speed? (not just benchmark data)
WM 6.1 Tweaks
Hi,
Even though this thread is quite old, after some time using WM6 and 6.1 and testing many more tweaks, I am posting the ones I found useful.
Thanks to all contributors, from XDA and elsewhere.
1. Stop 3G services: Settings\Phone\ - disable HSDPA and set RAT to GSM; the internet is still accessible through GPRS on most operators.
Result: lower battery consumption (1-2 days standby increases to 3-4 days)
and fewer lock-ups and wake-up problems.
2. Disable power management for the SD card: use Pocket Toolman or a similar tool and uncheck "Enable Power Mgmt for SD card", or use a registry editor and change:
[HKEY_LOCAL_MACHINE\Drivers\SDCARD\ClientDrivers\Class\SDMemory_Class]
"DisablePowerManagement"=dword:00000001
Another option is to change:
[HKLM\System\StorageManager]
"PNPUnloadDelay"=dword:8196
[HKLM\System\StorageManager]
"PNPWaitIODelay"=dword:8196
Note that 8196 should be entered as a DECIMAL value; the HEXADECIMAL (hex) equivalent is 0x00002004.
Result: fewer lock-ups, and fixes the SD card disappearing or re-mounting slowly on wake-up.
Battery consumption is about 10% higher, but combined with tweak 1 it is still OK.
3. Uncheck Today timeout: Settings\Items\ - uncheck "Today timeout".
Result: less delay when a phone call comes in or when resuming from standby.
4. Try to install alarm programs and sound files directly into main memory instead of the SD card, to avoid SD blocking when resuming from standby.
5. Install .NET Compact Framework 3.5 (the latest version) on your device, as follows:
1. Download .NET Compact Framework 3.5 from Microsoft and save it on your PC.
2. Run the downloaded MSI file and let it install.
3. Connect your device to Activesync/Windows Mobile Device Center and finish the automatically launched installation on your device.
4. Soft reset your device.
5. Open a Registry editor and navigate to HKLM\Software\Microsoft\.NETCompactFramework where you will see two entries for the (now two) existing version references: the old one, which came with your device and the new one you just installed.
6. Change the DWord value of 3.5.7283.00 from 0 to 1 (thus enabling it) and all the other values (i.e.: 2.0.7045.00) from 1 to 0 (thus disabling it/them).
7. Soft reset your device.
Result: shorter time (about 0.5 s gained) navigating through Windows menus and button actions.
6. Activate the lock applet on the Today screen. Without it, when the phone is in standby and a call comes in, the phone takes about 8-10 s to wake up.
Result: wake-up on an incoming call is shorter (4-5 s gained) than without the lock enabled in Today settings; somehow WM goes through this on wake-up.
7. Speed up SD card reads; thanks to edhaas, a contributor on xda-developers.
Action: increase some SD cache values in the registry:
a) HKLM>Drivers>SDCARD>ClientDrivers>Class>MMC_Class:
Change BlockTransferSize to 256 decimal
b) HKLM>Drivers>SDCARD>ClientDrivers>Class>SDMemory_Class:
Change BlockTransferSize to 256 decimal
c) HKLM>System>StorageManager>FATFS:
Change CacheSize to 4096, 8192, or 16384 decimal
d) HKLM>System>StorageManager>Filters>freplxfilt:
Change ReplStoreCacheSize to 4096, 8192, or 16384 decimal (16384 is dangerously high - expect the occasional blank screen at startup)
The a) and b) settings are normally already 256 by default; c) and d) default to 0, so change them and see if you gain some performance.
All of them have been tested and work fine.
After applying them I now find my i-mate Ultimate 6150 OK, quite unlike my first impression, when I was blaming the phone.

KSM - does it really improve performance?

Well, sadly I don't have an answer to that question yet...
I'm trying to think of a way to put KSM to the test on my Android device.
As far as I understand, it is possible that the kernel actually causes high CPU usage by mapping and unmapping memory pages over and over again.
This issue is known for Linux and other virtual machines, so it is possible that the same effect occurs on the Android VM.
The tests I have found are not relevant to Android.
For example:
The result is a dramatic decrease in memory usage in virtualization environments. In a virtualization server, Red Hat found that thanks to KSM, KVM can run as many as 52 Windows XP VMs with 1 GB of RAM each on a server with just 16 GB of RAM. Because KSM works transparently to userspace apps, it can be adopted very easily, and provides huge memory savings for free to current production systems. It was originally developed for use with KVM, but it can be also used with any other virtualization system - or even in non virtualization workloads, for example applications that for some reason have several processes using lots of memory that could be shared.
http://kernelnewbies.org/Linux_2_6_32
What I would really want to know is: what would happen if each of these VMs ran a different application/game/audio/graphics program at the same time? Or what if the same VM ran many different apps? And it would also be good to compare CPU usage with and without KSM.
Guess I'll need a tool for that - something like 'iostat' but for memory diagnostics, and another tool to see per-process CPU usage, since 'top' is not good enough for that.
Anyway, the best test should present clear results with precise data.
I'll keep looking for a legitimate way to put it to the test.
If you can think of a way to test KSM with android, please let me know.
This is a technique that relates mostly to things like virtualisation. For example, when you load 5 Windows XP VMs, you'll have a good 10-20 services that are practically identical in memory in each VM. Instead of each service using 10 MB (i.e., 10 MB x 5 = 50 MB), you only need say 15 or 20 MB using KSM. If you use different applications, it is very unlikely that anything would be saved FOR THAT APPLICATION. However, the main elements of a Windows XP system would still be there (drivers, explorer, firewall, logon, search and so on). It means little in one setup, but with several VMs it is shown to be a huge advantage. As we know, a simple XP install can actively use 500 MB of RAM, and this is fairly uniform across installs.
With Android, I don't know if there are specific RAM savings to be had. I don't know enough about the inner workings, the sandbox Android puts its apps in, or how apps interact with system services. Sadly, I can't think of a good way to test it out either, but I'll be keeping an eye on this topic for someone (much) more knowledgeable to come along.
Harbb said:
Sadly, I can't think of a good way to test it out either, but I'll be keeping an eye on this topic for someone (much) more knowledgeable to come along.
Enter bedalus, stands there with a vacant expression on his face. Harbb looks disappointed.
kernels ; battery ; ROM ; gov/sched
That entire paragraph was dedicated to you bedalus, we both know that.
Lol
I hope someone can answer this though.
kernels ; battery ; ROM ; gov/sched
Wait for someone............
Sent from my Nexus S using xda premium
KSM does not improve performance on Android just like that - all enabling KSM does is enable SUPPORT for the feature; applications would have to make use of it, which they don't.
You can easily verify this like so:
echo 1 > /sys/kernel/mm/ksm/run
<wait and/or run the applications of your choice>
cat /sys/kernel/mm/ksm/pages_sharing
If the above shows a value > 0 then you are making use of KSM; otherwise it is merely available, without anyone using the feature.
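To go a little further than a single counter, the other standard KSM counters in sysfs let you estimate how much memory is actually being saved - a small sketch (assumes the kernel was built with KSM and a 4 KiB page size):
Code:
# enable KSM and give it some time to scan
echo 1 > /sys/kernel/mm/ksm/run
sleep 300
# pages being shared, and how many duplicates point at them
cat /sys/kernel/mm/ksm/pages_shared
cat /sys/kernel/mm/ksm/pages_sharing
# rough estimate of memory saved, in KiB (pages_sharing x 4)
echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4 ))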
Here's an interesting article that gives a little more insight:
http://www.linux-kvm.com/content/using-ksm-kernel-samepage-merging-kvm
By the way, the same is true for ZCACHE. If you really want to make better use of your memory (RAM), then using ZRAM as a swap device does work (and may often make sense, too).
That all said: there appear to be efforts to make use of KSM http://forum.xda-developers.com/showthread.php?t=1464758 - so things may well change ...
any update on this...?

[INFO][SHARE] What is zRam in Kernel?

Some budding devs like me and some others have asked this question and got this answer!
Firstly I want to thank all who supported me!
My parents, for buying me an Android device and supporting me
-CALIBAN666- for his thread
franciscofranco for his definition on zRam
abhisahara
Sniper Killer
And all those whom I have forgotten to mention!
Originally posted by Wikipedia
Q: What is zRam?
A: zRam is a module of the Linux kernel, previously called "compcache". zRam increases performance by avoiding paging on disk and instead uses a compressed block device in RAM in which paging takes place until it is necessary to use the swap space on the hard disk drive. Since using RAM is faster than using disks, zRam allows Linux to make more use of RAM when swapping/paging is required, especially on older computers with less RAM installed.
Google has also said it enables zRam by default on Chrome OS devices!
Originally posted by franciscofranco
The zram module creates RAM based block devices: /dev/ramX (X = 0, 1, ...).
Pages written to these disks are compressed and stored in memory itself.
These disks allow very fast I/O and compression provides good amounts of
memory savings.
Basically, it is for storing swapped pages in compressed RAM.
-CALIBAN666- said:
I think it's better to post this here - if not, then I'm sorry!!!
-----------------------------------------------------------------
First, a brief explanation for those who have not been around the Android scene for long:
ZRAM = ramzswap = Compcache
To explain zRAM more precisely, a few other terms need to be defined first:
Swap can be compared with the swap file on Windows. If the PC's RAM fills up, data that is not being actively used (e.g. background applications) is moved out in order to free up RAM again. This data is written to a hard disk and, when needed, simply read back from there. Even the fastest SSD is slower than RAM. On Android, there is no swap!
With zRAM, memory that is not currently needed is compressed and then moved to a reserved area of RAM (the zRAM device). So it is a kind of swap that lives in memory.
More RAM becomes free because the data then takes only about 1/4 of its former space. However, the CPU has to work harder, because it has to compress the data (and decompress it again when it is needed). The advantage clearly lies in speed, since a swap area in RAM is much faster than a swap partition on a hard drive.
In itself a great thing. But Android does not have a swap partition, so on Android zRAM does not bring the kind of performance gain it would on a normal PC.
On a normal PC it would look like this:
Swap = swap file (on disk) -> slow
zRAM (swap in RAM) -> faster than swap
RAM -> fast
With Android there is no swap partition, and therefore zRAM brings no performance boost either.
The only thing zRAM gives you is "more" RAM: through compression, the available memory is "enlarged", so to speak. On devices with little RAM (<256 MB) that is also pretty useful. But the S2 has 1 GB, which is more than enough; it does not need to be artificially pushed up to 1.5 GB.
Activating zRAM also has two disadvantages: compressing and decompressing costs CPU time, which in turn means higher power consumption.
Roughly one can say (for devices with more than 512 MB RAM):
Without zRAM: + CPU performance | + battery | - RAM
With zRAM: - CPU performance | - battery | + RAM
So for devices with too little RAM it makes perfect sense. But who ever fills the S2's RAM completely and then still needs more?
You can check whether zRAM is running in a terminal with
free or cat /proc/meminfo
I hope it helps to understand zRam!!!!
So basically the zRam module in the kernel increases the available RAM and can optimize performance!
I didn't write all this information, I just compiled it together in one thread for ease :fingers-crossed:
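For completeness, this is roughly what "using zRAM as swap" looks like in practice - a minimal sketch of the kind of init.d script people use (assuming a kernel that ships the zram module and a single /dev/block/zram0 device; the module path and size are examples, not fixed values):
Code:
#!/system/bin/sh
# load zram if it is built as a module (skip if it is built into the kernel)
insmod /system/lib/modules/zram.ko 2>/dev/null
# give the compressed block device a size, e.g. 128 MB
echo $((128 * 1024 * 1024)) > /sys/block/zram0/disksize
# format it as swap and enable it
mkswap /dev/block/zram0
swapon /dev/block/zram0
# verify: the zram device should now appear here
cat /proc/swaps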
But making it work is a headache... you have to add LZO (Lempel–Ziv–Oberhumer) compression through menuconfig and then run zRam scripts through init.d and whatnot...
robowarrior1377 said:
But making it work is a headache... you have to add LZO (Lempel–Ziv–Oberhumer) compression through menuconfig and then run zRam scripts through init.d and whatnot...
This thread is an information-sharing thread.
It is not a thread about how to enable it in your kernel -_-
Thanks for the info
Sent from my GT-N8000 using xda premium
Zram = wasteful on battery?
So does zRam make the battery drain faster?
Excuse me if my English is poor.
gj man thanks for info very helpful :good:
Change the first line of the thread.

We destroy myths about Android optimization methods ...

Wandering through forums and various websites dedicated to Android, we are constantly confronted with tips on how to increase the performance of a smartphone. Some recommend enabling swap, others adding special values to build.prop, and others changing Linux kernel variables. You can find a huge number of such recipes, in one form or another, on XDA and 4PDA. But do they actually work?
The tenacity with which some seemingly competent smartphone users push their ideas of the "optimal" tuning of Android and the underlying Linux kernel on the public is remarkable. If only it were limited to a little tuning of the virtual memory management subsystem or enabling some experimental options. No - we are usually offered very long scripts that change literally every kernel variable, remount filesystems with assorted odd options, enable swap, activate various system daemons, and perform billions of other operations.
You can, of course, assume that the Linux kernel, Android, and proprietary smartphone firmware are developed by illiterate idiots whose work must be radically reworked, but in practice it turns out that the best-known tuning tools published on XDA are nothing but a hodgepodge of a huge number of disparate recommendations that nobody knows who invented or why. The absurdity goes so far that in these tools you can find lines copied unchanged from scripts for increasing Linux server performance under heavy load (I'm not kidding - look at the contents of the famous ThunderBolt! script).
In general, the situation is stranger than it looks. Everyone advises everything, nobody explains anything, and those who actually understand something just sit there drinking tea and laughing at what is going on.
Swap
Let's start with swap - the most absurd idea of all that you could think of for use in a smartphone. Its purpose is to create and attach a paging file, thereby freeing storage space in RAM. The idea itself is certainly sensible, but only on a server, where interactivity counts for little. On a phone, a regularly used page file will lead to lags caused by cache misses - just imagine what happens if an application tries to display one of its icons and it is in swap, which has to be read back from disk after first freeing space there by swapping out another application's data. Horror.
Some users may argue that in practice they have no problems after enabling swap, but for that they should thank the lowmemorykiller mechanism, which regularly kills bloated applications that have not been used. Thanks to it, a device with 1 GB of memory may never even get to the point of keeping meaningful data in swap. It is the reason why, in contrast to a Linux desktop, swap is not needed on Android.
Verdict: a very stupid idea, whose implementation is fraught with serious lags.
zRAM
The idea is sound enough that even Google recommends zRAM for KitKat-based devices when the amount of RAM is less than 512 MB. The only snag is that the method really only works on modern budget devices based on multi-core processors - some budget MTK chip plus 512 MB of RAM, say. In that case the compression thread can be pushed onto a separate core and you can stop worrying about the performance cost.
On older single-core devices - exactly the ones this technology is recommended for - we again get lags, and in fairly large quantities. The same, incidentally, applies to KSM (Kernel Same-page Merging), which merges identical memory pages and thereby frees memory. It too is recommended by Google, but on older devices it leads to even greater lags, which makes sense given the constantly active kernel thread that continuously runs through memory looking for duplicate pages.
Verdict: depends on the device; in most cases it slows the system down.
Seeder
In its time this application made a lot of noise and spawned many analogues. The network was full of reports of allegedly phenomenal gains in smartphone performance after installing it. Homegrown custom firmware builders began to include it in their builds, and the author was declared a savior. And all this despite the fact that Seeder did no dirty hacks at all - it just worked around a silly Android bug.
The bug was that some high-level components of the Android runtime actively used /dev/random to obtain entropy. At times the /dev/random pool would be emptied, and the system would block until it was refilled with the required amount of data. And since it was refilled from what the phone's various sensors and buttons reported, the procedure took long enough that the lag became noticeable.
To solve this problem, the author of Seeder took the Linux daemon rngd, compiled it for Android, and configured it so that it took random data from the much faster (but also much more predictable) /dev/urandom and merged it into /dev/random every second, without letting the latter become exhausted. As a result, the system never experienced a lack of entropy and ran smoothly.
Google closed this bug back in Android 3.0, and it would seem there is no longer any need to think about Seeder. But the application has been actively developed since then and is still recommended by many "experts" today. Moreover, it has several analogues (e.g. sEFix), and many creators of acceleration scripts/tools still include this functionality in their creations. Sometimes it is the same rngd, sometimes the daemon haveged, sometimes just a symlink of /dev/urandom onto /dev/random.
Everyone who has tried it shouts excitedly about how effective the solution is. However, according to Ricardo Cerqueira of Cyanogen, in newer versions of Android /dev/random is used in only three components: libcrypto (encrypting SSL connections, generating SSH keys, and so on), wpa_supplicant/hostapd (generating WEP/WPA keys), and a few libraries that generate random IDs when creating ext2/3/4 filesystems.
Whatever effectiveness the application has on today's Android, in his opinion, is not connected with topping up the /dev/random pool, but with the fact that rngd constantly wakes the device and makes it raise the processor frequency, which has a positive effect on performance and a negative one on battery life.
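If you want to check for yourself whether the entropy pool is ever really the bottleneck, the kernel exposes its fill level directly (standard Linux procfs; the exact pool size varies between kernels):
Code:
# entropy currently in the pool, in bits; it is refilled continuously
cat /proc/sys/kernel/random/entropy_avail
# watch it while using the phone - if it rarely approaches zero, rngd/Seeder has nothing to fix
while true; do cat /proc/sys/kernel/random/entropy_avail; sleep 1; done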
Verdict: The placebo effect.
Odex
Stock smartphone firmware is always odexed. This means that, alongside the standard Android application packages in APK format in the /system/app/ and /system/priv-app/ (since KitKat) directories, there are files of the same name with the .odex extension. They contain so-called optimized bytecode: application code that has already been passed through the virtual machine's verifier and optimizer and written out to a separate file (this is done with the dexopt utility).
The point of odex files is to offload the virtual machine and thereby speed up application launch. On the other hand, odex files make it harder to modify the firmware and create problems with updates, which is why many custom ROMs (including CyanogenMod) are distributed without them. You can get odex files back (or rather, generate them) in a variety of ways, including with simple tools/scripts like Odexer Tool. Using them is easy, and many "experts" advise doing so.
The only problem is that this is a pure placebo. If it does not find odex files in /system, the system will simply create them itself at the next boot and place them in the dalvik-cache directory (/data/dalvik-cache/). That is exactly what it is doing when, while booting a new firmware, it shows the "Optimizing applications" message. This also works for apps from the Play Store, incidentally - but at install time.
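You can see this for yourself on a rooted, Dalvik-era device - a rough check (the paths are the classic ones and may differ on newer ART-based firmware):
Code:
# how many system apps ship a pre-generated odex file
ls /system/app/*.odex 2>/dev/null | wc -l
# how many entries the VM has generated on its own at boot or install time
ls /data/dalvik-cache/ | wc -l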
Verdict: The placebo effect.
Lowmemorykiller tweaks
The implementation of multitasking in Android differs from that of other mobile operating systems in that it is based on the classical model. Applications can run quietly in the background, there are no restrictions on their number, and their functionality is not curtailed when they move to background execution. Everything is as on the desktop, except for one detail: the system has every right to kill any background application if memory runs low or (since KitKat) if an application is excessively greedy with resources.
This mechanism, called lowmemorykiller, was invented so that Android, while retaining the features of a full-fledged multitasking OS, could live normally with a limited amount of memory and no swap partition. The user can launch any applications and switch between them quickly, and the system takes care of terminating long-unused applications so that there is always free memory in the device.
In Android's early days the purpose of this mechanism was unclear to many users, which is why so-called task killers became popular - apps that wake up from time to time and terminate all background applications. A large amount of free RAM was considered the payoff and was perceived as a plus, although of course there was no real advantage in it. There were, however, plenty of disadvantages: slower switching between applications, increased battery consumption, and problems with waking the owner in the morning (services get killed too).
Over time an understanding of Android's multitasking principles set in, and task killers were gradually abandoned. However, they were quickly replaced by another trend: tuning the lowmemorykiller mechanism itself (for example, with apps like MinFreeManager). The main idea of the method is to raise the free-RAM thresholds at which the system starts to kill background apps - a sort of "have it both ways" approach that frees up some memory through regular means without breaking Android's idea of multitasking.
But where does that ultimately lead? For example, the standard threshold values are 4, 8, 12, 24, 32 and 40 MB. That is, at 40 MB of free memory one of the cached applications is killed (loaded into memory but not running - an Android optimization); at 32 MB, a content provider that has no clients; at 24 MB, one of the seldom-used background applications; then come application service processes (for example, a music player's service), visible applications, and finally the currently running application. The difference between the last two is that the "current" application is the one the user is interacting with right now, while a "visible" one is, for example, one that has a notification in the status bar or is displaying some information on top of the screen.
In other words, all of this means the smartphone will always have about 40 MB of memory available - enough to fit one more application - before the lowmemorykiller thread kicks in and starts cleaning up. Everything is OK, everyone is happy, the system uses as much of its memory as possible. Now imagine what happens if the user follows the advice of some homebrew "expert" and raises these values so that the last one becomes, say, 100 MB (usually only the last three values are raised). In that case one simple thing happens: the user loses 100 - 40 = 60 MB of memory. Instead of using that space to hold background applications - which is useful, since it shortens the time to switch back to them and saves battery - the system keeps it free for no clear reason.
To be fair, lowmemorykiller tuning can be useful for devices with very, very little memory (less than 512 MB) and Android 4.X on board, or as a temporary raising of the thresholds. Some tweak developers directly recommend using the "aggressive" settings only when running heavy software such as high-end games, and staying on the defaults the rest of the time. That really does make sense.
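For reference, the thresholds discussed above are exposed as a single module parameter; this is roughly what tools like MinFreeManager write (a sketch - values are in pages, assuming the usual 4 KiB page size and the classic lowmemorykiller driver):
Code:
# current minfree thresholds, in pages (multiply by 4 for KiB on a 4 KiB-page kernel)
cat /sys/module/lowmemorykiller/parameters/minfree
# the standard 4/8/12/24/32/40 MB ladder expressed in 4 KiB pages
echo "1024,2048,3072,6144,8192,10240" > /sys/module/lowmemorykiller/parameters/minfree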
Verdict: better not to touch.
I/O tweaks
In the scripts published on the forums you can often find tweaks to the I/O subsystem. For example, the same ThunderBolt! script contains the following lines:
Code:
echo 0 > $i/queue/rotational;
echo 1024 > $i/queue/nr_requests;
The first tells the I/O scheduler that it is dealing with a solid-state drive; the second increases the maximum size of the I/O queue from 128 to 1024 (the $i variable in these commands holds the path to a block device's tree in /sys, e.g. /sys/block/mmcblk0/; the script loops over all of them). Further on you can find the following lines relating to the CFQ scheduler:
Code:
echo 1 > $i/queue/iosched/back_seek_penalty;
echo 1 > $i/queue/iosched/low_latency;
echo 1 > $i/queue/iosched/slice_idle;
This is followed by a few more lines belonging to other schedulers (by the way, note the stray semicolon at the end of each command). What is wrong with all these lines? The first two commands are pointless for two reasons:
1. The I/O schedulers in a modern Linux kernel are able to work out for themselves what type of storage medium they are dealing with.
2. Such a long I/O queue (1024) is completely useless on a smartphone. It is meaningless even on the desktop; it is used on heavy-duty servers (from whose tuning recommendations it apparently found its way into this script).
The last three are meaningless for the simple reason that on a smartphone, where there is virtually no prioritization between applications in I/O and no mechanical drive, the best scheduler is noop, i.e. a simple FIFO queue: the request that arrives first is served first. And this scheduler has no special settings. So all these multi-screen command lists are better replaced by a simple loop:
Code:
for i in /sys/block/mmc*; do
echo noop > $i/queue/scheduler
echo 0 > $i/queue/iostats
done
In addition to enabling the noop scheduler for all drives, it also turns off the collection of I/O statistics, which should likewise have a positive impact on performance (although it is only a drop in the ocean and will be completely invisible).
Another tweak that can often be found in performance-tuning scripts is raising the readahead value for memory cards to as much as 2 MB. The readahead mechanism reads data from the medium ahead of time, before an application actually requests it. If the kernel sees that something has been reading data from the medium for a while, it tries to work out what data will be needed next and preloads it into RAM, thereby reducing the time needed to return it.
That sounds great but, as practice shows, the readahead algorithm is quite often wrong, which leads to unnecessary I/O and wasted RAM. High readahead values (1-8 MB) are recommended for RAID arrays, whereas on a desktop or a smartphone it is better to leave everything as it is, i.e. 128 KB.
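The readahead value itself is exposed per block device, so checking it and putting it back to the default is trivial (a sketch; block device names differ between phones):
Code:
# current readahead for the internal storage / SD card, in KB
cat /sys/block/mmcblk0/queue/read_ahead_kb
# restore the kernel default of 128 KB if a "tweak" raised it to 2048
echo 128 > /sys/block/mmcblk0/queue/read_ahead_kb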
Verdict: in addition to noop, do not need anything.
Tweaks virtual memory management system
Besides the I/O subsystem, it is also common to tune the virtual memory management subsystem. Often the changes affect only two kernel variables, vm.dirty_background_ratio and vm.dirty_ratio, which adjust the size of the buffers for so-called dirty data, i.e. data that an application has already written "to disk" but that is still sitting in memory waiting to actually be written out.
Typical values of these variables in desktop Linux distributions and in Android are as follows:
Code:
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20
This means that when the dirty-data buffer reaches 10% of total RAM, the pdflush kernel thread wakes up and starts writing the data to disk. If writes are so intensive that the buffer keeps growing despite pdflush's work, then on reaching 20% of RAM the system switches all subsequent write operations to synchronous mode (without the buffer), and applications writing to disk are blocked until the data is actually written out (in Android terminology this is called a lag).
It is also important to understand that even if the buffer has not reached 10%, the system will start the pdflush thread anyway after 30 seconds. What does this knowledge give us? Essentially nothing we could use for our own purposes. The 10/20% combination is quite reasonable: on a smartphone with 1 GB of RAM, for example, it amounts to about 100/200 MB of memory, which is more than enough for the rare bursts of writes whose rate is often below the write speed of the system NAND memory or SD card (when installing software or copying files from a computer). But the creators of optimization scripts, of course, disagree.
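All of the values mentioned above can be inspected without changing anything (standard procfs paths; the 30-second interval is exposed in centiseconds):
Code:
# writeback thresholds, as a percentage of RAM
cat /proc/sys/vm/dirty_background_ratio
cat /proc/sys/vm/dirty_ratio
# how long dirty data may sit in memory before it is written out anyway (centiseconds, default 3000 = 30 s)
cat /proc/sys/vm/dirty_expire_centisecs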
For example, in the Xplix script you can find something like this (in the original the commands are much longer because of checks on the amount of RAM and the use of BusyBox):
Code:
sysctl -w vm.dirty_background_ratio=50
sysctl -w vm.dirty_ratio=90
These commands apply to devices with 1 GB of memory, i.e. they set dirty-buffer limits of (approximately) 500/900 MB. Such high values are absolutely meaningless for a smartphone, since they only pay off under constant, intensive writing to disk - that is, on a heavy server. In the smartphone's situation they are no better than the defaults. By the way, the ThunderBolt! script uses much more reasonable (and near-default) values, though I doubt the user will notice any difference from applying them:
Code:
if [ "$mem" -lt 524288 ];then
sysctl -w vm.dirty_background_ratio=15;
sysctl -w vm.dirty_ratio=30;
elif [ "$mem" -lt 1049776 ];then
sysctl -w vm.dirty_background_ratio=10;
sysctl -w vm.dirty_ratio=20;
else
sysctl -w vm.dirty_background_ratio=5;
sysctl -w vm.dirty_ratio=10;
fi;
The first two commands are run on smartphones with less than 512 MB of RAM, the next two on those with up to 1 GB, and the last two on those with more. In fact, there is only one good reason to change the default settings: a device with very slow internal memory or a very slow memory card. In that case it is reasonable to spread the values of the two variables apart, that is, to do something like this:
Code:
sysctl -w vm.dirty_background_ratio=10
sysctl -w vm.dirty_ratio=60
Then, during a burst of write operations, the system will not switch to synchronous mode until the very last moment, without having to flush the data to disk immediately, which reduces write-related lag in applications.
Verdict: better not to touch.
P.S.
There are numerous smaller optimizations as well, including "tuning" of the network stack and changing Linux kernel and Android (build.prop) variables, but 90% of them have no effect on the real performance of the device, while the remaining 10% either improve some aspects of the device's behavior at the expense of others, or increase performance so insignificantly that you will not even notice it. Of the things that really do work, we can note the following:
* Overclocking. A small overclock improves performance, and undervolting saves a little battery.
* Database optimization. I seriously doubt this gives a noticeable increase in speed, but theory says it should work.
* Zipalign. Ironically, despite the Android SDK's built-in feature for aligning content within an APK file, you can find plenty of software in the store that has not been passed through zipalign.
* Disabling unnecessary system services and removing unused system applications and seldom-used third-party applications.
* A custom kernel with optimizations for a specific device (again, not all kernels are equally good).
* The noop I/O scheduler already described.
* The TCP congestion control algorithm westwood+. There is evidence that in wireless networks it is more efficient than Cubic, the Android default. Available in custom kernels. (A quick way to check which algorithm is active is shown below.)
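As a quick check for that last item, the active congestion control algorithm - and the ones this kernel offers - can be read straight from procfs (a sketch; westwood is only selectable if the kernel was built with it):
Code:
# the active algorithm and the ones compiled into this kernel
cat /proc/sys/net/ipv4/tcp_congestion_control
cat /proc/sys/net/ipv4/tcp_available_congestion_control
# switch to westwood if it appears in the list above
echo westwood > /proc/sys/net/ipv4/tcp_congestion_control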
Useless settings build.prop
LaraCraft304 from the XDA Developers forum has conducted a study and found that an impressive number of the /system/build.prop settings recommended by "experts" do not exist in the AOSP and CyanogenMod sources at all. Here's the list:
ro.ril.disable.power.collapse
ro.mot.eri.losalert.delay
ro.config.hw_fast_dormancy
ro.config.hw_power_saving
windowsmgr.max_events_per_sec
persist.cust.tel.eons
ro.max.fling_velocity
ro.min.fling_velocity
ro.kernel.checkjni
dalvik.vm.verify-bytecode
debug.performance.tuning
video.accelerate.hw
ro.media.dec.jpeg.memcap
ro.config.nocheckin
profiler.force_disable_ulog
profiler.force_disable_err_rpt
persist.sys.shutdown.mode
ro.HOME_APP_ADJ
To be continued...
thanks man!
this helped me understand some things
"No, well, you can, of course, assume that the Linux kernel, Android and proprietary firmware for smartphones develop illiterate idiots, whose work must be radically alter"
this made me lol
This...
Was...
AWESOME!
Pure information, plus useful bits and pieces. Sorry for the necro, but it felt plain wrong to just tap a button to say thanks. I felt the least I could do was write a thank-you personally. Thank you very much for this post.
I swear to you, windowsmgr.max_events_per_sec works flawlessly; the higher the number, the more content the launcher can load without lag. I used to test it on my J7 Pro on Android 8.0.
I subscribe to the OP's point that we are constantly searching for optimizations for our Android devices. I have used all kinds of Android devices, like Sony, Oppo, etc., for many years with varying satisfaction, also regarding Android updates. I know that what I'm going to say now is very sensitive in the Android community, but nevertheless I will post it. After having used Fitbit for my fitness trackers/smartwatches, I noticed the release of the Apple Watch 6 and was stunned by the capabilities of this watch versus all other smartwatches. Alas, I also needed an iPhone to get it up and running. Let me be clear: I was never an Apple fan, due to its overly strict policy and the fact that you could not make your own home screen. But since the release of iOS 14 this has all changed, so I decided to get over my Apple resistance and bought the iPhone 12 Pro Max. Having used it now for a few months, this is the best upgrade I ever made. Everything runs smoothly and fast, there is lots of space, the connection with the Watch is smooth, etc. etc. I designed my own home screen even on an Apple device (see attachment). It looks exactly the same as the home screen I had on all of my former Android devices. Hope this information is in some way helpful for you all. Kindest regards, kuzibri
P.S. If you are interested in more information about the iPhone 12, see my threads in the Apple forums on XDA:
1. https://forum.xda-developers.com/t/q-a-template-for-the-apple-iphone-12-pro-max.4322579/
2. https://forum.xda-developers.com/t/ask-away-thread-for-all-apple-iphones.4323471/

[DEV][ROM] Help with Building Vanilla Android 13 GSI with 16GB RAM and 24GB Swap!!

Hey XDA community!
(I wasn't sure what tags to use or where to post - pardon me. I did read the posting guides...)
I'm reaching out to you all for help with building Vanilla Android 13 GSI on a system with limited RAM. Unfortunately, I'm unable to upgrade my RAM due to various constraints. Here's a breakdown of the issue I'm facing:
Memory Consumption and Terminal Closure: When I start the build process, everything seems to work fine initially. However, memory consumption quickly reaches its peak, utilizing nearly all available RAM, and after approximately 3 minutes the terminal abruptly closes itself.
High RAM and CPU Usage: Throughout the brief duration that the build process runs, the RAM and CPU usage remain consistently high. This behavior contributes to the subsequent closure of the terminal.
Limited Swap Usage: Despite having a sizable swap space of 24 GB, swap usage stays within about 7.5 GB. It never exceeds that threshold before the terminal closes.
Given the constraints preventing me from upgrading the RAM, I'm seeking your expertise to find alternative solutions or workarounds for this issue. I'm open to suggestions, such as optimizing the build environment, modifying specific configuration parameters, or implementing any strategies to stabilize the build process within the limited resources available.
Your valuable insights, experiences, and troubleshooting suggestions would be greatly appreciated. Together, let's explore different avenues and find a way to successfully build the Vanilla Android 13 GSI on this system configuration. If it's useful to know, I'm on Ubuntu 23.04 with 2x8 GB DDR4-2400 RAM.
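Not a guaranteed fix, but a common first step on a 16 GB machine is to stop the build from spawning more parallel compile/link jobs than the RAM can hold, and to confirm that it really is the kernel's OOM killer ending the session - a hedged sketch (the lunch target name is an assumption; use whatever GSI target you already build):
Code:
# confirm whether the OOM killer is what closes the terminal
dmesg | grep -iE "out of memory|killed process"
# then rebuild with fewer parallel jobs so peak RAM stays within 16 GB + swap
source build/envsetup.sh
lunch gsi_arm64-userdebug   # assumed target - substitute your own
m -j4                       # try -j4 or even -j2 instead of one job per CPU thread
If dmesg shows the OOM killer firing, lowering -j (and closing browsers and other heavy apps during the build) is usually enough; the ~7.5 GB swap ceiling you see may simply be how far usage gets before the killer steps in.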
Thank you for your support and contributions!
Best regards,
FiniteCode
