Related
So with all the excitement going on in the page pool thread I was doing some external reading. One topic that caught my interest is NAND and NOR ROM.
I got a nice summary from here:
http://blogs.msdn.com/windowsmobile/archive/2005/08/19/453784.aspx
Basically, the synopsis is:
NOR: Faster to read, slower to write.
NAND: Slower to read, faster to write.
But, more importantly, NOR ROM lets you perform XIP operations. Now I remember seeing this XIP acronym before...in a directory when I extracted a ROM. This leads me to believe that at least part of the ROM in the Kaiser is NOR ROM.
XIP means eXecute In Place. It basically allows code to be executed directly from the ROM without first being copied into the RAM. This means less RAM utilization. As the article states, it works for programs only, not user data files.
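For intuition, here is a loose desktop-side sketch of the "use it where it lives instead of copying it first" idea, using Python's `mmap`. This is an analogy I'm adding, not how CE itself does it: real XIP is the kernel mapping NOR flash into the address space, but memory-mapping a file has the same flavor, in that pages are touched in place on demand rather than bulk-copied into a private buffer up front.

```python
import mmap
import os
import tempfile

# Write a small stand-in "ROM image" to disk.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x90" * 4096)  # placeholder contents

# Map the file read-only: its pages are accessed where they live,
# faulted in on demand, not copied wholesale into RAM first --
# loosely the same idea as code XIPing straight out of NOR flash.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as rom:
        first_byte = rom[0]   # touching a byte pulls in just that page
        size = len(rom)

os.remove(path)
print(first_byte, size)  # 144 4096
```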
If we look in the XIP directory of an extracted ROM we see subdirectories like:
busenum.dll
diskcache.dll
imgfs.dll
These are things like low level bus, disk, and file system drivers. These things make perfect sense to XIP.
My question then is... if indeed we do have NOR ROM that can do XIP operations, how much is free on a typical ROM? And can we cook other applications into this XIP NOR location instead of into the NAND ROM, and thus have those eXecute In Place and free up additional RAM?
It would be nice to get the 3% of my RAM back from Voice Command, or the 1% from PocketCM, etc, etc...
Could it be as easy as moving .dll files from \SYS to \ROM\XIP before cooking the ROM? I doubt it, but is it possible?
I'm just throwing a concept out there and asking about its feasibility. I'm not really a developer so I don't know how much further I can take this.
Thoughts?
bengalih said:
My question then is... if indeed we do have NOR ROM that can do XIP operations, how much is free on a typical ROM? AND can we cook in other applications into this XIP NOR location instead of into the NAND ROM and thus have those eXecute in Place and free up additional RAM?...
I don't think that blog applies to Kaiser-like devices anymore. We already use all the ROM space we get. The partitioning between XIP and storage comes out of the same space and is controlled by how the ROM image is built and whatever they set the pagepool to.
Did you run out of the 100MB+ internal storage we all get?
I don't know why it wouldn't apply...it's not a WM5 vs WM6 issue.
Based on the only specs I have seen, it lists the Kaiser with 256MB of ROM and 128MB of RAM.
The question is what type of ROM it is. Devices can mix NOR and NAND ROM. And based on what I see in the extracted ROM, at least some of it must be NOR (because of the existence of the XIP directories).
And this isn't a question of storage space. I am not trying to get more usable storage...I have an 8GB SDHC card for that. I am trying to maximize my available application space.
The point of my post is... can an application, let's say Voice Command, be moved from what might be the NAND portion of the ROM into the NOR portion of the ROM (from \SYS to \ROM\XIP)? It would take up the same amount of TOTAL ROM space, but when it executed it would NOT NEED TO BE LOADED INTO RAM, and thus your available application space is not decreased.
Again, I am not a developer, so I may be way off in asking if this can be done. However your response is one that doesn't speak to the theory I am proposing.
If I remember right, this was talked about before but on a different device, I think the Hermes. I am not sure of the reason it couldn't be done, but I just remember it couldn't, lol. Something about allocated memory maps.
Some interesting things
I found this while reading the link you posted. Some great information in there.
By the way, every SD and CF card is made out of NAND flash. So, no, you can't XIP programs stored on a storage card.
So am I reading this right: any program we have stored on our storage card won't utilize XIP?
I've been working on, & subsequently screwing up, page pool alterations for a while. Because I'm messing with a Kaiser (CE5) & a UMPC (CE6) based device, I can tell you that while the pagepool will save you some seconds, mostly with the loading of contacts, your inbox, & the boot-to-OS speed of the device, for the most part you're right, you won't see much difference.
However, with CE6, that will change. With CE6-based devices you will be able to completely control paging, be it XIP or data (read-only) paging. Maybe WM7 will introduce CE6 to PDA devices. With CE6, formerly & frequently confused with WM6, you will have 2 page pools & several controls over them, including compaction.
The effectiveness of page pool sizes can vary widely depending on the types of processes & programs you use, but suffice it to say, the average user will see little to no benefit from a larger page pool.
For all of you truly interested, there is a Platform Builder (PB) tool called DevHealth.exe that can be run from an SD card to report the actual status of the paging pool. Google it, you will find it. Kind of interesting to see what your device is actually doing before & after the changes.
AllTheWay said:
I found this wile reading the link you posted. Some great information in there.
So am I reading this right. That any program we have stored on our Storage Card won't utilize the XIP
No, but what the OP is asking is whether we can take apps, put them in the XIP section, and run them from there. I think someone should try it out: just cook up a ROM with a complete app cooked into the XIP and see what happens. The worst is bad blocks, I would guess, but maybe POF or OLI should chime in on this one.
I do believe you can XIP from an SD card; I believe MS has done this with a few test devices that utilize under-battery SD cards. I think it's not something they've pursued, mostly because of problems with system stability when the SD card is removed.
austinsnyc said:
No but what the OP is asking is if we can take apps and put them in the XIP section and run them from there. I think someone should try it out just cook up a rom with a complete app cooked in the XIP and see what happens. the worst is bad blocks i would guess but maybe POF or OLI should chime in on this one.
THANK YOU AUSTIN!
Yes, this is what I am saying. Forget about SD cards (which, according to what I have read, are all NAND and thus can't XIP).
I am just asking if some of the applications that we are cooking into the flash (the things that extract to \SYS), like MS Voice Command, CM Contacts, QuickGPS, etc., can instead be placed in XIP.
Again, this is under the assumption (which is a big assumption) that what is in \XIP gets placed in the NOR ROM and what is in \SYS gets placed in the NAND.
What this would mean is that when you execute any of the programs I mention, say QuickGPS, you won't see the RAM utilization on your device go up, meaning you will have the same amount of available free memory. This is because, with XIP (based on the description I have read), it can be executed from the ROM without being copied into RAM.
Now, my guess is that even though the NOR has faster reads than NAND, it still might be slower than RAM. So, it might take another second to open Quick GPS. However for some apps I think I would prefer the slight delay in order for my available memory to be increased.
GSLEON3 said:
I've been working on & subsequently screwing up, page pool alteraitions for awhile. Because I'm messing with a Kaiser(CE5) & a UMPC (CE6) based device, I can tell you that while the pagepool will save you some seconds, mostly with the loading of contacts, your inbox, & the boot to os speed of the device, for the most part you're right, you won't see much difference....
For all of you truly interested, there is a PB process file called DevHealth.exe, that can be used via SD card to report the actual status of the paging pool. Google it, you will find it. Kind of interesting to see what your device is actually doing before & after the changes.
Good info Leon...but better to put this in the pagepool thread so we can discuss it there (and please do). What I'm trying to get at here is not directly related to pagepool sizes and speeds.
austinsnyc said:
if I remember right this was talked about before but on a different device I think the hermes. I am not sure of the reason it couldnt be done but I just remember it couldnt lol. Something about allocated memory maps
That's entirely possible. Again, I'm being the "idea guy" as I'm trying to synthesize some info I have absorbed. The most programming I do is high level scripting and VB so I don't know details about how this stuff would actually work down at the memory map level. There may be some other issues as well. I was hoping there was someone in these forums who actually had the knowledge (and wasn't just following cooking tutorials like most of us) of how this stuff truly interacts.
Also please reference my post in the pagepool thread, has some good thoughts (I think!):
http://forum.xda-developers.com/showpost.php?p=2101324&postcount=99
bengalih said:
http://forum.xda-developers.com/showpost.php?p=2101324&postcount=99
In addition to the above, I have found some more info which may partially defeat my reasoning:
XIPKernel
There are portions of the deepest parts of the OS that have to XIP. If you're on NOR, that code just XIPs like everything else. Not so on NAND. For a NAND system to boot, it needs to load this code into RAM first and then run it from there. When the system is running, it can't really tell if it's running from RAM or ROM, so it assumes it's running from ROM and doesn't count this space.
The XIPKernel region tends to be between 1.5 and 2M.
So it seems that just because we have a \XIP directory doesn't mean we have NOR ROM. It could just be the area where the ROM places code that NEEDS to XIP, which would mean it gets copied to RAM on boot (instead of just XIPing from where it sits in ROM). This could also account for some of the additional "missing" RAM up to 128MB.
Therefore, moving an application into \XIP before cooking doesn't mean it will save us any RAM (again, assuming we DON'T have NOR ROM). It could however speed up that application, since we are basically "pre-loading" it into RAM instead of waiting for the load to be user-initiated.
I guess a question now is: does all XIPed code load at the same time, and is that all at boot? It does us no good to try to load up Voice Command (or QuickGPS, etc.) before the supporting code from the OS has loaded.
So this may all be a wash, but it does help explain some of the interaction better, and even if it leads nowhere, at least we will better understand how the device's memory system works.
Ok, I've seen numerous questions about the app called HTC Performance & have disassembled the executable. While my knowledge of these things is by no means great, I have found some very interesting functions.
Maybe someone with more reverse engineering & code experience can take a look, but with IDA Pro there are some very interesting functions & strings.
Some of the calls & code are deprecated & no longer used in WM6 + but some of them are.
It is possible, especially for eVB-equipped ROMs, that this program acts like a server of sorts for some programs & processes. But since it is initiated with Smartphone-only functions, I doubt it.
Some of the more interesting functions in the HTC Performance app are:
SHInitExtraControls, which appears to be for Smartphone only
GetSystemMetrics - WM6 Pro valid - gets system width & height in pixels. Possible uses include program optimization based on the appropriate pixel returns
CreateMutexW - coredll - creates a named mutex; commonly used to stop a second instance of an app from starting, or to synchronize access to shared data
memmove = slower than memcpy but safe when source and destination overlap; may be used to ensure unicode strings are not accessed at odd memory addresses, which could increase speed in apps that otherwise do this incorrectly
InterlockedCompareExchange, InterlockedDecrement, InterlockedExchange, InterlockedExchangeAdd, and InterlockedIncrement = functions provide a simple mechanism for synchronizing access to a variable that is shared by multiple threads. The threads of different processes can use this mechanism if the variable is in shared memory.
InterlockedCompareExchange = function performs an atomic comparison of the Destination value with the Comperand value. If the Destination value is equal to the Comperand value, the Exchange value is stored in the address specified by Destination. Otherwise, no operation is performed
YAXPAX = not a function in its own right; it looks like a fragment of an MSVC-decorated C++ symbol (YA = __cdecl, X = void return, PAX = void*)
ReleaseMutex = releases ownership of a mutex so that other threads waiting on the shared resource can acquire it
EnumWindows = (..) to execute a task. EnumWindows (..) enumerates through all existing windows (visible or not) and provides a handle to all of the currently open top-level windows. Each time a window is located, the function calls the delegate (named IECallBack) which in turn calls the EnumWindowCallBack (..) function and passes the window handle to it. Not sure how this is used though.
LoadAcceleratorsW = ??? Appears to be old CE function. Deprecated???
realloc = String Optimization
malloc = RAM Allocation
GetDeviceCaps = gets dev info, can be used to the optimize redraw based on device constraints already known
LocalReAlloc = This function changes the size or the attributes of a specified local memory object. The size can increase or decrease.
EnterCriticalSection = The threads of a single process can use a critical section object for mutual-exclusion synchronization. The process is responsible for allocating the memory used by a critical section object, which it can do by declaring a variable of type CRITICAL_SECTION. can grant exclusive access to memory
ReleaseDC = This function releases a device context (DC), freeing it for use by other applications. The effect of ReleaseDC depends on the type of device context.
Again, I am not a programmer. I know a few things & am pretty competent with the lower operations of firmware, but the rest of the CE code is not my cup of tea. There are many more functions in HTC Performance; these are only a few found after a brief 20-minute peek.
But maybe, just maybe, some of the function calls can help us understand whether this app can be modified to function properly on the Kaiser.
It is possible that on some eVB-enabled ROMs parts of the HTC Performance app are retained & possibly function; that is pure speculation though, & again I doubt it.
Any CE code experts out there wanna take a look? I have, & based on what I've seen, I'll have to say FICTION!
Info
Hi,
Since I haven't really had time to see what's new and all, I haven't the foggiest idea what HTC Performance is or what it is supposed to do.
But I can tell you that the functions you listed are not special in any way. Most of them would appear in every application that displays anything on the screen. For instance getting system metrics is required for any application displaying scroll bars, etc. All the interlocked and critical section stuff is just thread synchronization.
But that's OK; the use of Windows APIs really doesn't mean much, other than that the application runs on Windows...it's the non-API stuff that defines an application. If the application you're looking at writes changes to registry keys, etc., you may want to look into that, as those would be the lasting changes to the device.
Cheers,
Why is there concurrency related stuff in there? Surely that should all be handled by the operating system, rather than a running application? (That said, most of my concurrency knowledge is either theoretical or based at a high level, so I could be wrong here).
High Performance Cab
You can also check this thread...
http://forum.xda-developers.com/showthread.php?t=366792
Quentin- said:
Hi,
Since I haven't really had time to see whats new and all I haven't the foggiest idea what HTC Performance is/what it is supposed to do....
No, the registry would not necessarily be the place to look. For this application the registry will only report whether or not the app is running. It is supposed to be a speed optimization application. My thought was that it could possibly be acting as a server of sorts, handling some thread optimization & resource allocation. Correct though, most of those APIs are importing device info; beyond that, I am lost as to how it handles it, if it does at all. That said, there are many things that don't show up in the registry & many things can't be altered via the registry because they are set or handled before initialization or loading of the registry, possibly thru the OAL. Even tougher to say in a two-chip device with as little known info as the MSM7xxx processors. If anyone with real coding knowledge could take a look at the executable & see just what it's doing with the info, that would be great.
dperren said:
Why is there concurrency related stuff in there? Surely that should all be handled by the operating system, rather than a running application? (That said, most of my concurrency knowledge is either theoretical or based at a high level, so I could be wrong here).
Click to expand...
Click to collapse
That is indeed the center of my question & also what leads me to question how the app functions. Is it playing a role in thread priority optimization, & possibly redraw based on the polls, or is it just a partially gutted application missing a ton of registry data that never worked?
I made a kitchen that creates ROMs in both XPR and LZX compressions.
I'm also trying to port it to the Artemis, the Trinity, and the Hermes.
The way the kitchen works is:
Run "RunMe.bat"
Choose compression algorithm. (XPR or LZX)
Follow the normal Bepe's kitchen process.
Wait as the kitchen creates the ROM (like Bepe's kitchen, but with whatever compression you chose.)
The kitchen will automatically open up the imgfs.bin in a hex editor and automatically adjust it for the wanted compression before it builds the ROM.
It automatically inserts the proper XIP drivers.
It will automatically set the Pagepool to 4MB but give you the option to change it to something else as it does.
It then automatically creates the NBH and then finally launches whatever flasher (CustomerRUU, FlashCenter, or whatever your devices use) to flash the ROM.
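The "adjust imgfs.bin in a hex editor" step boils down to a tiny byte-patch. Here is a sketch of that idea in Python. Note the assumptions: the `XPR\x00`/`LZX\x00` tag strings, and the idea that a single 4-byte tag in the header selects the compression, are hypothetical here and would have to be verified against a real imgfs.bin dump before use.

```python
def set_compression(image: bytes, want: bytes) -> bytes:
    """Swap the compression tag in a ROM image blob.
    Sketch only: assumes a 4-byte tag somewhere in the imgfs header."""
    for tag in (b"XPR\x00", b"LZX\x00"):
        if tag in image:
            return image.replace(tag, want, 1)
    raise ValueError("no known compression tag found")

# Toy blob standing in for imgfs.bin:
rom = b"IMGFS-HEADER" + b"XPR\x00" + b"...payload..."
patched = set_compression(rom, b"LZX\x00")
print(patched.count(b"LZX\x00"))  # 1
```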
For those who don't know what LZX compression is:
It's a compression algorithm that, although slower (by 1-4% in real-life use), gives a good amount of free storage space. In some cases (like the Herald) it makes the ROM so small that it has to be flashed through an SD card due to the Herald's flashing size requirements. On an average 50MB ROM, it takes off about 10MB. The actual cooking itself does take a LOT more CPU and RAM on your PC, though. Especially the RAM. (It's because the tools that actually do the compression weren't really optimized for the job.)
Anyhow, let me know if you want it.
I would love to use it if you're willing to share. I really want to get into cooking. I just don't have time these days.
Here is a bit that one of my buddies wrote on compression, for those interested. He's actually discussing LZMA (which is what 7zip uses by default), which is similar to the LZ1 dictionary (the same dictionary used in LZX compression algorithms). This should help people to understand what settings to use depending on the situation.
Hehe. I've long been fascinated by compression algorithms ever since studying the original Lempel-Ziv algorithm some years ago.
The key here is the dictionary, which, in a simplified nutshell, stores patterns that have been encountered. You encounter pattern A, compress/encode it, and then the next time you see pattern A, since it's already in the dictionary, you can just use that. With a small dictionary, what happens is that you'll encounter pattern A, then pattern B, C, D, etc., and by the time you encounter pattern A again, it's been pushed out of the dictionary so that you can't re-use it, and thus you take a hit on your size. All compression algorithms basically work on eliminating repetition and patterns, so being able to recognize them is vital, and for dictionary-based algorithms (which comprise most mainstream general-purpose algorithms), a small dictionary forces you to forget what you saw earlier, thus hurting your ability to recognize those patterns and repetitions; small dictionary == amnesia.
Solid compression is also important because without it, you are using a separate dictionary for every file. Compress file 1, reset dictionary, compress file 2, reset dictionary, etc. In other words, regular non-solid compression == amnesia. With solid compression, you treat all the files as one big file and never reset your dictionary. The downside is that it's harder to get to a file in the middle or end of the archive, as it means that you must decompress everything before it to reconstruct the dictionary, and it also means that any damage to the archive will affect every file beyond the point of damage. The first problem is not an issue if you are extracting the whole archive anyway, and the second issue is really only a problem if you are using it for archival backup and can be mitigated with Reed-Solomon (WinRAR's recovery data, or something external, like PAR2 files). Solid archiving is very, very important if you have lots of files that are similar, as is the case for DirectX runtimes (since you have a dozen or so versions of what is basically the same DLL). For example, when I was compressing a few versions of a DVD drive's firmware some years ago, the first file in the archive was compressed to about 40% or so of the original size, but every subsequent file was compressed to less than 0.1% of their original size, since they were virtually duplicates of the first file (with only some minor differences). Of course, with a bunch of diverse files, solid archiving won't help as much; the more similar the files are, the more important solid archiving becomes.
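The solid-vs-non-solid effect described above is easy to reproduce with Python's `lzma` module (my own demo, not from the post). Compressing two near-identical "files" separately resets the dictionary between them; compressing them as one stream lets the second copy be found in the dictionary almost for free:

```python
import lzma
import os

# Two "files" that are near-duplicates, like successive firmware versions.
a = os.urandom(128 * 1024)          # one version: incompressible on its own
b = a[:-16] + b"v2 patch bytes!!"   # same data with a tiny change at the end

# Non-solid: dictionary reset between files, so each pays full price.
separate = len(lzma.compress(a)) + len(lzma.compress(b))
# Solid: one stream, so the dictionary built from `a` is reused for `b`.
solid = len(lzma.compress(a + b))

print(separate, solid)  # solid is roughly half the size here
```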
The dictionary is what really makes 7-Zip's LZMA so powerful. DEFLATE (used for Zip) uses a 32KB dictionary, and coupled with the lack of solid archiving, Zip has the compression equivalent of Alzheimer's. WinRAR's maximum dictionary size is 4MB. LZMA's maximum dictionary size is 4GB (though in practice, anything beyond 128MB is pretty unwieldy and 64MB is the most you can select from the GUI; plus, it doesn't make sense to use a dictionary larger than the size of your uncompressed data, and 7-Zip will automatically adjust the dictionary size down if the uncompressed data isn't that big). Take away the dictionary size and the solid compression, and LZMA loses a lot of its edge.
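The "small dictionary == amnesia" point can be demonstrated directly, too. In this sketch (again using Python's `lzma`, with an explicit LZMA2 filter chain), the same 64 KB chunk recurs at 64 KB intervals: a 32 KB dictionary can never see back far enough to reuse it, while a 1 MB dictionary catches every repeat:

```python
import lzma
import os

chunk = os.urandom(64 * 1024)   # 64 KB of noise
data = chunk * 4                # the same 64 KB recurring four times

def packed(dict_size):
    filters = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
    return len(lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters))

small = packed(32 * 1024)       # repeats lie outside the window: "amnesia"
large = packed(1024 * 1024)     # window spans all four copies

print(small, large)  # small is near the full 256 KB, large is near one 64 KB copy
```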
In the case of the DirectX runtimes, because of their repetitive nature and the large size of the uncompressed data, a large dictionary and solid compression really shines here, much more so than it would in other scenarios. My command line parameters set the main stream dictionary to 128MB, and the other much smaller streams to 16MB. For the 32-bit runtimes, where the total uncompressed size is less than 64MB, this isn't any different than the Ultra setting in the GUI, and 7-Zip will even reduce the dictionary down to 64MB. For the 64-bit runtimes, the extra dictionary size has only a modest effect because 64MB is already pretty big, and also because the data with the most similarity are already placed close to each other (by the ordering of the files). The other parameters (fast bytes and search cycles) basically makes the CPU work harder and look more closely when searching for patterns, but their effect is also somewhat limited. In all, these particular parameters are only slightly better than Ultra from the GUI, but I much rather prefer running a script than having to wade through a GUI (just as I prefer unattended installs over wading through installs manually).
Oh, and another caveat: Ultra from the GUI will apply the BCJ2 filter to anything that it thinks is an executable. This command line will apply it to everything. Which means that this command line should not be used if you are doing an archive with lots of non-executable code (in this case, the only non-executable is a tiny INF file, so it's okay). This also means that this command line will perform much better with files that don't get auto-detected (for example, *.ax files are executable, but are not recognized as such by 7-Zip, so an archive with a bunch of .ax files will do noticeably better with these parameters than with GUI-Ultra). If you want to use a 7z CLI for unattended compression but would prefer that 7-Zip auto-select BCJ2 for appropriate files, use -m0=LZMA:d27:fb=128:mc=256 (which is also much shorter than that big long line; for files that do get BCJ2'ed, it will just use the default settings for the minor streams).
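For anyone who wants to experiment with these knobs without the 7z CLI, Python's `lzma` module exposes roughly the same ones. The parameter mapping below is my assumption, not documented equivalence: d27 maps to `dict_size = 2**27`, fb (fast bytes) to `nice_len`, and mc (match cycles) to `depth`; the dictionary is scaled down to 2**24 here just to keep the encoder's memory use modest.

```python
import lzma

# Rough lzma-module analogue of 7-Zip's -m0=LZMA:d27:fb=128:mc=256
# (mapping is an assumption; dict scaled down from 1 << 27 for memory).
filters = [{
    "id": lzma.FILTER_LZMA2,
    "dict_size": 1 << 24,   # stand-in for d27 (128 MB in the real command)
    "nice_len": 128,        # fb=128: look for longer matches
    "depth": 256,           # mc=256: more match-finder cycles per position
}]

data = b"the same pattern over and over " * 4000
blob = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
print(len(blob) < len(data) // 10)  # True: repetitive input shrinks dramatically
```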
dumpydooby said:
I would love to use it if you're willing to share. I really want to get into cooking. I just don't have time these days.
Here is a bit that one of my buddies wrote on compression, for those interested. He's actually discussing LZMA (which is what 7zip uses by default), which is similar to the LZ1 dictionary (the same dictionary used in LZX compression algorithms). This should help people to understand what settings to use depending on the situation.
Hehe. I've long been fascinated by compression algorithms ever since studying the original Lempel-Ziv algorithm some years ago.
The key here is the dictionary, which, in a simplified nutshell, stores patterns that have been encountered. You encounter pattern A, compress/encode it, and then the next time you see pattern A, since it's already in the dictionary, you can just use that. With a small dictionary, what happens is that you'll encounter pattern A, then pattern B, C, D, etc., and by the time you encounter pattern A again, it's been pushed out of the dictionary so that you can't re-use it, and thus you take a hit on your size. All compression algorithms basically work on eliminating repetition and patterns, so being able to recognize them is vital, and for dictionary-based algorithms (which comprise most mainstream general-purpose algorithms), a small dictionary forces you to forget what you saw earlier, thus hurting your ability to recognize those patterns and repetitions; small dictionary == amnesia.
Solid compression is also important because without it, you are using a separate dictionary for every file. Compress file 1, reset dictionary, compress file 2, reset dictionary, etc. In other words, regular non-solid compression == amnesia. With solid compression, you treat all the files as one big file and never reset your dictionary. The downside is that it's harder to get to a file in the middle or end of the archive, as it means that you must decompress everything before it to reconstruct the dictionary, and it also means that any damage to the archive will affect every file beyond the point of damage. The first problem is not an issue if you are extracting the whole archive anyway, and the second issue is really only a problem if you are using it for archival backup and can be mitigated with Reed-Solomon (WinRAR's recovery data, or something external, like PAR2 files). Solid archiving is very, very important if you have lots of files that are similar, as is the case for DirectX runtimes (since you have a dozen or so versions of what is basically the same DLL). For example, when I was compressing a few versions of a DVD drive's firmware some years ago, the first file in the archive was compressed to about 40% or so of the original size, but every subsequent file was compressed to less than 0.1% of their original size, since they were virtually duplicates of the first file (with only some minor differences). Of course, with a bunch of diverse files, solid archiving won't help as much; the more similar the files are, the more important solid archiving becomes.
The dictionary is what really makes 7-Zip's LZMA so powerful. DEFLATE (used for Zip) uses a 32KB dictionary, and coupled with the lack of solid archiving, Zip has the compression equivalent of Alzheimer's. WinRAR's maximum dictionary size is 4MB. LZMA's maximum dictionary size is 4GB (though in practice, anything beyond 128MB is pretty unwieldy and 64MB is the most you can select from the GUI; plus, it doesn't make sense to use a dictionary larger than the size of your uncompressed data, and 7-Zip will automatically adjust the dictionary size down if the uncompressed data isn't that big). Take away the dictionary size and the solid compression, and LZMA loses a lot of its edge.
In the case of the DirectX runtimes, because of their repetitive nature and the large size of the uncompressed data, a large dictionary and solid compression really shines here, much more so than it would in other scenarios. My command line parameters set the main stream dictionary to 128MB, and the other much smaller streams to 16MB. For the 32-bit runtimes, where the total uncompressed size is less than 64MB, this isn't any different than the Ultra setting in the GUI, and 7-Zip will even reduce the dictionary down to 64MB. For the 64-bit runtimes, the extra dictionary size has only a modest effect because 64MB is already pretty big, and also because the data with the most similarity are already placed close to each other (by the ordering of the files). The other parameters (fast bytes and search cycles) basically makes the CPU work harder and look more closely when searching for patterns, but their effect is also somewhat limited. In all, these particular parameters are only slightly better than Ultra from the GUI, but I much rather prefer running a script than having to wade through a GUI (just as I prefer unattended installs over wading through installs manually).
Oh, and another caveat: Ultra from the GUI will apply the BCJ2 filter to anything that it thinks is an executable. This command line will apply it to everything. Which means that this command line should not be used if you are doing an archive with lots of non-executable code (in this case, the only non-executable is a tiny INF file, so it's okay). This also means that this command line will perform much better with files that don't get auto-detected (for example, *.ax files are executable, but are not recognized as such by 7-Zip, so an archive with a bunch of .ax files will do noticeably better with these parameters than with GUI-Ultra). If you want to use a 7z CLI for unattended compression but would prefer that 7-Zip auto-select BCJ2 for appropriate files, use -m0=LZMA:d27:fb=128:mc=256 (which is also much shorter than that big long line; for files that do get BCJ2'ed, it will just use the default settings for the minor streams).
You have no idea how much the geek in me loved this post. ^_^ I have a thing about compression too, but I'm just not very well versed in it; I only know the basics.
This answered 2 of the questions I was trying to find answers to the other day. Thank you guys. I learn so much here.
Hoping some of you technical guys can help with this question.
I own an Inspire/Desire HD and was trying to understand how Android memory space works. Is all the device memory one pool that the ROMs partition as needed, with each ROM expanding to whatever size it requires and leaving the remainder for cache/Dalvik/RAM?
The reason I'm asking is that I'm running RCMix Kingdom .2 and it's larger than Capy's 6.1 version. Obviously this is attributable to Sense v.3's larger footprint. To compensate, I would be perfectly happy to remove some programs I never use, like Teeter and Polaris Office, since I own DocumentsToGo.
What I'm getting at is this: if I wanted to reduce the ROM footprint to return more memory to user space for runtime and programs, how would one go about doing so?
Thanks in advance!
Well, sadly I don't have an answer to that question yet...
I'm trying to think of a way to put KSM to the test on my Android device.
As far as I understand, it's possible that the kernel actually causes high CPU usage by trying to map and unmap memory pages over and over again.
This issue is known for Linux and other virtual machines, so it's possible the same effect will show up in the Android VM.
The tests I found are not relevant to Android.
For example:
The result is a dramatic decrease in memory usage in virtualization environments. In a virtualization server, Red Hat found that thanks to KSM, KVM can run as many as 52 Windows XP VMs with 1 GB of RAM each on a server with just 16 GB of RAM. Because KSM works transparently to userspace apps, it can be adopted very easily, and provides huge memory savings for free to current production systems. It was originally developed for use with KVM, but it can be also used with any other virtualization system - or even in non virtualization workloads, for example applications that for some reason have several processes using lots of memory that could be shared.
http://kernelnewbies.org/Linux_2_6_32
What I would really want to know is what would happen if each of these VMs ran a different application/game/audio/graphics workload at the same time? Or what if the same VM ran many different apps? I'd also want to compare CPU usage with and without KSM.
I guess I'll need a tool for that, something like 'iostat' but for memory diagnostics, and another tool to see per-process CPU usage; 'top' is not good enough for that.
Anyway, the best test should present clear results with precise data.
I'll keep looking for a legitimate way to put it to the test.
If you can think of a way to test KSM on Android, please let me know.
This is a technique that applies mostly to setups like virtualisation. For example, when you load 5 Windows XP VMs, you'll have a good 10-20 services that are practically identical in memory in each VM. Instead of each service using 10MB per VM (i.e., 10MB x 5 = 50MB), you only need, say, 15 or 20MB total using KSM. If you run different applications, it is very unlikely that anything would be saved FOR THAT APPLICATION; however, the main elements of a Windows XP system would still be there to share (drivers, explorer, firewall, logon, search and so on). This means little in a single setup, but across several VMs it has been shown to be a huge advantage. As we know, a simple XP install can actively use 500MB of RAM, and this is fairly uniform across installs.
With Android, I don't know if there are specific RAM savings to be had. I don't know enough about the inner workings, the sandbox Android puts its apps in, or how apps interact with system services. Sadly, I can't think of a good way to test it out either, but I'll be keeping an eye on this topic for someone (much) more knowledgeable to come along.
Harbb said:
Sadly, i can't think of a good way to test it out either, but i'll be keeping an eye on this topic for someone (much) more knowledgeable to come along.
Enter bedalus, who stands there with a vacant expression on his face. Harbb looks disappointed.
kernels ; battery ; ROM ; gov/sched
That entire paragraph was dedicated to you, bedalus; we both know that.
Lol
I hope someone can answer this though.
Wait for someone...
Sent from my Nexus S using xda premium
KSM does not improve performance on Android just like that. All enabling KSM does is enable SUPPORT for the feature; applications would have to actually make use of it, which they don't.
You can easily verify this like so:
echo 1 > /sys/kernel/mm/ksm/run
<wait and/or run the Applications of your choice>
cat /sys/kernel/mm/ksm/pages_sharing
If the above shows a value > 0, then you are making use of KSM; otherwise it's just available, without anyone using the feature.
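That check can be wrapped into a small script. This is only a sketch: the sysfs paths are the standard Linux KSM interface, writing to them needs root on a stock device, and the 60-second wait is an arbitrary choice.

```shell
KSM=/sys/kernel/mm/ksm

# Start the KSM daemon if we are allowed to (needs root on Android):
if [ -w "$KSM/run" ]; then
    echo 1 > "$KSM/run"
    sleep 60                       # let your apps run for a while
fi

# pages_sharing counts merged pages; 0 means nothing is using KSM:
if [ -r "$KSM/pages_sharing" ]; then
    sharing=$(cat "$KSM/pages_sharing")
else
    sharing=0                      # KSM not built into this kernel
fi

# With 4 KB pages, this approximates the RAM currently being saved:
echo "KSM is saving roughly $(( sharing * 4 )) KB"
```

Without root (or without KSM in the kernel) it simply reports 0 KB saved, which matches the point above: enabling the feature is not the same as anything using it.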
Here's an interesting article that gives a little more insight:
http://www.linux-kvm.com/content/using-ksm-kernel-samepage-merging-kvm
By the way, the same is true for zcache. If you really want to make better use of your memory (RAM), then using zram as a swap device does work (and may often make sense, too).
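For reference, setting up zram as swap usually looks something like the sketch below. This assumes a kernel built with the zram block driver and a root shell; the 64 MB size and the /dev/block/zram0 device path (the usual Android location; desktop Linux uses /dev/zram0) are illustrative.

```shell
ZRAM=/sys/block/zram0

# Only attempt this where the zram device exists and we have root:
if [ -w "$ZRAM/disksize" ]; then
    echo $(( 64 * 1024 * 1024 )) > "$ZRAM/disksize"   # 64 MB device
    mkswap /dev/block/zram0                           # format as swap
    swapon /dev/block/zram0                           # enable it
else
    echo "zram not available; commands shown for reference"
fi
```

Because pages swapped to zram are compressed in RAM rather than written to slow flash, this trades a little CPU time for effectively more usable memory.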
That all said: there appear to be efforts to make use of KSM (http://forum.xda-developers.com/showthread.php?t=1464758), so things may well change...
Any update on this...?