Presenting bmlunlock. Unlocks bml7 for writing to it via dd.
http://github.com/CyanogenMod/android_device_samsung_bmlunlock
What is this for?
Sent from my SPH-D700 using Tapatalk
plmiller0905 said:
What is this for?
Sent from my SPH-D700 using Tapatalk
"After running bmlunlock on the samsung device, one can flash the kernel using the following command:
dd if=/sdcard/zImage of=/dev/block/bml7 bs=4096"
How do you use this?
Sent from my SPH-D700 using Tapatalk
Wait for it to get packaged into a one-click installer
DanDroidOS said:
"After running bmlunlock on the samsung device, one can flash the kernel using the following command:
dd if=/sdcard/zImage of=/dev/block/bml7 bs=4096"
It looks like redbend_ua does 256kB writes, not 4kB writes. 256kB presumably corresponds to the erase block size of the NAND chip. So unless the Linux page cache does a good job coalescing the writes, or the BML driver handles this internally, I would imagine dd here does a read-modify-write operation for each page. In other words, this may potentially burn through flash erase cycles 64x faster than necessary. No?
It would probably be worth stracing redbend_ua to see its exact write behavior. I wouldn't be surprised if it uses O_DIRECT and 256kB write sizes except for the final block.
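If the 64x concern holds, one hedged workaround would be issuing the dd writes in erase-block-sized chunks. A minimal sketch, run against scratch files since /dev/block/bml7 only exists on-device:

```shell
#!/bin/sh
# Sketch, assuming a 256 kB erase block: write the image in 256 kB chunks
# rather than 4 kB ones. The /tmp paths are stand-ins; on-device this would be
#   dd if=/sdcard/zImage of=/dev/block/bml7 bs=262144
dd if=/dev/zero of=/tmp/zImage.fake bs=4096 count=80 2>/dev/null  # fake 320 kB kernel image
dd if=/tmp/zImage.fake of=/tmp/bml7.fake bs=262144 2>/dev/null    # 256 kB write calls
wc -c < /tmp/bml7.fake                                            # prints 327680
```

Whether the FSR driver actually batches erases per write call is an open question; this only changes how the data is handed to the kernel.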
No, it doesn't. Run an strace against redbend_ua and you'll see it reads 256k at a time, and then does 64 4k write calls. (Coincidentally how I also figured out how redbend works)
You may be right, and that may not be the erase block size. However, it is the PAGE_SIZE. IIRC, you should generally write in PAGE_SIZE increments, otherwise you risk getting non-contiguous memory, which may cause flashing to fail. At least this is my experience on HTC devices.
Though it is using an ioctl to do the write. So who knows what magic is happening there. I could reverse engineer a "bmlwrite" type thing as well, but I am not too motivated. This should be enough to get people down the right path.
strace:
http://pastebin.com/di6kLXkB
Koush said:
No, it doesn't. Run an strace against redbend_ua and you'll see it reads 256k at a time, and then does 64 4k write calls. (Coincidentally how I also figured out how redbend works)
You may be right, and that may not be the erase block size. However, it is the PAGE_SIZE. IIRC, you should generally write in PAGE_SIZE increments, otherwise you risk getting non-contiguous memory, which may cause flashing to fail. At least this is my experience on HTC devices.
Though it is using an ioctl to do the write. So who knows what magic is happening there. I could reverse engineer a "bmlwrite" type thing as well, but I am not too motivated. This should be enough to get people down the right path.
strace:
http://pastebin.com/di6kLXkB
How much beer would it take to motivate you? lol
Koush said:
IIRC, you should generally write in PAGE_SIZE increments, otherwise you risk getting non-contiguous memory, which may cause flashing to fail. At least this is my experience on HTC devices.
In this case it probably doesn't matter; it's not direct I/O, just dirty pages hitting the page cache. Linux should coalesce them before flushing them out, and hopefully the FSR driver will minimize block erases. My thinking was that using direct I/O with a 256kB write size should serve as a "strong hint" to the FSR driver to perform only a single erase per block. Physically contiguous memory shouldn't matter unless the driver actually does DMA from the userspace buffer. But yes, it's hard to get 256kB of physically contiguous pages.
Koush said:
Though it is using an ioctl to do the write. So who knows what magic is happening there.
It's making BML_RESTORE ioctls to the FSR driver, which is sadly proprietary, so we can't see what's going on in there. But since it's a restore function, presumably the driver issues block erases whenever there's a 4k write at the start of an erase block. Since partitions are aligned to (256 kB) erase block boundaries anyway, it's not like it would ever "erase too much".
Koush said:
I could reverse engineer a "bmlwrite" type thing as well, but I am not too motivated.
Thanks for the strace. Since we lack the FSR sources, it's going to take more poking at redbend to figure out the BML_RESTORE interface. Odd that it reopens the device and calls BML_UNLOCK_ALL on every block write, though.
Can other bml partitions be unlocked using the same principle?
Related
Hi,
I contacted VS support about a problem described here:
http://forum.xda-developers.com/showthread.php?t=933128
They said that they'd rma it, so I should send it to them. They said turnaround was 10-14 days.
I was wondering if anyone has recent experience with them? Are they turning these around in that timeframe, and are the repairs effective?
I can probably be without my Gtab, but I'd hate to wait that long and get it back with the same problem!
Thanks,
Jim
Ask if they cross ship?
Did ask, but it wasn't an option.
Jim
Let us know how your experience is. Looks like you get to be the RMA guinea pig.
edit- I'm guessing the RMA is for the flash?
muqali said:
Let us know how your experience is. Looks like you get to be the RMA guinea pig.
edit- I'm guessing the RMA is for the flash?
Yes, because of the bad blocks. I'm still debating, though, because I don't have 100% confirmation that the dmesg msg and the inability to nvflash --read partition 11 are correlated; plus my Gtab seems to be running fine so far, other than that ..
Jim
bad blocks in dmesg
So I checked your other thread and decided to dig thru my dmesg since I recently reflashed. I see bad blocks as well, although most of mine are in cache. I can post those later if you like.
I also have no problems with my tab on vegan but I'm currently running gojimis cm7 so any issues I've been attributing to that.
Could this possibly be the nand controller marking known bad blocks so the os knows not to use them? Kinda like how ssds have an extra amount built in to compensate for wear and
such?
Maybe someone else could check their dmesg and see if they show any errors.
I think later ill try to emulate what you did with nvflash and see what I get.
Nosunshine said:
So I checked your other thread and decided to dig thru my dmesg since I recently reflashed. I see bad blocks as well, although most of mine are in cache. I can post those later if you like.
I also have no problems with my tab on vegan but I'm currently running gojimis cm7 so any issues I've been attributing to that.
Could this possibly be the nand controller marking known bad blocks so the os knows not to use them? Kinda like how ssds have an extra amount built in to compensate for wear and
such?
Maybe someone else could check their dmesg and see if they show any errors.
I think later ill try to emulate what you did with nvflash and see what I get.
My theory is that there are multiple levels of operation, e.g. in Android, the OS has loaded and is looking at the physical media as a yaffs partition, but before that, i.e., when the boot loader is running, it sees the raw media. I don't know enough about Android to know if the bad blocks are visible and at which levels.
Yes, if you have a chance to try nvflash, please post. What I'd be interested in is if you could do --getpartitiontable to get the list of partitions, figure out the part # for CAC, then do --read on that partition, to see if you get read failure.
If you do, you'd be the 3rd person (tcrews and myself), so that would be more evidence that dmesg bad block msgs == bad --read.
I have another theory, that maybe some of the 'mysterious' problems that people see like magic # mismatch, etc., may be related.
Jim
Well, if I'm not mistaken, CM7 uses ext4 instead of yaffs for at least the system partition. If that's true, then what we're seeing is most likely related directly to the hardware.
I'd almost be willing to bet this is the NAND controller driver marking blocks that the OS can't use, but whether it's causing problems now or down the road, I don't know.
It's strange that it shows up in the logs if that's the case. I would think this would be obscured from the user and handled by the controller at a hardware level.
I'll post my logs and nvflash findings in your other thread so as to not muck this one up too bad; should have them up by tomorrow.
That is why I had earlier wondered if the standard Linux command badblocks was available to use before you ran an mkfs. Badblocks will make a log of all the bad blocks on a device, and that log can be passed to mkfs so that instead of having to wait until you run mkfs to check for bad blocks, you can have already done it (and hopefully gotten your data off too).
http://www.yaffs.net/yaffs-talk-slide-16-ecc
In reading that slide from their site it seems that either the MTD/device or YAFFS can do the ECC. What you're seeing might be the hardware reporting to the YAFFS driver the blocks that are bad so it can do whatever it needs to do to work with it. Of course any disk utility that thinks a reported bad block is a problem will choke.
I'm not aware of any utilities that are like dd but work after the filesystem driver and not on the device itself, but it is quite possible they exist. It is also possible I am completely off base here.
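For reference, a minimal sketch of that badblocks-then-mkfs flow for an ext filesystem. The device path is a placeholder, and the destructive steps are shown commented out; the executed part only demonstrates the list-file format badblocks produces:

```shell
#!/bin/sh
# badblocks writes one bad-block number per line; mke2fs/mkfs.ext4 -l consumes
# that list so those blocks are never allocated. Placeholder device path:
#   badblocks -b 4096 -o /tmp/bad.txt /dev/block/mmcblk0p2
#   mkfs.ext4 -b 4096 -l /tmp/bad.txt /dev/block/mmcblk0p2
# Simulated list file, same one-number-per-line format badblocks emits:
printf '1042\n20931\n' > /tmp/bad.txt
wc -l < /tmp/bad.txt   # prints 2
```

Note the block size given to badblocks and mkfs must match, or the block numbers in the list won't line up.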
muqali said:
That is why I had earlier wondered if the standard Linux command badblocks was available to use before you ran an mkfs. Badblocks will make a log of all the bad blocks on a device, and that log can be passed to mkfs so that instead of having to wait until you run mkfs to check for bad blocks, you can have already done it (and hopefully gotten your data off too).
http://www.yaffs.net/yaffs-talk-slide-16-ecc
In reading that slide from their site it seems that either the MTD/device or YAFFS can do the ECC. What you're seeing might be the hardware reporting to the YAFFS driver the blocks that are bad so it can do whatever it needs to do to work with it. Of course any disk utility that thinks a reported bad block is a problem will choke.
I'm not aware of any utilities that are like dd but work after the filesystem driver and not on the device itself, but it is quite possible they exist. It is also possible I am completely off base here.
I know you suggested running badblocks, but I haven't figured out HOW to do that yet (what command?). Remember that on stock TNT /system is yaffs, and I don't think there's an mkfs.yaffs?
Jim
Oh. badblocks is the command. For the syntax you'd want to Google the man page. I don't think it is part of busybox, but it might be on the system. If not, the source code is easily compiled.
Nosunshine said:
Well, if I'm not mistaken, CM7 uses ext4 instead of yaffs for at least the system partition. If that's true, then what we're seeing is most likely related directly to the hardware.
I'd almost be willing to bet this is the NAND controller driver marking blocks that the OS can't use, but whether it's causing problems now or down the road, I don't know.
It's strange that it shows up in the logs if that's the case. I would think this would be obscured from the user and handled by the controller at a hardware level.
I'll post my logs and nvflash findings in your other thread so as to not muck this one up too bad; should have them up by tomorrow.
The thing is, remember I get the failed --read when I do the nvflash --read. At that point, the only things involved are the boot loader code on the gtab end, and nvflash on the pc end, plus I get the same read fail at the same byte count w different nvflash versions (bekit's and nvidia sdk) and both on windows and Linux.
The point is that there is no filesystem involved when the --read fails, so other than some bug in multiple versions of nvflash, or some bug in the boot loader, it has to be a hardware (nand) problem.
Jim
muqali said:
Oh. badblocks is the command. For the syntax you'd want to Google the man page. I don't think it is part of busybox, but it might be on the system. If not, the source code is easily compiled.
Thanks. If 'badblocks' is literally the command name then I don't have it/can't find it, either in TNT or when running CWM (and adb'ing).
If you have a binary that'll work, can you post or PM?
Jim
Yes, a hardware problem was what I was getting at. Usually NAND has in-hardware remapping, but I was thinking that since yaffs can also handle it (and all NAND has bad blocks), maybe it was designed to not remap bad blocks. Since nvflash is reading the raw device and not through yaffs, it sees what yaffs itself corrects for when in place. Not sure if I make sense.
Nosunshine said:
Well, if I'm not mistaken, CM7 uses ext4 instead of yaffs for at least the system partition. If that's true, then what we're seeing is most likely related directly to the hardware.
I'd almost be willing to bet this is the NAND controller driver marking blocks that the OS can't use, but whether it's causing problems now or down the road, I don't know.
It's strange that it shows up in the logs if that's the case. I would think this would be obscured from the user and handled by the controller at a hardware level.
I'll post my logs and nvflash findings in your other thread so as to not muck this one up too bad; should have them up by tomorrow.
You may already know, but you can check what kind of filesystem is used for /system. Just terminal or adb, then type 'mount'. That'll list all the mounts, and show what kind of filesystem is used for each.
Jim
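For reference, a sketch of that check. On stock TNT, /system should show up as yaffs2; /proc/mounts carries the same information as mount and is easier to filter:

```shell
#!/bin/sh
# From a terminal app or adb shell: list each mount point with its
# filesystem type. The one-liner version of the same check would be
#   mount | grep system
awk '{print $2, $3}' /proc/mounts | head -5
```

The second and third fields of /proc/mounts are the mount point and filesystem type, so this prints pairs like "/system yaffs2" on the device.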
muqali said:
Yes, a hardware problem was what I was getting at. Usually nand has in hardware remapping but i was thinking since yaffs can also handle it(and all nand has bad blocks), that maybe it was designed to not remap bad blocks. Since nvflash is reading the raw device and not from the yaffs it sees what yaffs itself corrects for when in place. Not sure if i make sense.
Unfortunately, that makes perfect sense, i.e., that's what I think is going on.
Jim
Badblocks
Yea, busybox doesn't have a badblocks, and the Android version of busybox has even fewer applets. I think you can build a version of it from source to add the extra functionality, but idk.
I also did a little hunting and found that if I run find | grep nand from a terminal, there are what look to be drivers for the controller in /sys/devices/platform/tegra_nand and /sys/bus/platform/drivers/tegra_nand. So I think this is definitely the controller telling Android what it can't write to because of the bad blocks.
The question is, what does this mean? Also, go check the other thread; I'm about to post my dmesg.
jimcpl said:
You may already know, but you can check what kind of filesystem is used for /system. Just terminal or adb, then type 'mount'. That'll list all the mounts, and show what kind of filesystem is used for each.
Jim
I did not know that, thanks for pointing it out!
Yea, it looks like I was wrong; /system and /cache are yaffs2 and /data is ext3.
Sorry for the confusion. I was sure I read somewhere that Gingerbread was going to be ext4, but maybe CM7 hasn't made it that far yet.
Context/Summary:
-I'm changing the Android platform to create two partitions on an SD card. I need to do this as early as possible. I'm currently trying at init.rc
-It would be nice to obfuscate access to one of the partitions. If I could keep it hidden, that would be better.
And... the long story:
I'm trying to create new partitions on an sdcard in a device, and it needs to be done as early as possible. I thought that init.rc should be the best location for this, so I tried to add a script call to perform the task, but I'm unable to create these partitions (or get information about the reason for the failure). First of all, is this premise valid? Should I be able to do this?
I call the script by:
service myscript /system/bin/logwrapper /system/bin/myscript.sh
disabled
oneshot
at init-time
And the content´s of the .sh file is
fdisk /dev/sdcard < mykeys.input
where "mykeys.input" is the sequence of keys used to perform the task of creating the partitions.
Well, is this the recommended way to do this?
thanks!
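On the fdisk-from-stdin approach: the input file is just the keystrokes you would type interactively. A hypothetical mykeys.input creating two primary partitions might look like the sketch below; the exact key sequence and device path are assumptions (they vary by fdisk build, and sfdisk is usually the more script-friendly tool for this):

```shell
#!/bin/sh
# Hypothetical fdisk key sequence: n=new partition, p=primary, partition
# number, blank line = default start, size; the final w writes the table.
cat > /tmp/mykeys.input <<'EOF'
n
p
1

+512M
n
p
2


w
EOF
# On-device (device path is a placeholder):
#   fdisk /dev/block/mmcblk1 < /tmp/mykeys.input
wc -l < /tmp/mykeys.input   # prints 11
```

One caveat with this style: if fdisk's prompts ever change or an error occurs mid-sequence, the remaining keys are misinterpreted silently, which may explain failures with no useful diagnostics.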
Not too sure about the boot order and its effect on what you are trying to achieve. What phone and OS? You might want to look at your phone's logs to see in what order what/which/where is going on when, as that could explain your issues a bit more clearly and possibly even provide the mystery errors. If not, try running it in an emulator where you can make the boot-up verbose and log it.
As for hiding the partition, have you tried formatting the sdcard outside the phone environment? The hiding ability should be obtainable if you were to format in gparted, i.e., format it how you want size-wise and format-wise, then for the partition you want hidden, flag it lba. Not sure if lba hides it from your phone or not, but worth a shot.
*edit* what are you trying to achieve again? Dual booting os on phone? If so, I would take a look more towards the /dev/loop and chmod approach. Also keep in mind if this is what you are aiming at you might want to make 3 partitions as a swap partition would be beneficial.
Sent while wearin my foam helmet ridin the short bus.
Hi blackadept:
What I want: Ensure that there are two partitions on the sdcard as early as possible (I must create them if needed). My focus now is on how to create them. The logic of "if/when" to do it I will deal with later.
Why I want this: Project requirement. Not negotiable.
What will be stored: Part 1: User accessible. Nothing special. Part 2: Special data used by an apk.
"Phone model": It's a tablet. STI's tablet. Android 2.2
Well the partitions will be there just from formatting the card, as to whether or not the init sees that I am not sure. I'm one of them poor simple folk who ain't got no money....aka I don't have the fun fancy toys like a tablet. haha. Only reason I bring that up is for the fact that being as I am not around them, literally haven't even seen one let alone hold lmao, not sure how it's boot up goes.
Have you tried creating a partition and formatting it with various flags such as lba to hide it from the OS? If we are talking small sizes here, then I'd think you could hide it within such a flagged partition surrounded by fluff. Throw some encryption into the mix and you're gtg. At that point, all that's left for you to do would be writing a script to navigate the maze and unlock it. If that would work, then it would be a fairly easy out.
Otherwise we could go back to the "dual booting" that I brought up in the last post. Being as my phone can't mnt to bin *droid x.....hate u Motorola* I have done all of my dual and triple boots via looping thru /dev. This could work for you as well, tho again I'm not familiar with the tablets. If you did that tho..... well you could hide it in a myriad of ways.... flags, encryption, straight up "Where's Waldo" type shenanigans....
Have you ever put an ARM OS onto an Android device before? If so, maybe give it a shot and let me know? The only question I'm wondering about, tho, is Android's ability to see the flag and be able to handle it. Also, the level of root that particular device has (regular not-so-super user like my phone, or is it completely unlockable?) would determine a game plan too in a way. If you have full access then you could just format the card thrice (sorry, always wanted to use that in a sentence and feel all smert), making a special ext3 partition with the flags or encryption, and make note in the root mnts of its existence thru your init script (tho just giving physical note to it.... not size or content). Write your .apk or specialized script with the UUID or GUID or w/e the *beep* Android uses this week, and again you win at Android....
Sorry for the long winded verbal response....lately I always seem to post when I ain't slept for 2-3 days as opposed to when I ain't delirious...
So everyone (including me) has noticed that the Transformer slows down when doing I/O. I originally thought this was a hardware issue (slow memory? slow bus?) but from various threads it sounds like third-party OSes fix the issue. So I have two questions:
Can someone explain what the Asus kernel does wrong (or how third-party kernels fix the issue)?
Why can't Asus copy the fixes from third-party kernels into their kernel? (I presume this is a kernel issue and not support software around the kernel, but maybe that presumption is incorrect; maybe it is a driver issue, or maybe there really is a hardware issue?)
jake21 said:
So everyone (including me) has noticed that the Transformer slows down when doing I/O. I originally thought this was a hardware issue (slow memory? slow bus?)
Random write of small blocks to the internal eMMC is slow. Flash memory has huge erase blocks (typically a couple of megabytes) and large write blocks. Writing 4KB is a relatively slow process.
jake21 said:
but from various threads it sounds like third-party OSes fix the issue.
They can't fix it, they can only work around the issue by tweaking the kernel's caching parameters. Or, in the extreme case, disabling the fsync system call. Usually, an application calls fsync to ensure data has been written to the disk, so that even in case of a following crash or unexpected power loss the data on the medium is consistent. And normally fsync waits until the write command has completed. If you disable fsync, the app no longer has to wait, therefore no more lag. The data still resides in the RAM and is eventually written to the card by the background cache flush thread.
Downside of disabling fsync: If the tablet crashes in the wrong moment, you may in the worst case lose all your data, run into a bootloop, etc.
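The behaviour described above can be poked at directly: dd's conv=fsync flag forces an fsync before dd exits, and the caching parameters kernels tweak live under /proc/sys/vm. A quick sketch with a scratch file (the sysctl values printed vary per kernel):

```shell
#!/bin/sh
# conv=fsync makes dd call fsync, so the data must reach the medium before
# dd returns; without it the write may sit in the page cache until the
# background flush thread runs.
dd if=/dev/zero of=/tmp/sync.test bs=4096 count=1 conv=fsync 2>/dev/null
wc -c < /tmp/sync.test   # prints 4096
# The knobs ROMs tweak instead of disabling fsync outright:
cat /proc/sys/vm/dirty_writeback_centisecs   # flush-thread wakeup interval
cat /proc/sys/vm/dirty_ratio                 # dirty-memory % that forces writeback
```

Raising those vm settings makes writes lazier (less lag, more data at risk on a crash), which is the milder version of the fsync-disabling trade-off.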
If this is true, how come with the original release ICS 4.0.3 everything runs smooth and fast? I installed the latest Jelly Bean 4.1 and it was real slow; I downgraded back to ICS 4.0.3 and it is fast again. Is it a driver issue? It can't be hardware.
Thanks
_that said:
Random write of small blocks to the internal eMMC is slow. Flash memory has huge erase blocks (typically a couple of megabytes) and large write blocks. Writing 4KB is a relatively slow process.
They can't fix it, they can only work around the issue by tweaking the kernel's caching parameters. Or, in the extreme case, disabling the fsync system call. Usually, an application calls fsync to ensure data has been written to the disk, so that even in case of a following crash or unexpected power loss the data on the medium is consistent. And normally fsync waits until the write command has completed. If you disable fsync, the app no longer has to wait, therefore no more lag. The data still resides in the RAM and is eventually written to the card by the background cache flush thread.
Downside of disabling fsync: If the tablet crashes in the wrong moment, you may in the worst case lose all your data, run into a bootloop, etc.
Ok, I can understand the issue with fsync and small writes, but then all tablets would have this issue (unless the Infinity used a particularly poor choice of hardware). Also, does this indicate that if writes were disabled in (for example) browsers, they would be silky smooth?
-
It would be nice if the tablet could mark certain directories as critical and flush those faster than other directories (perhaps abusing the meaning of the sticky bit on the directory). Certain non-critical data could avoid an immediate flush (though if Android apps are calling fsync explicitly, there might be some stickiness in changing the behavior of the API). Anyway, is my understanding correct that you are indicating that Asus either used a particularly poor choice of eMMC or tweaked the kernel to flush more frequently?
gordo2000 said:
If this is true, how come with the original release ICS 4.0.3 everything runs smooth and fast? I installed the latest Jelly Bean 4.1 and it was real slow; I downgraded back to ICS 4.0.3 and it is fast again. Is it a driver issue? It can't be hardware.
I didn't see a big performance difference between ICS and JB, even though JB should be even faster after all the "Project Butter" work. What is slow for you on JB?
jake21 said:
Ok, I can understand the issue with fsync and small writes, but then all tablets would have this issue (unless the Infinity used a particularly poor choice of hardware). Also, does this indicate that if writes were disabled in (for example) browsers, they would be silky smooth?
Many people said the TF700's eMMC is slower than good microSD cards, and that can be reproduced with benchmarks.
To check how the browser performs if it doesn't have to write to the eMMC, install Browser2RAM, which moves the browser cache to a ramdisk. In my experience, there is still lag on some pages - so not all slowdowns seem to be I/O-related. It would be interesting to find out the real cause of this.
There may be another I/O situation except random writes: large writes which block small reads from another process. HPI should help here, but I think the 3.1 kernel doesn't support it yet.
_that said:
I didn't see a big performance difference between ICS and JB, even though JB should be even faster after all the "Project Butter" work. What is slow for you on JB?
Many people said the TF700's eMMC is slower than good microSD cards, and that can be reproduced with benchmarks.
To check how the browser performs if it doesn't have to write to the eMMC, install Browser2RAM, which moves the browser cache to a ramdisk. In my experience, there is still lag on some pages - so not all slowdowns seem to be I/O-related. It would be interesting to find out the real cause of this.
There may be another I/O situation except random writes: large writes which block small reads from another process. HPI should help here, but I think the 3.1 kernel doesn't support it yet.
Agreed. However, even with Browser2RAM, I'm betting that there is still some I/O with the eMMC that cannot be hijacked by Browser2RAM, and therein lies the problem. If nothing else, using RAM like that may force the tablet (b/c of screwy coding) to start paging data to... yup, you guessed it, eMMC... a lot sooner than it actually needs to.
Has anyone tried B2R along with dev settings to kill apps ASAP that are not in use? Perhaps this could lengthen the time before paging starts to occur?
Sent from my ASUS Transformer Infinity TF700 running Android JB (rooted) via Tapatalk
Overall, everything runs smooth on ICS 4.0.3: browser, opening apps, games; there are no hiccups when watching movies, which happen a lot on JB 4.1.1. The whole OS is smooth redrawing. On JB, there is always a wait of a few seconds to open application folders or close them. I did reformat to defaults, but it didn't help.
johnlgalt said:
Agreed. However, even with Browser2RAM, I'm betting that there is still some I/O with the eMMC that cannot be hijacked by Browser2RAM, and therein lies the problem.
Not all I/O automatically leads to problems. The latest version of Browser2RAM only redirects the browser cache, it does not affect browser settings, bookmarks, etc. - which is usually a good thing.
johnlgalt said:
If nothing else, using RAM like that may force the tablet (b/c of screwy coding) to start paging data to... yup, you guessed it, eMMC... a lot sooner than it actually needs to.
No. Paging does not occur *to* the eMMC (no swap space is configured on the TF700), but only *from* the eMMC, to fetch pages of executable files. While it is true that the ramdisk for the cache uses some memory, it would only make a difference if you have lots of background apps competing for RAM.
A first step to see how much I/O happens is to watch the output of "iostat".
Thanks for the heads up. So, why does it still cause pauses and the like then?
Sent from my ASUS Transformer Infinity TF700 running Android JB (rooted) via Tapatalk
johnlgalt said:
So, why does it still cause pauses and the like then?
Good question. It's time to find out.
Arm yourself with multiple adb shells and watch the output of iostat, top, free, or whatever else you can think of that displays interesting metrics. Then do something that causes lag and see if you notice a specific pattern.
(I am currently away from my main PC, and the SSD in my laptop decided yesterday it no longer wants to read ntkrnlpa.exe - so no adb for me right now)
I'll need a bit more specifics - I know adb well enough and can shell, but these other ... executables you're mentioning are new to me.
I'm on vacation in Hawaii, and have a Windows 7 based laptop that I can use, so I can do this no problem - but not today. About to go see some sights before going on a Lava Boat tour at 4 PM local, which means I'll be bushed when I get back - plus I'm fighting a nasty ear infection that aches something awful.
AFAIK, though, I have no real plans for tomorrow or Friday, so I can take some time and investigate.
Also, FWIW: I'm rooted but have not (yet) unlocked my bootloader - mainly b/c I purchased the 64 GB version of the tablet and it is a C50, so I'm hoping something 'breaks' enough for me to get a replacement (c70? C90 even? )- and hoping even more that it is running something under ICS .30 so I can nvflash a backup and not have to worry about goofing things up when I *do* unlock the bootloader.
Sent from my ASUS Transformer Infinity TF700 running Android JB (rooted) via Tapatalk
iostat and top are standard Unix utilities. They would only be useful if run on the phone, so I must presume Android has versions. A bit of 'googling' and there appears to be a developer's kit that includes stuff like iostat. If the switches are the norm, then something like "iostat -x 2" will produce nice output of the performance of each 'disk'. top is a tool that shows CPU usage.
-
I've not done any developing for Android (maybe I should bite the bullet?) so I have never tried to use adb or similar, but I've done a bit of system development on Linux (though I very rarely muck with the kernel; I prefer to work one layer above the kernel).
johnlgalt said:
I'll need a bit more specifics - I know adb well enough and can shell, but these other ... executables you're mentioning are new to me.
Click to expand...
Click to collapse
If you haven't done it yet, install BusyBox on your TF700. Then just open one or more command windows on your PC, run adb shell in each, and run "iostat 1" in one window, "top" in another, and maybe also adb logcat in yet another window.
That gives you up-to-date statistics about I/O and processes which currently use CPU time. Then try to use your tablet normally, and when it lags, watch the output on your PC if you see a big number of writes or a process eating CPU.
But don't forget to enjoy your vacation.
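The setup above can be sketched as a few shell one-liners. The helper below just composes the adb invocations to run (adb_cmd is a hypothetical name, and it assumes busybox provides iostat and top on the tablet, as suggested above):

```shell
# Hypothetical helper that builds the adb invocation for an on-device command.
# Run each resulting command in its own terminal window on the PC.
adb_cmd() {
  echo "adb shell $1"
}

adb_cmd "iostat 1"   # window 1: block-device I/O stats, refreshed every second
adb_cmd "top"        # window 2: per-process CPU usage
echo "adb logcat"    # window 3: the system log, watched for errors during lag
```

When the tablet lags, the window showing a burst of writes or a process pinning the CPU is the one to look at.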
_that said:
If you haven't done it yet, install BusyBox on your TF700. Then just open one or more command windows on your PC, run adb shell in each, and run "iostat 1" in one window, "top" in another, and maybe also adb logcat in yet another window.
That gives you up-to-date statistics about I/O and processes which currently use CPU time. Then try to use your tablet normally, and when it lags, watch the output on your PC if you see a big number of writes or a process eating CPU.
But don't forget to enjoy your vacation.
Click to expand...
Click to collapse
Awesome. Busybox already installed here, so this should be easy enough.
And I never forget to enjoy ... anything. lol
Sent from my ASUS Transformer Infinity TF700 running Android JB (rooted) via Tapatalk
_that said:
Good question. It's time to find out.
Arm yourself with multiple adb shells and watch the output of iostat, top, free, or whatever else you can think of that displays interesting metrics. Then do something that causes lag and see if you notice a specific pattern.
(I am currently away from my main PC, and the SSD in my laptop decided yesterday it no longer wants to read ntkrnlpa.exe - so no adb for me right now)
Click to expand...
Click to collapse
hey fellas, have you seen chainfire's app: PerfMon
http://www.xda-developers.com/android/perfmon-floats-your-devices-performance-on-screen/
real time stats including io reads/writes to both mmcblk0/mmcblk1... :good:
ps: i have always loved "Android Status" as well ... oldie but a goody
https://play.google.com/store/apps/details?id=com.AndroidStatus&hl=en
Important Documents for Kernel Developers
Project Brickbug by @Lanchon
http://forum.xda-developers.com/and...roject-brickbug-aftermath-recovering-t2823051
As you may know, our devices have been affected by the eMMC 'superbrick' bug since 2012, as reported by the CM team. Samsung's report included a patch, but that patch disables secure erase and TRIM completely (even non-secure erase and TRIM were removed/disabled). The eMMC chip affected on our devices is the (VAL00M) MoviNAND. The patch avoids the brick, but it causes one problem: the TRIM function is broken.
CM report and workaround:
http://wiki.cyanogenmod.org/w/EMMC_Bugs
TRIM documentation (Wikipedia):
https://en.wikipedia.org/wiki/Trim_(computing)
(This was reported nearly 2 years ago, but nobody has been concerned enough, or even intended, to do something to bring TRIM back.)
Why is TRIM important? What happens without it?
TRIM is not a new technology in SSDs, flash memory, or NAND flash chips. Without TRIM our phones will not lose any data, but you should expect your device to get slower day by day. The system cannot tell the SSD/eMMC controller that blocks have been cleared, so the controller doesn't know those blocks are free either. That increases lookup time and forces the controller to shuffle sectors around unnecessarily.
How are SSD/eMMC sectors laid out, and what happens on a write?
The smallest unit of storage is 4 KB (called a page), and 128 pages make one block, so one block is 512 KB. SSD/eMMC/NAND reads and writes can be as small as 4 KB, but a wipe/erase operation must work on a whole block (512 KB) at a time. (Write = 4 KB; erase = 512 KB each time new data is written over old.)
An example makes this easier to follow: when you write a new 3 KB file to the flash, the data occupies one full 4 KB page even though it is smaller than 4 KB (4 KB = 1 page, 128 pages = 1 block = 512 KB). But if the target block already contains data, the controller cannot simply wipe one 4 KB page and write over it. It has to copy the whole 512 KB block (128 pages) into cache, erase the block, apply your 4 KB change in the cache, and write the block back. The more writes there are, the more of this the controller does, and performance drops because data is constantly being moved between cache and block.
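The numbers above can be checked with a bit of shell arithmetic. This is just the worst case implied by the quoted page/block sizes, not a measurement:

```shell
# Sizes quoted in the post: 4 KB pages, 128 pages per erase block.
PAGE_KB=4
PAGES_PER_BLOCK=128
BLOCK_KB=$((PAGE_KB * PAGES_PER_BLOCK))
echo "erase block size: ${BLOCK_KB} KB"       # 512 KB

# Overwriting a single 4 KB page without TRIM can force a read-modify-write
# of the whole block, i.e. up to this many times more data moved than asked for:
echo "worst-case write amplification: $((BLOCK_KB / PAGE_KB))x"   # 128x
```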
How does TRIM solve this?
1. TRIM works like a maid for the SSD/NAND/eMMC controller: when you delete files, it marks the affected blocks as needing a wipe before reuse, and the controller performs the erase in the background while the system is idle.
2. TRIM also works like a garbage collector: it tells the controller which blocks can be wiped, so each block is managed more effectively (no unrelated or random blocks have to be moved on every new write) and less time is spent on block erases.
Why can't we TRIM?
As I mentioned in the first paragraph, because of the superbrick bug the Samsung/CM workaround disables the erase function completely in the kernel (remember, our eMMC supports TRIM!). To bring TRIM back, the erase function has to be re-enabled, because TRIM requires a working erase path. Even though secure erase is buggy, non-secure erase could still be used for TRIM.
Does that mean there is no way to bring TRIM back?
No. Luckily, some XDA members already know about this and are working on a fix: Project Brickbug.
http://forum.xda-developers.com/and...roject-brickbug-aftermath-recovering-t2823051
I have already added fstrim to my kernel and even to the ROM. Does that mean it works? No: as I mentioned, the erase function is still disabled (and nobody has intended to enable it), so the TRIM function cannot be used.
This has nothing to do with your Android version. Some people say that on 4.3 or higher LagFix cannot run and fails with an 'operation not supported' error on their device. That is misleading: starting with 4.3, Google merely added automatic TRIM, which calls fstrim while the device is idle. If the kernel has the required feature disabled, no binary can make TRIM work, on any Android version: the function is provided by the hardware and the kernel, not by the system itself.
That means we need our SGR kernel experts to look into re-enabling it: if we want TRIM, erase has to be re-enabled. For the workaround to bring the function back, please follow Project Brickbug (the kernel needs to be modified):
http://forum.xda-developers.com/and...roject-brickbug-aftermath-recovering-t2823051
Our devices are affected by the brick bug, so is it still safe to TRIM?
Maybe. Folklore says no. For me, I will say: you should know that any action touching internal settings carries a risk of breaking your device. But even though secure erase has the bug, there is an alternative: use the non-secure TRIM and erase functions, which means redirecting those operations to their non-secure variants. If you asked me to choose between better performance with a brick risk and stability that is slower than usual, I would choose the first. Even if you stay safe from the brick risk, a phone that gets slower than a snail day by day "should not happen", and can cause more trouble than the brick risk itself. I would rather run my phone with a brick risk (but a performance gain) than keep it stable but too slow for daily use. And based on my test results, I would say bringing TRIM back is safe.
Here is my experiment, using the stock ROM and kernel, without any of the Samsung kernel patches.
Test environment: 4.0.4 (official ICS TGY-ZSLPD), just rooted, no patches.
This test has been running for more than a month, with fstrim scheduled to run every day. My phone's performance is now back to what it was when I bought it: apps install quickly, and writes are faster than you would expect.
Remarks:
1. It requires a kernel without the patch; then you can run TRIM correctly.
2. You can use LagFix, or a terminal with a busybox build that includes fstrim.
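For remark 2, a daily trim can be scripted roughly like this. A sketch only: /data is an assumed mount point, the command needs root, and on our devices it will be rejected until the kernel's erase path is re-enabled:

```shell
# run_trim: call fstrim the way Android 4.3+ does during idle maintenance.
run_trim() {
  if ! command -v fstrim >/dev/null 2>&1; then
    echo "fstrim not found (a busybox build with fstrim provides it)"
    return 0
  fi
  # -v reports how many bytes were discarded; the call fails if the kernel
  # has discard/erase disabled (the Samsung brick-bug patch) or without root.
  fstrim -v "$1" 2>/dev/null || echo "trim rejected on $1"
}

run_trim /data
```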
I'm not sure whether other SGR users would get the same result as mine (no data loss or brick), but based on my result I support bringing TRIM back to our devices as an experiment. If you are worried about the risk, don't root at all, let alone TRIM or change kernels, because those kinds of actions can cause damage too.
For anyone who is interested in bringing TRIM back to working order, please look at the link below for more info:
Project Brickbug by @Lanchon
http://forum.xda-developers.com/and...roject-brickbug-aftermath-recovering-t2823051
nice article bro... keep up d good work..
Great! Hoping for it work on SGR.
Good article buddy!
vipul12389mehta said:
nice article bro... keep up d good work..
Click to expand...
Click to collapse
Before the patch, TRIM worked. Look at the attachment: that is the result with the original stock kernel, without the Samsung/CM brick patch.
I really hope anyone with the expertise to build or modify our kernel will consider making the changes to bring TRIM back as soon as possible. I have compared performance with and without TRIM, and the difference is quite large.
TRIM can reduce the bottleneck inside the eMMC and improve write performance; as we know, all apps (including the system) don't only read, they write too. Better write performance also relieves CPU pressure, since less work is spent moving and rewriting block sectors each time.
The question is why nobody has stepped up to do this good thing and bring TRIM back in the almost 2 years since it was reported.
@Adam77Root @Grarak
Please have a look and consider bringing TRIM back in our kernel; it may improve performance. I hope this article helps you with porting it. Although I'm not a developer, I must say we all appreciate your work in giving our phone a longer support life and further development, even though Samsung has left us behind for their premium phone series.
Alternative...
Hi
I am not sure if this counts as a workaround, but let me offer it (it is based on the old concept of erasing everything and putting it back):
1. Take a nandroid backup.
2. Erase the system/data/cache partitions.
3. Restore the nandroid backup.
This is cumbersome and means longer downtime.
Edit: This is not my original idea; there is a thread suggesting it on XDA (I can't recall which one now). Credit should go to that poster.
aaa839, if I have read the Project Brickbug thread correctly, there is no compiled kernel available for the SGR, right (or are kernels device-independent)?
Yes, it requires us to D.I.Y. So I have decided to build a ROM and kernel that include this patch for our devices. I will base them on @Adam77Root's and @Grarak's Nvidia L4T source code, with @karithik's Omni 4.4 source, but I need more help... I will try my best to build them with a working-TRIM kernel.
Ideally I'd like to actually be able to put an arbitrary OS on my tablet. Has anyone figured out how to do this with the Fire HD 8, 8th generation, yet? I'm willing to use soldering to do so. If this is not possible with this hardware, does anyone have a recommendation about what tablet options I have, to do this?
dc123123 said:
Ideally I'd like to actually be able to put an arbitrary OS on my tablet. Has anyone figured out how to do this with the Fire HD 8, 8th generation, yet? I'm willing to use soldering to do so. If this is not possible with this hardware, does anyone have a recommendation about what tablet options I have, to do this?
Click to expand...
Click to collapse
There is a hardmod to secure root but no custom roms. Consider a 3rd gen HDX if seeking a selection of Nougat based custom ROMs.
Davey126 said:
There is a hardmod to secure root but no custom roms. Consider a 3rd gen HDX if seeking a selection of Nougat based custom ROMs.
Click to expand...
Click to collapse
It shouldn't be possible to modify /system on the 8th generation as it ships with dm-verity enabled.
xyz` said:
It shouldn't be possible to modify /system on the 8th generation as it ships with dm-verity enabled.
Click to expand...
Click to collapse
Ok, interesting, but how are software updates possible on devices with dm-verity? Doesn't that change the file/partitions to the extent that they need a new key for it?
Essentially the firmware update includes a hash of the /system partition (a tree of hashes) signed with Amazon's private key. The boot image includes the public key so that the device can verify the /system integrity. The rest of the boot process already ensures that the boot image isn't tampered with. So to update they just need to generate another /system, generate another hash tree for it and sign it with the private key.
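The signed-hash-tree idea can be illustrated with plain shell tools. This is a toy, one-level version (real dm-verity uses 4 KB blocks and a multi-level SHA-256 tree over the whole partition), and the file names are made up:

```shell
# Stand-in for the /system image.
printf 'hello system partition' > system.img

# Split it into small "blocks" and hash each one.
split -b 8 system.img blk_
sha256sum blk_* | awk '{print $1}' > hashes.txt

# The root hash covers all the block hashes; signing just this one value
# (with the vendor's private key) protects every block beneath it.
ROOT=$(sha256sum hashes.txt | awk '{print $1}')
echo "root hash: $ROOT"
```

Verification recomputes the same tree on the device and compares it to the signed root hash; flipping a single byte in any block changes that block's hash, which changes the root hash, so the signature no longer matches.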
xyz` said:
Essentially the firmware update includes a hash of the /system partition (a tree of hashes) signed with Amazon's private key. The boot image includes the public key so that the device can verify the /system integrity. The rest of the boot process already ensures that the boot image isn't tampered with. So to update they just need to generate another /system, generate another hash tree for it and sign it with the private key.
Click to expand...
Click to collapse
https://source.android.com/security/verifiedboot/dm-verity
Yeah this seems to indicate that the more of that partition that is accessed during the runtime, the more time and power are invested into verification during use. Lame.
Anyways, thank you very much, guys, for answering the questions.
I'm also interested in knowing why it isn't possible to just redo the system partition but modify it to match the hash... but I'll wiki that to find out why it doesn't work (SHA-256). I'm guessing that this isn't as simple as a checksum. Even if it meant making another file system to do it... I guess the other option is getting the private key.
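On the "modify it to match the hash" question: the catch is that the root hash is signed, so matching a hash isn't enough — you would need the vendor's private key to sign your new root hash. A small openssl sketch (all key and file names are made-up stand-ins, not Amazon's actual scheme):

```shell
# Stand-in vendor keypair (the real private key never leaves the vendor).
openssl genpkey -algorithm RSA -out priv.pem 2>/dev/null
openssl pkey -in priv.pem -pubout -out pub.pem 2>/dev/null

# "OTA build" side: sign the root hash with the private key.
echo "root-hash-of-new-system" > root.txt
openssl dgst -sha256 -sign priv.pem -out root.sig root.txt

# "Device boot" side: verify with the public key baked into the boot image.
openssl dgst -sha256 -verify pub.pem -signature root.sig root.txt
```

A modified /system with a freshly recomputed hash tree still fails this check, because you can't produce root.sig without priv.pem.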
W.R.T. building a tablet that doesn't suck, the project that I saw was basically someone who slightly disassembled an SBC, so that they didn't have to design a mobo and find a chip for it. That makes sense, but the SBC that I looked at (Orange Pi Zero) doesn't actually seem to use all the options/connectivity provided in the datasheet of the SoC. For instance, the SoC is capable of HDMI but there's no HDMI port on the board, etc. Also, the project I read about ended up getting very expensive.
Another thing from that site:
"A public key is included on the boot partition, which must be verified externally by the device manufacturer."
What do they mean by verified externally? Like, I know with my tablet I needed to connect to the internet at least once to do anything with it.
This is the part that makes me think that we can mess with it:
" To mitigate this risk, most manufacturers verify the kernel using a key burned into the device. That key is not changeable once the device leaves the factory."
Also, I want to translate this into noob:
" And since reading the block is such an expensive operation, the latency introduced by this block-level verification is comparatively nominal."
Does that mean that the time spent reading the hash is small because there's a lot of calculation involved, such that the transmit time is small compared to the total time, or that the transmission of the next block info can happen during the calculation of the prior block info?
dc123123 said:
Another thing from that site:
"A public key is included on the boot partition, which must be verified externally by the device manufacturer."
What do they mean by verified externally? Like, I know with my tablet I needed to connect to the internet at least once to do anything with it.
Click to expand...
Click to collapse
It means that the boot partition (the boot.img) is verified by the bootloader, in our case that would be LK.
dc123123 said:
Also, I want to translate this into noob:
" And since reading the block is such an expensive operation, the latency introduced by this block-level verification is comparatively nominal."
Does that mean that the time spent reading the hash is small because there's a lot of calculation involved, such that the transmit time is small compared to the total time, or that the transmission of the next block info can happen during the calculation of the prior block info?
Click to expand...
Click to collapse
They're saying that there's no performance hit when enabling dm-verity because reading the actual data from the emmc is so slow you won't notice the additional slowdown introduced by hashing and verifying the block data.