How can I enable admin rights to lockscreen? - Android Software/Hacking General [Developers Only]

Hi,
I'm trying to write a volume toggle app so that I can reference it from the lockscreen to mute/unmute my Vision.
I've followed the Android developer guides to "Hello World" etc. and have successfully created an app that switches between muted and unmuted sound levels. But...
... when I use the app, the lockscreen is disabled, so then I have to re-lock. What I'd like to do is use the app to switch the volume and then return to the lockscreen. From what I've read so far you need to use the DevicePolicyManager API, and that's where it gets complicated. I've looked at the samples, but I don't need a whole visual-based app. I need to enable device admin rights for the force-lock policy and store that somehow. I also think that this is done in the onCreate() method, and that my toggle would then take place in onResume(), but this is where my knowledge/experience breaks down. Can anyone point me towards any other examples of:
1. How to use the DPM to enable an admin policy for an app
2. How to identify if current screen state is locked or unlocked
Many thanks
Sent from my HTC Vision using XDA
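For what it's worth, a minimal sketch of the device-admin route looks like the following. This is a hedged example, not from the post: the receiver class name VolumeToggleAdmin is made up, and the manifest must additionally declare the receiver with the BIND_DEVICE_ADMIN permission and a device-admin policies XML that lists the force-lock policy.

```java
// Minimal admin receiver; declared in the manifest with a <meta-data>
// entry pointing at a policies XML containing <force-lock />.
public class VolumeToggleAdmin extends DeviceAdminReceiver {}

// In the activity's onCreate(): request admin rights once, if not granted.
DevicePolicyManager dpm =
        (DevicePolicyManager) getSystemService(Context.DEVICE_POLICY_SERVICE);
ComponentName admin = new ComponentName(this, VolumeToggleAdmin.class);
if (!dpm.isAdminActive(admin)) {
    Intent request = new Intent(DevicePolicyManager.ACTION_ADD_DEVICE_ADMIN);
    request.putExtra(DevicePolicyManager.EXTRA_DEVICE_ADMIN, admin);
    startActivity(request);
}

// Question 2: check whether the keyguard is currently showing.
KeyguardManager km =
        (KeyguardManager) getSystemService(Context.KEYGUARD_SERVICE);
boolean locked = km.inKeyguardRestrictedInputMode();

// Question 1: after toggling the volume, use the force-lock policy to re-lock.
if (dpm.isAdminActive(admin)) {
    dpm.lockNow();
}
```

lockNow() only works once the user has approved the admin request, so the isAdminActive() check guards the call.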

Related

[APP] Automagic

For those not aware of automagic:
Automagic is an automation application that uses a flow-chart interface to set up the automations you want. It has a few pre-programmed flows (mostly hidden in the 'catalog'), but mainly your imagination is the limit. So my reasoning for this post is to collect more flow charts and ideas, share them with others and hopefully inspire more.
Link
Automagic for Gleeo
Automagic premium
Automagic for Gleeo is in principle for their time-managing app Gleeo, but it can be used for a number of different tasks as well.
Automagic Premium is their paid app. I know that's commercial, but it only features more actions/triggers/... than the free version, which is already pretty versatile on its own.
Flow charts
Flow charts typically consist of:
- a trigger (rectangle box with rounded corners)
- a condition (diamond box)
- an action (rectangle box)
My flow charts
- basic airplane mode by clock (see catalog by pressing the menu button)
- a basic wifi connected > disable mobile data // wifi disconnected > enable mobile data
- 2 NFC triggered flows (but they could be altered to location/time)
This app looks interesting. Love the flow chart design.
I downloaded the lite version, but there seem to be very few things I can do with it.
I don't mind paying for the pro version, but I would like to test the task that I wish to set up first, as I've already bought AutomateIt and Tasker (too complicated for my usage). AutomateIt does not have some actions that I wish to set up, e.g. Wi-Fi tethering.
Is there a trial app? Or do you happen to have an Xperia S, and can you confirm whether the following actions work:
1) toggling Wi-Fi tethering on/off (without intervention)
2) toggling GPS on/off (without intervention)
3) toggling NFC on/off (without intervention)
This application shows a lot of potential. I bought the pro version and will play around with it as soon as I have some time. In the meantime, I downloaded and activated and313's Proximity Screen On/Off flow. It works pretty well except for a weird admin problem message when trying to turn off the screen.
Overall, a good program with a lot of potential. Well done guys :good:
Edit: Enabling admin privileges for Automagic from settings solves the issue.
Been using this as a replacement for Tasker and have been very happy with it. The flowchart style profiles and logging options make creating complex profiles much easier. The other thing I like is that many options like NFC or lockscreen toggling are built in and it doesn't need all the extra addins/plugins that I needed to get things done with Tasker.
Kinda surprised this isn't popular. Any reason why? Seems like it's a very well made app.
utter! + Automagic + permanent voice recognition
Hi, good day all.
I'm trying to use the command:2 (invoke permanent voice recognition) intent via Automagic instead of Tasker. I much prefer Automagic and will not turn away from it. (I'm a novice regarding programming languages, if even that.)
I'll explain what I have tried so far, hopefully someone can lend a helping hand.
In Automagic I have tried "Start Activity" and "Send Broadcast" Action Types.
I'm making some progress with the "Start Activity" Action where my...
-ACTION TYPE = "Start Activity"
-ACTION = "android.intent.action.SEND"
-Explicit Component Checked...
PACKAGE NAME = "com.brandall.nutter"
CLASS NAME = "com.brandall.nutter.EIH"
EXTRAS =
"putString(command, 2);
putString(password, mypassword123)"
The ACTION results in a toast "utter! password failure for external application", it looks like the command is registering, but the password is not.
My utter! POWER USER PASSWORD = "mypassword123" literally, to make things simple... that password worked for me when invoking command:2 in Tasker. I've tried different alphanumeric passwords in utter! as well.
In other words, I'm trying to set the data extras in Automagic to "command:2" and "password:mypassword123", but Automagic forces ";" and "putString" pre-statements, if that makes any sense.
It looks like Automagic has all the intent function, variable, and snippet options, i.e. putString, putBoolean, putFloat, putDouble, etc.
How can I get Automagic to activate the permanent voice recognition function in utter!?
It would be great to just have some kind of activity shortcut to turn voice recognition on/off instead of this, or just have it start on phone boot. But I'm definitely not complaining, as utter! is free and very good! Thanks!
RESOLVED!
The great people at Automagic have replied to me (very quickly I might add) and have resolved my problem.
So for those of you who wish to control the permanent voice recognition of utter! via Automagic, here's the solution...
"Hi,
Your configuration almost works, only the extras field needs some small adjustments:
-ACTION: doesn't matter
-Explicit Component: Checked
-PACKAGE NAME: com.brandall.nutter
-CLASS NAME: com.brandall.nutter.EIH
-EXTRAS:
putInt("command", 4);
putString("password", "test");
It seems that utter uses an int for the command and not a string so you have to use the putInt function in this case.
The first parameter of the putInt and putString function needs to be a string, therefore the key has to be put into double quotes otherwise it would refer to a variable with this name.
Please also see the help page of action 'Script' for a description of the scripting language:
http://automagic4android.com/en/help/components#action_script
Utter also offers a plugin which can be used in Automagic with the action 'Plugin' but I'm not sure if utter allows to control the same functions using the plugin.
Automagic ROCKS!:good:
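The int-versus-string distinction the developer describes can be illustrated in plain Java. The ExtrasDemo class below is a hypothetical stand-in that mimics how Android's Bundle returns typed extras (it is not part of Automagic or utter!): a value stored as a String cannot be read back as an int.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Android's Bundle: values are stored per key,
// and reading a key back as the wrong type yields the default (0 for ints).
public class ExtrasDemo {
    private final Map<String, Object> extras = new HashMap<>();

    public void putInt(String key, int value) { extras.put(key, value); }
    public void putString(String key, String value) { extras.put(key, value); }

    // Mirrors Bundle.getInt(key): default 0 when no int is stored under key.
    public int getInt(String key) {
        Object v = extras.get(key);
        return (v instanceof Integer) ? (Integer) v : 0;
    }

    public static void main(String[] args) {
        ExtrasDemo extras = new ExtrasDemo();
        // Storing the command as a String, as in the failing configuration:
        extras.putString("command", "2");
        System.out.println(extras.getInt("command")); // prints 0, not 2
        // Storing it as an int, as the developer recommends:
        extras.putInt("command", 2);
        System.out.println(extras.getInt("command")); // prints 2
    }
}
```

In the original configuration the keys were also unquoted, so Automagic's script engine treated them as variables rather than string keys, which compounds the type problem.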
Unknown cell IDs
In Llama, there's a trigger that invokes an action when you are connected to unknown cell IDs. I wonder if it's possible to do the same in Automagic. Any ideas?
Hi,
please add a new action > remove / disable alarm
edit: does a calendar event from Google Calendar also work? edit 3: tested, it's working
edit 2: the location-based condition needs a "current location" button
cheers
starbase64
starbase64 said:
Hi,
please add a new action > remove / disable alarm
edit: does a calendar event from Google Calendar also work? edit 3: tested, it's working
edit 2: the location-based condition needs a "current location" button
cheers
starbase64
Try posting your requests on the automagic site. I don't think the developer has anything to do with this thread.
http://automagic4android.com/en/
simonwil said:
Try posting your requests on the automagic site. I don't think the developer has anything to do with this thread.
http://automagic4android.com/en/
Yes, I forgot to mention I'm not the developer,
Just made this post to form a basis to share flows with other xda users.
For official news/posts/questions/... use the links provided in the post.
Firehead said:
Kinda surprised this isn't popular. Any reason why? Seems like it's a very well made app.
I've been wondering that as well. This is a truly amazing app, and automating my phone has never been this fun. Automagic is compatible with all the Tasker plugins, and I have yet to find anything I can't do with this app that I could do with Tasker.
Here is part of my battery saving flow, enabling data traffic for a short while four times per hour while the screen is off (except for during nights and when battery is low), at the same time making sure not to turn off data if there is any ongoing traffic.
I could do the same in Tasker, but I find it so much easier to do when you can see the whole automation flow like this. Also, it looks impressive
malexin said:
I've been wondering that as well. This is a truly amazing app, and automating my phone has never been this fun. Automagic is compatible with all the Tasker plugins, and I have yet to find anything I can't do with this app that I could do with Tasker.
Here is part of my battery saving flow, enabling data traffic for a short while four times per hour while the screen is off (except for during nights and when battery is low), at the same time making sure not to turn off data if there is any ongoing traffic.
I could do the same in Tasker, but I find it so much easier to do when you can see the whole automation flow like this. Also, it looks impressive
How is this profile working for you? Would you mind sharing it?
steelersmb said:
How is this profile working for you? Would you mind sharing it?
It's working very well, and my battery lasts noticeably longer. Previously I was using a separate app for this, but making my own flow was more fun, and works just as well.
Here are the flows for you (one for when turning the screen off and one for when turning it on).
malexin said:
It's working very well, and my battery lasts noticeably longer. Previously I was using a separate app for this, but making my own flow was more fun, and works just as well.
Here are the flows for you (one for when turning the screen off and one for when turning it on).
Thanks, bro, but how do I import this .xml file into the app?
Firehead said:
Kinda surprised this isn't popular. Any reason why? Seems like it's a very well made app.
I think it's because he doesn't use advertising to promote his app.
Importing flow as XML file
fuone said:
Thanks, bro, but how do I import this .xml file into the app?
Hi,
I also tried to import XML files into Automagic Premium (AP).
But within the app you only get some unsuitable apps to choose from for opening the import file. (Look at: Menu => Manage => Import Flows/Widgets)
So here is how it works for me:
- search for the file with any file browser
- try to open it and select AP for this action
- done
The next time you open AP you'll find the flow in the "Ungrouped" folder, or in the folder defined by the programmer.
Hope this can help.
Hi,
press the 3-dot button to open the quick menu for this flow and then press send; now you can attach the XML file here
regards
starbase64
This is the best automation app I have ever used; I've tried Tasker and Llama, but this app is so powerful and so much fun. I love it, and it's so easy to use. Recommended for everyone.
I know this is an old thread, but I just found out about it and was wondering if anyone had any amazing flows they'd like to share?

[XPosed Module][Dev]Add commands to audiomonitor [CLOSED]

Hi there,
I started a project to create an Xposed module so we can add different commands directly into the audio monitor ("OK Google Now").
I decompiled AudioMonitor.apk, and I need help finding the methods that recognize the voice and activate the commands (like navigate, call, etc.).
If someone can help me, I have uploaded the decompiled version of the apk to GitHub.
https://github.com/caioketo/AudioMonitorDec
So if anyone wants to help, I will appreciate it.
I'm also creating the source of this module on GitHub, so it stays open source.
So this is it for now.
[EDIT]
So I found this class: GenericRecognizer (https://github.com/caioketo/AudioMo.../motorola/audiomonitor/GenericRecognizer.java)
which has a method called "processRecognizedAction", so I think I'll try that out when I get home.
[CLOSED]
Unfortunately, it does not seem possible to get both things working together, "Moto commands" and "AutoVoice", and for this reason I'll stop developing this. Since losing the Moto commands is not an option for me, and AutoVoice doesn't recognize from a recorded audio file, it seems pointless to put more effort into this.
Very cool. Thanks for looking into this. Hopefully it turns out well.
great!
This will be great! could potentially add loads of extra value to touchless control. I knew rooting was worth it
If you haven't seen it already, you could try to use the Google Now/Search API: http://forum.xda-developers.com/showthread.php?t=2554173
It's also open source, so you could try to see how it works if you wanted to work on your own module instead of working with the API.
Also, it costs money, but you can use AutoVoice + the Now API + Tasker to do just this.
iSecks said:
If you haven't seen it already, you could try to use the Google Now/Search API: http://forum.xda-developers.com/showthread.php?t=2554173
It's also open source, so you could try to see how it works if you wanted to work on your own module instead of working with the API.
Also, it costs money, but you can use AutoVoice + the Now API + Tasker to do just this.
I have AutoVoice + Now API + Tasker already, but the problem is that the Now API needs a connection, and it takes too long for Touchless Control to pass the parameter to Google Now and for Google Now to process it.
If it were handled inside Touchless Control itself, it would cut the time by something like 10 s. I think it's worth the work.
caioketo said:
I have AutoVoice + Now API + Tasker already, but the problem is that the Now API needs a connection, and it takes too long for Touchless Control to pass the parameter to Google Now and for Google Now to process it.
If it were handled inside Touchless Control itself, it would cut the time by something like 10 s. I think it's worth the work.
I have the same setup and I agree with you.
Have you made any progress on this so far?
Charlie.igg said:
I have the same setup and I agree with you.
Have you made any progress on this so far?
Yeah, I tested it with that method, and every command fires it, so I'll try to log all the parameters to see where the recognized text is,
and try to exit without calling Google Now.
If I can get it done, I'll start building something to customize commands. I want to go with Tasker, but I've never built a Tasker plugin, so if someone can help me, it could go faster.
I think that is the wrong method, as all its parameters are useless.
I'm trying to find another method to hook.
The problem is that with a decompiled version it's very hard to find anything.
Good news, everyone: I found a class with a method that is called every time it recognizes something, with the words and commands in a JSONObject.
https://github.com/caioketo/AudioMonitorDec/blob/master/src/f/b.java
this is the class, and the JSONObject parameter has the words and everything else. I will study it more; if I confirm that we can work with this method, we will be one step forward.
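For context, a hook on that kind of method would typically be set up with the Xposed API along these lines. This is a hypothetical sketch: the obfuscated class name "f.b" is taken from the decompile above, but the method name handleResult and its JSONObject-only signature are placeholders that would have to be read off the decompiled class first.

```java
import org.json.JSONObject;

import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.XposedBridge;
import de.robv.android.xposed.XposedHelpers;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class AudioMonitorHook implements IXposedHookLoadPackage {
    @Override
    public void handleLoadPackage(LoadPackageParam lpparam) throws Throwable {
        // Only touch the Touchless Control / audio monitor process.
        if (!"com.motorola.audiomonitor".equals(lpparam.packageName)) return;

        // "f.b" is the obfuscated class from the decompile; "handleResult"
        // is a placeholder for the method that receives the JSONObject.
        XposedHelpers.findAndHookMethod("f.b", lpparam.classLoader,
                "handleResult", JSONObject.class, new XC_MethodHook() {
            @Override
            protected void afterHookedMethod(MethodHookParam param) {
                // Log the recognized words/commands for inspection.
                JSONObject result = (JSONObject) param.args[0];
                XposedBridge.log("AudioMonitor recognized: " + result);
            }
        });
    }
}
```

Logging from afterHookedMethod is usually the first step; once the payload is understood, the hook can suppress or rewrite the call.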
Not so good news: that class gets a JSON for everything you say IF it is a command; if you say "call", it will return correctly.
If it's not a command (a Google search), it just sends "GARBAGE" and ignores it. I found another class that calls Google Search, but it passes the raw audio.
It doesn't recognize anything that isn't a command; it just calls:
Code:
uri = Uri.parse((new StringBuilder()).append("content://com.motorola.audiomonitor.raw_audio/command_audio/").append(s1).toString());
intent = new Intent("com.google.android.googlequicksearchbox.VOICE_SEARCH_RECORDED_AUDIO");
intent.setData(uri);
intent.addFlags(0x10000000);
So it passes an audio file to the Google Search process, which means we will need some audio recognition library to help us here.
So I got it to hook the correct methods, getting the correct variables and results, and also the raw audio file that it sends to Google Search.
Now I need to process this file with some recognition library, and since I have never worked with one, I don't know where to start.
If someone could help me here, it would be a lot easier.
Yes, help this OP out! This could become one of those 'must-have' mods for any Moto X (or Android) user!
I sure wish I knew more about this so I could help. It looks like they are using Dragon speech recognition.
I really think the better (and easier) path would be to simply activate the native google now/voice search activity on hotword detection. (As opposed to processing and passing the audio like audio monitor does now)
This way you can take advantage of the now api that others are actively extending. The dragon stuff in audiomon is proprietary and limited to the X8 devices.
deficitism said:
I really think the better (and easier) path would be to simply activate the native google now/voice search activity on hotword detection. (As opposed to processing and passing the audio like audio monitor does now)
This way you can take advantage of the now api that others are actively extending. The dragon stuff in audiomon is proprietary and limited to the X8 devices.
The problem is that launching the Google Now activity takes too long.
I'm discussing it in the AutoVoice Google+ community:
https://plus.google.com/104794905921909929735/posts/Gisu3ccFbYy
I think the solution we are converging on is to hand the microphone to AutoVoice instead of Touchless Control the moment you say "OK Google Now", and then everything would be handled in Tasker.
The only possible way right now is to grab the mic after the "PING" sound and send the audio to AutoVoice.
But this would probably break the Motorola commands, or we will need to find a way around that.
What about have autovoice background listening start when touchless control is the foreground app? Then autovoice could intercept commands and kill touchless control when it caught one, but if not, all touchless control commands are still usable
Sent from my XT1060 using Tapatalk
weldawadyathink said:
What about have autovoice background listening start when touchless control is the foreground app? Then autovoice could intercept commands and kill touchless control when it caught one, but if not, all touchless control commands are still usable
Sent from my XT1060 using Tapatalk
It's not possible, since Touchless Control locks the microphone; AutoVoice would hear nothing.
weldawadyathink said:
What about have autovoice background listening start when touchless control is the foreground app? Then autovoice could intercept commands and kill touchless control when it caught one, but if not, all touchless control commands are still usable
Sent from my XT1060 using Tapatalk
If you wanted to go that route, you might as well just use tasker to kill touchless control and launch autovoice when touchless control comes to the foreground

[APP] Betterish Touchless Controls

I love the fact that we can launch a voice search from anywhere, anytime.
What I don't love is the "man in the middle" approach that is used. It slows down the process, and in my opinion, looks unsightly.
So, I made the process more streamlined using the power of OkGoogleNowTriggerIntent, Secure Settings (both required), and Tasker.
Download: https://docs.google.com/file/d/0B_RTyTlBDcRWQjVBUzdwZEJIRWs/edit?usp=docslist_api
In the zip are both Taskerless (apk via Tasker App Factory) and NotTaskerless (project XML) flavors.
Obviously you lose the Touchless Control based commands ("What's up?") by using this. Also, it adds half a second of delay between the launch phrase and the phone's reaction, but gives a faster overall command. I haven't tested Assist yet, but I assume it will still work.
And again the OkGoogleNowTriggerIntent xposed module and Secure Settings are required for this to work.

[APP] [TOOL] Replay.it - Record and replay gestures and button presses (root)

HelloWorld, hope you guys are getting through the Corona crisis safely.
What started off as just a means to kill time during the lockdown back here in India eventually showed enough potential to warrant an XDA post! So here it is: a practically end-to-end Android automation tool that can record and "replay" taps, swipes and programmatic button presses without breaking a sweat.
Check out this YouTube video that demos the project and walks you through it with a simple yet cool scenario of automating sending messages on WhatsApp!
WHAT DOES IT NEED?
The whole thing comprises an Android app as well as a Java command-line program that works across platforms: Windows, macOS and Linux. It's super portable in that the app goes onto your phone and the PC-side tool just needs Java SE installed. In addition, your Android phone must have root access, USB debugging enabled and a data cable for connecting it to your PC (oh yeah, with the necessary drivers installed).
Note: If you're planning on just playing around to learn how the whole concept of recording and replaying works, you need neither the Android app nor root. Skip to the last section to know more.
WHAT CAN IT DO? - e.g. WhatsApp automation
Without sounding trite, let's understand what it does with just a simple scenario. If you were me, I'm sure you'd find it insipid to send quotidian morning wishes on WhatsApp first thing every morning. What if I tell you there's a tool that can watch you unlock your phone, open WhatsApp, open your desired chat, type out a message, send it, close the app and lock the phone, all before you even wake up? Enter Replay.it. Jargon aside, with your phone connected to your PC via a data cable, this tool can record taps, swipes, button presses and common actions (both as keycodes that you can check out here), have them sent over to your phone and have them "replayed" at a scheduled time as set in the Android app. Simply put, the PC-side tool is what watches you and records the way you accomplish a certain action (such as sending a message on WhatsApp), and the Android app is what schedules the replay. If you're in doubt: during replay it virtually looks as though someone invisible is at your phone, tapping the screen, scrolling about and typing out text.
UNDER THE HOOD - It's open source
Clearly, the app requires root, as it injects touch, motion and key events on your behalf, which isn't permitted with standard shell access. Technically, everything here is ADB, and you're free to view and contribute to the open-source codebase, which you can check out here.
FILTHY BUGS - You need to know
No piece of software is ever bug-free, and just like that, here's a little one. While taps are recorded fine, in order to log a swipe you make the swipe gesture and, once finished, you must tap somewhere on the screen. The trailing tap is what completes the swipe; if it is not done, the entire swipe just recorded is lost.
WHERE CAN I GET IT? - Download & play
Okay, less talk and more work! Check out this GitHub repo to download the latest release and get working with it. Finally, do check out the YouTube video demo and show some love!
Hope you guys find this tool useful.
Thanks and take care!

General Cover Screen Support Campaign

Samsung has done it again with the proprietary meta-data and added their own requirements on using apps on the cover screen, not unlike their silly tags for VR and multi-window.
Do you use an app that wouldn't cause any privacy concerns on the cover screen? Notify the developers.
Any app that wants to run on the cover screen can add the following tag to its AndroidManifest to enable cover screen support for a designated activity:
Code:
<meta-data android:name="com.samsung.android.activity.showWhenLocked" android:value="true"/>
They will also need to implement some minor detection of the current display. This is done to remain exclusively compatible with the cover widget and avoid displaying their activity above the standard lock screen.
Code:
mDisplayListener = object : DisplayListener {
    override fun onDisplayAdded(display: Int) {}

    override fun onDisplayChanged(display: Int) {
        if (display == 0) {
            // Main display: disable display above the lock screen
        } else {
            // Cover display: enable display above the lock screen
            // Technically irrelevant for this scenario
        }
    }

    override fun onDisplayRemoved(display: Int) {}
}
val manager = context.getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
manager.registerDisplayListener(mDisplayListener, Handler(Looper.getMainLooper()))
Display over the lock screen does not require "android.permission.DISABLE_KEYGUARD" because the lock screen remains enabled.
Code:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O_MR1) {
    setShowWhenLocked(true)
    setTurnScreenOn(true)
    val keyguardManager = getSystemService(Context.KEYGUARD_SERVICE) as KeyguardManager
    keyguardManager.requestDismissKeyguard(this, null)
} else {
    this.window.addFlags(WindowManager.LayoutParams.FLAG_DISMISS_KEYGUARD or
            WindowManager.LayoutParams.FLAG_SHOW_WHEN_LOCKED or
            WindowManager.LayoutParams.FLAG_TURN_SCREEN_ON)
}
Android O - FLAG_SHOW_WHEN_LOCKED is deprecated
stackoverflow.com
The actual code required to secure the phone would be the inverse of what is posted above, as the desired result is "normal" functionality.
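Concretely, that inverse might look like the following when the activity finds itself on the main display. This is a hedged Java sketch, not from the original post; the surrounding Activity is assumed, and the API-level check mirrors the block above.

```java
// Sketch: restore normal lock-screen behavior when on the main display.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O_MR1) {
    setShowWhenLocked(false);
    setTurnScreenOn(false);
} else {
    getWindow().clearFlags(WindowManager.LayoutParams.FLAG_SHOW_WHEN_LOCKED
            | WindowManager.LayoutParams.FLAG_TURN_SCREEN_ON);
}
```

Clearing the flags (rather than never setting them) lets the same activity serve both displays, switching behavior from the display listener callback.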
Please feel free to direct any developers that want to add support and need help implementing it to this thread. I would be more than happy to help them get it up and running in an effort to expand support for cover screen compatibility.
Samsung has also limited the apps that can appear as cover screen widgets, but it will be even harder to impose these limits with more and more apps supporting cover screen functionality. Let's get Samsung to make the cover screen a feature and not a cheap novelty.
It's frustrating, since even if they didn't want custom apps on the front, it would be so easy for Samsung to open up cover screen widgets to any app. Even if they didn't want home screen widgets appearing, they already allow third-party apps to use the cover screen widget category to add them. The settings app (well, the cover screen settings are handled by the AOD app, but from a UI perspective it seems to be part of the settings app) even supports detecting custom widgets and showing them in the list, but if it detects that it's not a system app or not Samsung Health it logs an error and doesn't show it in the list. I'll keep an eye on the AOD app and hopefully something will change, even if it's just Samsung adding another package name to the allow list for another official widget, as that would be an alternative to uninstalling SHealth.
CarudiBu said:
but if it detects that it's not a system app or not Samsung Health it logs an error and doesn't show it in the list
Click to expand...
Click to collapse
Did you happen to keep a log of the error?
twistedumbrella said:
Did you happen to keep a log of the error?
Click to expand...
Click to collapse
Error probably wasn't quite the right word. It logs 'Disallowed Tile' for every widget it finds that doesn't meet the criteria.
For any developers or users wondering what would stop someone from plugging an external display into another phone that added support, and exploiting it to get around the lock screen: Android does differentiate between hardware and virtual displays, so it is possible to limit compatibility further.
Just wanted to mention that when you connect Samsung Buds, an extra widget becomes available on the cover display to control them. Maybe it can be used instead of SHealth.
I want to help, but I am not a dev, so I will continue sharing any ideas I can come up with.
Samsung has a feature called Dual Messenger through which we can have two copies of WhatsApp installed, and people are able to install two copies of other apps too using it. Can we have two copies of SHealth the same way?
Here is the link I found on xda on that:
Add non-messenger apps to Samsung Dual Messenger?
So the creation of parallel workspaces is blocked in the Security Policy of 8.0, preventing multiple copies of the same app. However, Samsung's Dual Messenger still successfully copies apps it sees as messenger apps, and allows them to run just...
forum.xda-developers.com
tausift0 said:
Just wanted to mention that when you connect Samsung Buds, an extra widget becomes available on the cover display to control them. Maybe it can be used instead of SHealth.
That would also require having Galaxy Buds connected at least once to enable it, by the sound of it.
tausift0 said:
Just wanted to mention that when you connect Samsung Buds, an extra widget becomes available on the cover display to control them. Maybe it can be used instead of SHealth.
Unfortunately, the Galaxy Buds 2 Plugin package name can't be used to add a custom widget because the settings app checks whether the apps providing widgets are either system apps or signed with the platform key, with the exception of Samsung Health, before they are shown in the list.
The app only seems to work once. For instance, if I open YouTube and use it, then open the phone and close it again, I can tap the app buttons but nothing happens, and I have to restart the phone.
