Dear all,
My company RoamTouch has developed the GestureKit SDK, a cross-platform 2D gesture recognition tool with an online editor and a plugin distribution system. Our current version works on phones and tablets, and we believe gesture input is just as pertinent and useful for operating a watch.
I am looking for developers to port the current version of our GestureKit plugin and create an Android Wear version, extending the API to wearables. In a second stage of development we need to adapt the plugin to work as a custom input-type keyboard. To clarify, we are not aiming to build a replacement alphabet of keys, as Microsoft's Android Wear keyboard concept did so well, but command-based, customized, pre-defined gestures that trigger actions.
Why port to Wear? I believe there is a good opportunity for fast, efficient gesture input on watches. The screen is tiny, yet gestures can be performed without looking at the watch, making the interaction more productive, and the proximity of the hand makes movements and commands reliable. For these reasons, a gesture solution for watches is worth building.
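As a toy illustration of the command-gesture idea, recognition can be done by comparing a drawn stroke against stored templates (this is a generic template-matching sketch with invented names, not GestureKit's actual algorithm):

```python
# Toy 2D gesture recognition by template matching: each command
# gesture is a stored stroke, and an input stroke is assigned to
# the nearest template. Not GestureKit's algorithm, just a sketch.
import math

def mean_distance(stroke, template):
    """Mean point-to-point distance between two equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(stroke, template)) / len(stroke)

# Hypothetical command gestures, each resampled to 4 points.
TEMPLATES = {
    "swipe-right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe-up":    [(0, 0), (0, 1), (0, 2), (0, 3)],
}

def recognize(stroke):
    """Return the name of the template closest to the drawn stroke."""
    return min(TEMPLATES, key=lambda name: mean_distance(stroke, TEMPLATES[name]))

# A slightly wobbly horizontal stroke still matches "swipe-right".
print(recognize([(0, 0), (1, 0.1), (2, -0.1), (3, 0)]))  # swipe-right
```

A real recognizer would also resample, rotate and scale-normalize strokes before comparing, but the matching step is essentially this.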
If any developer is interested in the project please let me know.
Thanks very much for reading.
--
Jose Vigil
CEO & Founder
RoamTouch - GestureKit
Hi folks,
I'm designing a keyboard application for the LEAP motion gesture recognition device, which ships in about 5 weeks, and since it looks like "table mode" (LEAP functioning in a horizontal orientation) won't be available for launch, I'm modifying the basic ASETNIOP concept (a chorded keyboard) substantially to work better as a "floating" keyboard. The new concept also works well with tablets, so I've decided that it could really benefit from the simultaneous launch of an Android application for smartphones and tablets. I don't think that it would be particularly complicated to code (I've already put together javascript demos without too much trouble), but since I'm wrapped up in my LEAP application, I won't be able to work on making an Android version. I'm looking for someone with the skills to develop the Android app; you'd be keeping the lion's share of the Android revenue plus a portion of the LEAP sales.
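For readers unfamiliar with chorded input, here is a minimal sketch of the idea; the chord table below is invented for illustration and is not the real ASETNIOP mapping:

```python
# A chorded keyboard maps *combinations* of simultaneously pressed
# keys to characters, rather than one key per character.
# This toy table is NOT the real ASETNIOP layout.
CHORDS = {
    frozenset(["A"]): "a",
    frozenset(["S"]): "s",
    frozenset(["A", "S"]): "w",       # two-finger chord
    frozenset(["E", "T"]): "c",
    frozenset(["A", "S", "E"]): "x",  # three-finger chord
}

def decode(pressed):
    """Return the character for a set of simultaneously pressed keys."""
    return CHORDS.get(frozenset(pressed), "?")

print(decode(["A"]))       # a
print(decode(["S", "A"]))  # finger order doesn't matter: w
print(decode(["Q"]))       # unknown chord: ?
```

With 8 home positions there are up to 255 chords, which is why a chorded layout can cover a full alphabet with so few touch targets.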
If you think you might be interested, drop me a line via the asetniop website and I'll send an NDA over so we can discuss the details of the concept a bit more (it's substantially different from ASETNIOP; the new concept is intended as a visual rather than a touch-based solution) and you can see if you'd like to get on board.
Thanks!
Is this useful? Maybe you can contact the author and see if he is interested.
Please refer all job offers to the job board.
Thanks
If you could use gestures in your apps, in which apps would you use them?
I'm working on a project that lets apps (Android, iOS and web) use gestures with a single line of code. I want to apply this work in different areas, and I need some suggestions.
My project is a software development kit to create gesture-driven apps across all platforms using an online editor and a plugin distribution system. Whether it’s a small or big app, whether it’s on iOS, Android or web, it enables developers and designers to effortlessly integrate gesture control into their applications and to create innovative user experience.
I have built a music player with this project; if you want to see it, I can give you the link to download and try it!
So my question is: do you think this project has potential, and in which areas?
I hope that's clear.
Thanks everyone for the help!
Best Regards!
I'm new to the Tizen world of development. From what I've been hearing, Tizen is so difficult to program for that it scares off your average app developer. Although I'm not one to turn my back on a challenge, it's hard to get developers to take a serious, practical look at the realm of possibilities of uniquely capable tech like this.
I've created extensive notes and flowcharts of practical applications for the Samsung GS2. Having something like the rotating bezel and a touchscreen with two buttons ON YOUR WRIST is a device from heaven, especially if one builds tethered remote-access apps between the GS2 and a paired phone and/or tablet to control and manipulate other devices the GS2 may not be able to connect to directly. The possibilities are phenomenal.
What do developers think about the time and effort needed to produce a solid app foundation for Tizen's GS2 market? Even if it means large collaborations and setting aside the egos we developers carry from time to time, the payoff may open doors to greater engineering feats. I love being on the front lines of progress, paving the way and inspiring engineers to step out and ACT on their visions for tomorrow.
The Tizen SDK is buggy and difficult to get all components installed and playing nicely and Tizen is a little harder to code for than Android. I'm still learning the UI code and overall application structure, but slowly getting there.
I do wish more developers would see the potential market and code for it as I see a whole plethora of possibilities, but very few developers. I'm aiming to get my first app complete and to the Gear store in a month or so. I'll gladly share my experiences here for other potential developers, so they don't make the same mistakes or can learn from my experience.
I am interested in learning more about it personally. I'm bookish, but I'm motivated and I'll do everything I can to learn what's necessary.
The main thing is to first install the latest Java JDK (the full JDK, not just the JRE) and make sure the environment variables are set correctly. Then install the Tizen SDK and run the update manager. You need to install the certificate and wearable extensions from "Extras", the emulator from "Tizen Tools", and the relevant tools from the "Wearable 2.3.1" group. Then you can start the IDE (a version of Eclipse), select a simple example, and try to compile it and run it with the emulator. You need to start the emulator and make sure it appears in the "connected devices" area before running the app.
Be aware that the emulator uses a lot of processing power and can run slowly.
There are a number of different types of app you can build for the S2, native or web with different UI components / frameworks.
A good starting point: http://developer.samsung.com/gear
If you want to test your app on your actual S2, this is a great guide: http://www.tizenexperts.com/2015/12/how-to-deploy-to-gear-s2-smartwatch/
If you generate an author certificate, you can use the same one for the GearWatchDesigner, but that app has different Java requirements (it requires the 32-bit JRE only).
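For the JDK step above, a quick sanity check from a shell might look like this (the install path is an example only; adjust it for your system):

```shell
# Check whether the full JDK (javac) is available, not just the JRE (java).
if command -v javac >/dev/null 2>&1; then
  echo "JDK found: $(javac -version 2>&1)"
else
  echo "JDK not found; install the full JDK, not just the JRE"
fi

# Point JAVA_HOME at the JDK install directory (example path shown)
# and put its bin directory on PATH so the Tizen tools can find it.
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/java-8-openjdk-amd64}"
export PATH="$JAVA_HOME/bin:$PATH"
echo "JAVA_HOME=$JAVA_HOME"
```

On Windows the same variables are set through System Properties > Environment Variables instead.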
Focus motion
Hi, there is a free SDK from a company called FocusMotion that can automatically recognize movements made with a smartwatch.
Would someone be able to make a test app for the Samsung Gear S2?
I don't think so.
I don't think so! Tizen is quite easy to develop for.
Some help
Hey guys,
I'm currently building an Android app to work with the Gear S2, based on the Integrated App model.
But I'm running into an issue: as soon as I build my APK and deploy it in debug mode on the phone, the OS immediately reports that there is no Samsung Gear app and uninstalls the APK.
Does anyone know how to get past this?
Hi everyone. I know this isn't a new subject, but I think it's a good one to discuss. It's the first question for any developer starting a new project, as every day we see new frameworks and languages coming up. Even after years of development, I still think it through again whenever I start a new project. It would be great if anyone with experience could share it here, because it's experience and time that show us whether a decision was a good one. We can write about the parameters we considered when starting a project, the challenges, and the results. Please feel free to add any other item you think is important to consider.
I start by myself:
Subject: A Network Communication App
Long/Short term consideration: Long term
Target platforms considered at start: Android, Windows
Target platforms implemented: after 2 years: Android, Windows, Linux (iOS just recently started)
Framework: Qt5 (a cross-platform framework)
Challenges: We used a cross-platform framework, but as you know, every OS has its own conventions and styles of development, and different frameworks differ in how well they handle them. In this project, one of the challenges was the way Android handles services and activities: we had to separate our UI completely from the logic controllers and implement a way for these separate processes to communicate. We also had to implement OS-specific features, such as notifications and alarms, separately for each OS. Fortunately, Qt lets you use native code for these features; for example, you can use native Java code to show a notification or to play system sounds and alarms.
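The UI/controller separation described above can be sketched in miniature (plain Python threads and a queue standing in for the separate processes and their IPC channel; this is an illustration of the pattern, not Qt code):

```python
# Sketch of a UI / logic-controller split: the two sides share no
# state and communicate only through a message channel, so either
# side can be rewritten (or moved to another process) independently.
import queue
import threading

to_controller = queue.Queue()
handled = []

def controller():
    """Owns the protocol/network logic; reacts to commands from the UI."""
    while True:
        cmd = to_controller.get()  # blocks until the UI posts a command
        if cmd == "quit":
            break
        handled.append(cmd)

t = threading.Thread(target=controller)
t.start()

# The "UI" side only posts commands; it holds no business logic.
to_controller.put("connect")
to_controller.put("send-message")
to_controller.put("quit")
t.join()
print(handled)  # ['connect', 'send-message']
```

In the real Android case the queue would be replaced by an actual IPC mechanism between the activity and the service, but the discipline of talking only through messages is the same.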
Pros: We implemented our UI just once and used it on all platforms, and did the same with the network and controller threads. This way, if we need any change in our protocols or UI, we develop it only once, so one development team covers everything.
Result: Going cross-platform with Qt was a good decision at that time for that project, especially for a small team like ours, because after you implement the base of a project like this you have to support it for a long time and keep adding features. If we had written native code, we would now need separate teams to develop new features and support each platform.
It's great to read other people's stories, so please share yours.
Thanks
Native vs. Cross-Platform
Native apps are developed for a single platform and written in a language suited to it: Apple favours Objective-C and Swift for iOS, while Google supports Java for Android. Using these languages, developers can make full and safe use of each platform's built-in features. A native app developed for Android will not run on iOS, and vice versa.
Cross-platform apps run on multiple platforms. Given the market share of Android and iOS, most cross-platform apps target those two operating systems. Many are built with web technologies such as HTML, CSS and JavaScript, since these are platform-independent, and several cross-platform development tools let developers create such apps with little trouble.
I've been working on my own assistant application framework for some time now, and I am coming up to a point where it is functional for an alpha release. There aren't really any other FOSS assistants on the market other than Mycroft, and I noticed that there is no development happening on Saiy/Utter!.
I've been developing it heavily using a Unix mentality which is meant to reduce the mental overhead when it comes to creating skills or new/replacement modules. I paid a lot of attention to the development of the framework so that individual components can be developed or replaced independently, allowing it to be more of a platform than a standalone application. This should also allow it to be easier to dive into individual parts of the application.
There is still a lot to go in terms of making it useful out of the box, but almost all of it is there in the back end, and I think I'm finishing up the concrete features and flags it needs to operate with skills and modules that other users develop.
As it stands, it does offline speech recognition using Vosk STT, and intent matching/entity extraction using the Stanford CoreNLP library. I have it set up with a mock Calendar Skill to test its matching and to finalize how I want it to interface with complex tasks. Currently it *WILL NOT COMPILE OR WORK*, since I am still working out bugs in the alpha. When I'm ready to release an actual alpha I'll branch the code and post/host nightlies somewhere (maybe also put it on F-Droid and Google Play).
I intend to interface it with Termux/Tasker, Google Assistant, Alexa, and Mycroft, as well as add a chatbot feature, but those are all secondary to the task of a stable working assistant/platform. I encourage feedback and questions about how it works and how it could be hacked on to do other things, so that I can write documentation that is as transparent and understandable as possible. Hopefully the code is a bit self-documenting as well; I strive for readability over cleverness.
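As a hypothetical sketch of the modular pipeline idea described above (every name here is invented for illustration and is not the framework's actual API):

```python
# Hypothetical sketch of a modular assistant pipeline: each stage
# (speech-to-text, intent matching, skill) is an independent,
# replaceable component behind one small interface.
class Module:
    def process(self, data):
        raise NotImplementedError

class FakeSTT(Module):
    def process(self, audio):
        # Stands in for a real STT engine such as Vosk.
        return "add dentist appointment"

class KeywordIntentMatcher(Module):
    def process(self, text):
        # Stands in for real entity extraction / intent matching.
        if "appointment" in text:
            return {"intent": "calendar.add", "text": text}
        return {"intent": "unknown", "text": text}

class CalendarSkill(Module):
    def process(self, intent):
        if intent["intent"] == "calendar.add":
            return "Added event: " + intent["text"]
        return "Sorry, I can't help with that."

# Because every stage shares one interface, any component can be
# swapped (e.g. a different STT engine) without touching the rest.
pipeline = [FakeSTT(), KeywordIntentMatcher(), CalendarSkill()]
data = b"raw-audio"
for module in pipeline:
    data = module.process(data)
print(data)  # Added event: add dentist appointment
```

This is the same Unix-style composition the post describes: small parts with narrow interfaces, wired together into a platform rather than a monolith.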
Here's the link: https://github.com/Tadashi-Hikari/Sapphire-Assistant-Framework
Let me know what you think