When you hand your phone to granny to take a photo of you, can she get the job done? Rate this thread to indicate how you rate the Elephone S8's camera software. A higher rating indicates that the software is easy to use, fast, uncluttered, and includes advanced features for when you need them.
Then, drop a comment if you have anything to add!
The native camera software is easy to use and has all the usual features you find on other smartphones. You can use third-party software, but only the native software supports the front flash for selfies.
It's the old stock MediaTek camera app. Ugly, but it works for taking pics.
Recently I started a study on the long-term context of a user. For that I am concentrating on Android Wear specifically; I am looking into heart rate sensors. My goal is to map the user's heart rate over an extended period. After collection I would then focus on error correction and data reduction, resulting in an analysis specific to the user.
In order to make this possible I would need 'continuous readings from a smart wearable', meaning that I need at least a heart rate reading every 5-10 minutes. Since I am new to smartwatches/wearables, I thought I'd ask a few questions here.
1. I am looking into the Huawei Watch, but this model does not offer an always-on heart rate sensor. Would it be possible to write an app that measures a pulse every 5 minutes and sends that data to an app on a phone? (A rough idea of what such a sampling loop could look like is sketched below.)
2. I've read that the first-gen Moto 360's readings were awful, but that the second gen delivers significantly more accurate readings (every 5 minutes, which would be extremely useful for my research). What is your opinion on measuring heart rate data with a smartwatch? Is it mature enough to use?
3. Lastly, I am looking into all sorts of wearables, for example Fitbit, Polar, etc. for more enthusiastic users. But the internet is huge, and I am looking for some opinions on which platform to choose. Currently I am considering Fitbit, Polar (with a personal app for this project), and an Android Wear device (currently the Moto 360). If you have an opinion or a suggestion, please share it in the following format: brand + model; programming interface (how easy is it to retrieve data); strong positives and negatives of the device.
Keep in mind I need to access the heart rate data, so if there is no way to retrieve the data outside of a native app, it won't be of use.
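To make question 1 more concrete, here is roughly what I imagine such a sampling loop would look like on the watch. This is an untested sketch using the standard Android SensorManager API; the HeartRateSampler class name and the sendToPhone() helper are placeholders of my own, and it assumes the BODY_SENSORS permission has already been granted.

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import android.os.Handler
import android.os.Looper

// Rough sketch: take one heart rate reading roughly every 5 minutes.
// Requires the BODY_SENSORS runtime permission on the watch.
class HeartRateSampler(private val sensorManager: SensorManager) : SensorEventListener {

    private val handler = Handler(Looper.getMainLooper())
    private val intervalMs = 5 * 60 * 1000L  // 5 minutes

    fun start() = sample()

    private fun sample() {
        val sensor = sensorManager.getDefaultSensor(Sensor.TYPE_HEART_RATE) ?: return
        sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_NORMAL)
    }

    override fun onSensorChanged(event: SensorEvent) {
        val bpm = event.values[0]
        sensorManager.unregisterListener(this)         // one reading per interval is enough
        sendToPhone(bpm)                               // placeholder: forward to the paired phone
        handler.postDelayed({ sample() }, intervalMs)  // schedule the next reading
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit

    private fun sendToPhone(bpm: Float) {
        // Placeholder: send the value to a companion app, e.g. via the Wearable MessageClient.
    }
}
```

I realise a plain Handler probably won't survive the watch going to sleep, so something like AlarmManager would likely be needed in practice, but it shows the kind of loop I have in mind.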
Thank you for reading! Looking forward to the comments! :laugh::highfive:
Shame on Google and Qualcomm for conspiring to keep advanced image processing features from other manufacturers.
If it were not for Ivanich discovering Google's hidden code that takes advantage of special processing built into certain Qualcomm processors, I would still be thinking that my OnePlus 3T camera was vastly inferior.
And it's not just a OnePlus thing. A company with the size and resources of LG should have been able to come up with its own version of "HDR+" for the G6, yet its users are scrambling for modded Google HDR+ APKs to improve their images.
LG's optics have often been superior, but without the secret processing sauce held by Google and Qualcomm they are always a step behind. Imagine if the G5 had had HDR+ processing; the support is built into the Snapdragon 820. The G5's images would have trounced the Pixel's, and perhaps LG would be in a stronger position than it is today.
I thought Android was open source. Just like Google makes its core email, messaging, calendar, calculator, and photo apps available to OEMs, it should make its camera app, with all the CPU/chipset-specific enhancements, available too.
OEMs should take Google and Qualcomm to task for this apparent conspiracy.
Imagine if OnePlus, LG, Xiaomi, and other manufacturers had known about Qualcomm's special image processing capabilities and had access to Google's code. Their stock camera apps would have taken much better pictures and would have weakened the Pixel's most significant apparent advantage, its camera.
OEMs, I implore you to get Google to release its image processing code to you so you can incorporate it into future updates and improve your stock camera apps. Also take Qualcomm to task: it is unethical for Qualcomm to share special CPU and chipset features with only certain customers. It gives those customers an unfair competitive advantage.
OEMs, you should be fighting to retain BSG and Ivanich as consultants for your future camera development.
Visual aesthetics has been shown to critically affect a variety of constructs such as perceived usability, satisfaction, and pleasure. However, visual aesthetics is also a subjective concept and therefore presents unique challenges when training a machine learning algorithm to learn such subjectivity.
Given the importance of visual aesthetics in human-computer interaction, it is vital that machines can adequately assess it. Machine learning, and especially deep learning techniques, have already shown great promise on tasks with well-defined goals such as identifying objects in images or translating from one language to another. However, quantifying image aesthetics has remained one of the most persistent problems in image processing and computer vision.
We decided to build a deep learning system that can automatically analyze and score an image for aesthetic quality with high accuracy. Please try our demo to see your photo's aesthetic score.
About the Research
We designed a novel deep convolutional neural network that can be trained to recognize an image's aesthetic quality. We also came up with multiple hacks during training to increase accuracy.
In our paper published on arXiv, we propose a new neural network architecture that models the data efficiently by taking both low-level and high-level features into account. It is a variant of DenseNet with a skip connection at the end of every dense block. Besides this, we also propose training techniques that increase the accuracy the algorithm reaches: training in the LAB color space and using similar images within a minibatch, which we call coherent learning. Using these techniques, we get an accuracy of 78.7% on the AVA2 dataset. The state-of-the-art accuracy on AVA2 is 85.6%, achieved by a deep convolutional neural network with weights pretrained on the ImageNet dataset. The best accuracy on AVA2 using handcrafted features is 68.55%. We also show that adding more data to our training set (from the AVA dataset not included in AVA2) increases accuracy to 81.48% on the AVA2 test set, showing that the model gets better with more data.
Use-cases of Visual Aesthetics
App developers of social media sites can help their users decide which photo will work best as their profile image. We have all faced anxiety while uploading photos to social media sites or changing our display picture. With our API integrated, app developers can help their users look good, always!
Smart Machine Learning algorithms can help you put your best photo on dating apps
Ok, this use-case may not appeal to the zen, non-materialistic folks among us, but to be honest, dating causes the most social anxiety. The dating landscape keeps changing as well, so if you are active on dating apps, it's important to choose your best photos to improve your chances of right swipes!
Dating app developers can easily integrate our APIs to help their users upload their best photos; the visual aesthetics model can also be fine-tuned if developers want to optimise it on their own data set.
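To give a sense of how light such an integration can be, here is an illustrative Kotlin sketch of an app uploading a candidate photo and reading back a score. The endpoint URL, header name, and response field shown here are hypothetical placeholders rather than our documented API contract, so treat it as a sketch of the flow, not as documentation.

```kotlin
import java.io.File
import java.net.HttpURLConnection
import java.net.URL

// Illustrative only: the endpoint, header, and response shape below are
// hypothetical placeholders, not a documented API contract.
fun aestheticScore(photo: File, apiKey: String): String {
    val conn = URL("https://api.example.com/v1/aesthetic-score")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("X-Api-Key", apiKey)                  // hypothetical auth header
    conn.setRequestProperty("Content-Type", "application/octet-stream")

    conn.outputStream.use { it.write(photo.readBytes()) }         // upload the candidate photo

    // Assume a JSON body such as {"score": 0.87}; parsing is left to the caller.
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```

An app could call something like this for each photo the user is considering and surface the highest-scoring one before upload.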
Recently Google launched the Pixel 2 and Pixel 2 XL, which have a portrait mode. These phones offer portrait mode even though they lack the second lens that many other phones have. For example, the iPhone X, Galaxy Note 8, OnePlus 5… all of these phones offer portrait mode because they use data from two lenses: one lens captures the image, the other captures the depth information, apart from providing some focal range magic for the blurred background. The Pixel, however, uses AI to give users HDR+ images that are comparable to pictures taken with a DSLR camera.
Similarly, mobile manufacturers can augment the capabilities of their native camera by integrating the visual aesthetic APIs to let their users know in real-time the quality of their photo even before taking a snap! This will enable your users to share their photos with confidence and you will end up creating a great differentiator for your brand at no additional hardware cost.
Virality in online content
Content is king, and it has become ever more difficult to write compelling content that resonates with your audience. The best content these days often has great images to complement it, so you've got to include something that will keep eyes moving down the page.
BuzzSumo did an analysis that covered over 1 million articles and found that the ones that had images every 75-100 words had more social shares. Using our visual aesthetics tool, you can quickly check how appealing your images are and accordingly, improve the virality of your blog post.
In this blog post, we have covered some of the use-cases of our visual aesthetics API. When machines become more competent than humans at judging such subjective content, it opens up a lot of possibilities that were not feasible before. You can read more blogs on Visual Analytics here.
ParallelDots AI APIs are a deep learning powered web service by ParallelDots Inc. that can comprehend huge amounts of unstructured text and visual content to empower your products. You can check out some of our Visual Analytics APIs and write to us at [email protected].
Hi all, I'm just trying to connect the dots on what our phones are capable of, and start a discussion on how well they're going to work with AI and Neural Processing becoming more prominent.
Hardware: Our X4s have the Snapdragon 630, which actually includes the "Snapdragon Neural Processing Engine".
As far as I can tell, the only application that actually uses this chip is a cheesy Lenovo app that scans photos of landmarks and gives you information about them. Really pointless, if you ask me. There may be other applications using it, but I'm not sure.
Android 8.1: 8.1 introduced the Neural Networks API (NNAPI) v1.0, which I understand makes it easier for other apps to incorporate local, on-device neural processing.
Android P: Google just had their I/O 2018, and a lot of their talk about Android P was focused around AI, such as using AI to improve battery life or improve the adaptive brightness, and more.
Android P also upped their NNAPI to v1.1. Should make for more powerful apps in the future(?).
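From what I've read, the most common way for an app to tap into the NNAPI is indirectly, through TensorFlow Lite's NNAPI delegate, rather than calling the C API directly. Here's a rough, untested sketch of what that looks like; the model file and the output size are just example values I made up.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File

// Untested sketch: run a TensorFlow Lite model and let the NNAPI delegate
// hand the work to whatever accelerator the device exposes (e.g. the DSP on
// the Snapdragon 630). Requires Android 8.1+; otherwise it falls back to the CPU.
fun runOnNnapi(modelFile: File, input: FloatArray): FloatArray {
    val delegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(delegate)
    val interpreter = Interpreter(modelFile, options)

    val output = Array(1) { FloatArray(1001) }   // example: a 1001-class image classifier
    interpreter.run(arrayOf(input), output)      // input/output shapes depend on the model

    interpreter.close()
    delegate.close()
    return output[0]
}
```

As far as I can tell there's no visible indicator of when the delegate actually offloads work to the DSP, which ties into the questions below.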
Camera: On an unrelated note, I saw this article a few days ago: machine learning can now turn a photo taken in near-pitch-black lighting into a usable photo. Go read the article, watch the video, and then go to their website and check out their examples. It's impressive, to say the least. I don't know how expansive or heavy-hitting their software is, but if there's a glimmer of hope of one day having that kind of magic run on the Neural Processing Engine in our phones, letting us take drastically low-light photos that are actually usable, that would be an amazing breakthrough! It would be the single greatest leap in cell phone cameras we've ever had.
Google: I'm guessing that Google is already using the Neural Processing Engine in some of its apps, such as Google Assistant maybe? How do I tell?
Questions: Is there a list of apps that can use Neural Processing, or does it happen in the background? Is there a way to tell on the phone when that chip is being utilized? What kind of apps either currently use or will use the NNAPI and our local hardware? What kind of features will this unlock for us in future versions of Android?
I'm posing these questions to the community, and I'll also be trying to find more information, but if there's someone who already knows more than I do, I'd love to hear from you. Either way, it's nice to know that while AI and neural processing are the way of the future, our phones have the hardware to keep up and remain relevant. :good:
Subbed for input from anyone who can share intelligent insight (not me on this subject matter, but I love reading about AI and machine learning).
Linus just did a video on the new Honor 10 that's touting the Neural Processing as a big feature. Unfortunately for us, their implementation seems actually useful, touting better dual-camera processing for people, and intelligent/dynamic editing of photos depending on what's in it. Still seems a bit basic (borderline gimmicky), but it seems better than Lenovo/Moto's geo-location search thing.
https://www.youtube.com/watch?v=FGIUl9i_Oyo
Upscale and Enhance Your Images for FREE
Hello
I am very excited to introduce our new AI-powered product, Upscale.media.
Our AI-powered solution helps users upscale and enhance their images for FREE. We faced the problem of low-quality photos that couldn't be used anywhere and set out to solve it with AI/ML.
Over the past year, our tech team has done extensive research into preserving detailed textures in images and has processed more than 10 million photos for business and personal use. Today I'm extremely excited to introduce our image upscaler tool.
Here are a few highlights of Upscale.media:
It's 100% Free to use and forever will be.
Get higher-quality images; increase resolution up to 4x.
Color enhancement: automatically adjust brightness and saturation.
Improved accuracy: smarter upscaling with JPEG artifact removal.
Removes JPEG artifacts.
No pixelation or blurry details.
We studied many personal & business use cases that this tool can solve:
For E-commerce - Increase conversion rates of your e-commerce website with sharp and clear images.
Art - Upscale your digital art or NFTs while maintaining all the high-quality details.
Real estate - Give your clients the best customer experience with high-quality, attractive property photos.
Individuals - Enhancement was never this easy; with the help of AI, it can be done in a few seconds.
Download:
Get it on Google Play
Get on the App Store
Thanks for your support! Hope you enjoy using our tool! Please let us know your feedback directly in this thread.
Also, you can send any suggestions to our support email: [email protected].
"It's 100% Free to use and forever will be." - it is untrue statement, in the light of your T&C's:
9. 1. You grant to us a royalty-free, perpetual, irrevocable, non-exclusive right and license to adopt, publish, reproduce, disseminate, transmit, distribute, copy, use, create derivative works from, display worldwide, or act on any material posted by you on the Platform without additional approval or consideration in any form, media, or technology now known or later developed, for the full term of any rights that may exist in such content and you waive any claim overall feedback, comments, ideas or suggestions or any other content provided through or on the Platform. You agree to perform all further acts necessary to perfect any of the above rights granted by you to us, including the execution of deeds and documents, at your request.
Effectively, you are processing users' pictures in order to obtain full rights to them.
Personally, I believe that cheating your potential users is not a very good start, but of course it is up to you how you want to present yourself to the community at XDA.
spamtrash said:
"It's 100% Free to use and forever will be." - it is untrue statement, in the light of your T&C's:
9. 1. You grant to us a royalty-free, perpetual, irrevocable, non-exclusive right and license to adopt, publish, reproduce, disseminate, transmit, distribute, copy, use, create derivative works from, display worldwide, or act on any material posted by you on the Platform without additional approval or consideration in any form, media, or technology now known or later developed, for the full term of any rights that may exist in such content and you waive any claim overall feedback, comments, ideas or suggestions or any other content provided through or on the Platform. You agree to perform all further acts necessary to perfect any of the above rights granted by you to us, including the execution of deeds and documents, at your request.
Effectively, you are processing users pictures for obtaining the full copyrights to these.
Personally, I believe that cheating on your potential users is not a very good start but of course this is up to you how you want to present yourself to the community at XDA.
Click to expand...
Click to collapse
I applaud you for your patience in reading the license & usage agreement on behalf of XDA users. This type of "ownership" over content should scare a lot of users; in fact it should scare everyone.
They effectively own your art, photography, and designs... Anything you submit is now theirs to sell and use.
That's why I stick to Topaz Labs (Gigapixel AI, Video Enhance AI, etc.).
I do not work for any of these companies or in this industry, but I can tell you they do all the processing OFFLINE and claim no ownership of your content.
The AI neural models are downloaded online and stored offline, for convenience and privacy reasons.
This kind of ownership grab is what a ton of recently released Chinese-based apps try to pull. They effectively collect and store EVERY photo you upload in exchange for a (in most cases) crappy upscale. The Chinese government now owns them, so think twice about where you upload.