Xperia XZ [IR infrared]: can it be used by remote control apps? - Sony Xperia XZ Questions & Answers

Hi all, I have a question about the infrared module in the Sony Xperia XZ.
Maximising Sony’s acclaimed image sensor, two additional assisting sensors have been added to become Sony’s triple image sensing technology. This allows you to capture beautiful images in motion with true to life colours in virtually any conditions. The technology is comprised of Sony’s original Exmor RS™ for mobile image sensor which provides a powerful blend of high quality image and autofocus (AF) speed combined with Predictive Hybrid AF to intelligently predict and track subjects in motion for blur-free results. Added to this is the Laser AF sensor with distance sensing technology, which captures beautiful blur-free photos in challenging low light conditions. And what’s more, you will enjoy superb true to life colours thanks to the RGBC-IR sensor with colour sensing technology which accurately adjusts the white balance based on the light source in the environment.
Based on the quoted text, I have an idea: could we use the RGBC-IR sensor as an IR transmitter for remote control apps?
Any suggestions are welcome.
Thanks


It doesn't have an IR blaster on it.
Google "phones with IR" and you'll see a list of all the phones you can use.

Related

[Q] camera

What are the ZSD and HDR functions in the camera?
How can I reduce the GPS lock time on my Micromax A110? Please help, I'm new to this superb phone.
ZSD (Zero Shutter Delay) captures the frame at the moment you press the shutter, eliminating the lag between pressing the button and the shot being taken.
HDR - High-dynamic-range imaging (HDRI or HDR) is a set of methods used in imaging and photography to capture a greater dynamic range between the lightest and darkest areas of an image than current standard digital imaging or photographic methods. HDR images can represent more accurately the range of intensity levels found in real scenes, from direct sunlight to faint starlight, and are often captured by way of several differently exposed pictures of the same subject. Hit thanks if I helped you.
Sent from my GT-I8552 using xda app-developers app
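To make the multi-exposure idea above concrete, here is a minimal sketch of merging differently exposed frames into one radiance estimate. This is an illustration under simple assumptions (8-bit values, a triangle weighting function), not any specific camera app's pipeline; the function names are invented for this example.

```python
# Hypothetical sketch of multi-exposure HDR merging: each pixel's radiance
# is estimated as a weighted average of (value / exposure_time), with
# mid-tone values weighted highest because near-black and near-white
# pixels carry less reliable information.

def weight(z, z_max=255):
    """Triangle ("hat") weight: peaks at mid-gray, zero at the extremes."""
    return min(z, z_max - z)

def merge_exposures(frames, exposure_times):
    """Merge same-size 8-bit frames (lists of pixel rows) into a radiance map."""
    height, width = len(frames[0]), len(frames[0][0])
    radiance = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            num = den = 0.0
            for frame, t in zip(frames, exposure_times):
                z = frame[y][x]
                w = weight(z)
                num += w * (z / t)
                den += w
            # Fall back to the longest exposure if every sample was extreme.
            radiance[y][x] = num / den if den else frames[-1][y][x] / exposure_times[-1]
    return radiance
```

A real HDR pipeline would also calibrate the camera response curve and tone-map the result back to a displayable range; this only shows the merging step.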

Advanced Camera app using dual pixel sensors

As many of us know, the Google Pixel 2 camera uses its dual pixel sensor for fast autofocus as well as to estimate depth of field. Face and body recognition algorithms are then used to get DSLR-like portraits, not with a dual camera but with the same single camera. Dual pixels help separate the person in focus from the background, resulting in amazing portraits.
Can the same features be implemented in a camera app that utilises the dual pixel sensors of phones like the S7, Note 8, X4, G5 Plus, and others to achieve similar results?
A new app, maybe? Or are we waiting for a port of the Pixel 2's camera app?
P.S. Please don't limit it to SD 820/830 processors like the Google Camera mod for the HDR+ app. :laugh:
Here's an extract from a PetaPixel post for a better understanding of the new and best-ever Google Pixel camera:
Each pixel on the sensor has a “left and right split,” something that gives the sensor greater capabilities for depth of field and autofocus. This means that the camera’s sensor has two images, from slightly different perspectives, of the world in front of it. Consequently, the Pixel 2 can create a depth map and allow for shallow depth of field effects in its “Portrait Mode.”
Then there’s HDR+, which uses an algorithm that allows the tiny sensor to “act like a really big one,” introducing greater dynamic range. It combines several photos together with different exposures like a standard HDR image, but HDR+ also looks to realign each frame to avoid ghosting.
YouTube video "youtu.be/PIbeiddq_CQ"
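The "left and right split" described above can be pictured as a tiny stereo pair. The sketch below is an assumption-laden illustration of the core idea (block matching to find disparity), not Google's actual algorithm, which uses far more sophisticated machine learning; the function name is invented for this example.

```python
# Illustrative dual-pixel disparity sketch: slide a small patch from the
# "left" sub-image over the "right" sub-image and keep the shift with the
# lowest sum-of-squared-differences cost. Larger disparity means a closer
# subject, which is what a depth map encodes.

def best_disparity(left, right, pos, patch=3, max_d=4):
    """Return the shift of `right` that best matches `left` around `pos`."""
    best, best_cost = 0, float("inf")
    for d in range(-max_d, max_d + 1):
        cost = 0
        for k in range(patch):
            i, j = pos + k, pos + k + d
            if 0 <= i < len(left) and 0 <= j < len(right):
                cost += (left[i] - right[j]) ** 2
            else:
                cost += 10**6  # heavy penalty for running off the edge
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

Because dual-pixel baselines are sub-millimetre, real disparities are tiny fractions of a pixel, which is why the Pixel additionally relies on learned segmentation for its portrait mask.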

How the Kirin 970 uses Handheld Super Night Mode to Take Better Photos at Night #ad

When it comes to smartphone photography, the most challenging shots are always going to be night shots. Situations with limited light most often result in grainy, unusable photos on devices with weaker cameras. The Kirin 970’s AI chip helps to solve this issue with “Handheld Super Night Mode”.
One way to achieve better night shots is to set your phone on a tripod and let your camera use a longer exposure and higher ISO. This is a bit inconvenient, as most people obviously won't be walking around with tripods. To solve this issue, Honor uses the Kirin 970 to add “Handheld Super Night Mode” to their phones. This mode lets you take better night shots without having to set up any equipment.
Handheld Super Night Mode works by combining powerful AI algorithms with the quick processing ability of the Kirin 970. Several techniques are used to enhance your night-time photos.
AI Detection of Handheld State
One of the key factors of Handheld Super Night Mode is how the phone uses the AI chipset to detect any hand-held jitter. To realize accurate and efficient detection, the AI system collected and analyzed tens of thousands of data records reflecting different types of photographers and their camera and tripod usage, designing machine learning logic to understand their habits. As a result of this massive amount of data, the Kirin 970 is able to detect when Handheld Super Night Mode is needed in 0.2 seconds. Using this data, the average user is now able to take better night shots without having to use a tripod.
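The post describes a learned handheld-vs-tripod classifier; the details are proprietary, so the following is only a toy stand-in to show the kind of signal involved. The threshold and function name are assumptions for illustration.

```python
# Hypothetical handheld-state detection sketch: a tripod-mounted phone
# shows almost no angular jitter, while a handheld phone shows noticeable
# variance in its gyroscope readings over a short window.

def is_handheld(gyro_samples, threshold=0.01):
    """Classify from angular-velocity samples (rad/s) over a short window."""
    mean = sum(gyro_samples) / len(gyro_samples)
    variance = sum((s - mean) ** 2 for s in gyro_samples) / len(gyro_samples)
    return variance > threshold
```

The actual system would feed such motion features (and many more) into the trained model rather than a fixed threshold.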
AI Photometric Measurement
The AI photometric measurement system controls the camera’s light intake. After you tap the shutter button, the AI automatically sets the exposure and number of frames based on the lighting scenario, brightness of the preview image, distribution of light sources, and jitter.
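As a rough illustration of what "setting the exposure and number of frames" could look like, here is a toy heuristic. The actual metering logic is proprietary and far more nuanced; every constant and name below is an assumption made up for this sketch.

```python
# Toy capture-planning heuristic: darker scenes get more frames and
# longer exposures, and heavy jitter caps the per-frame exposure so
# that each individual frame stays sharp enough to align later.

def plan_capture(mean_brightness, jitter, max_exposure_ms=250):
    """mean_brightness in [0, 255], jitter in [0, 1] -> (exposure_ms, frames)."""
    darkness = 1.0 - mean_brightness / 255.0
    exposure_ms = min(max_exposure_ms, 30 + 220 * darkness)
    if jitter > 0.5:                      # shaky hands: shorter exposures
        exposure_ms = min(exposure_ms, 60)
    frames = max(1, round(4 + 8 * darkness))
    return exposure_ms, frames
```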
AI Image Stabilization
After all of the frames from your night shot are captured, they are merged into a single image. It is common that, during this process, night shots turn out blurry. To avoid this, before the synthesizing process takes place, the AI selects the clearest frames and discards any of the bad ones. The clearest frames are used as the standard for the image, while the other frames that the AI has not discarded are automatically aligned. The AI-powered Kirin 970 chip detects feature points within each frame, matching these points and aligning them to produce the cleanest image possible.
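The reference-frame selection described above can be sketched with a much simpler sharpness score than the feature-point matching the chip actually performs; this stand-in uses gradient energy (blur lowers it), with names and the keep ratio invented for the example.

```python
# Sketch of reference-frame selection for multi-frame merging: score each
# frame's sharpness by gradient energy, use the sharpest as the alignment
# reference, and drop frames far below it (likely motion-blurred).

def sharpness(frame):
    """Sum of squared horizontal differences; blur lowers this score."""
    return sum((row[x + 1] - row[x]) ** 2
               for row in frame for x in range(len(row) - 1))

def pick_reference(frames, keep_ratio=0.5):
    scores = [sharpness(f) for f in frames]
    best = max(scores)
    reference = frames[scores.index(best)]
    kept = [f for f, s in zip(frames, scores) if s >= keep_ratio * best]
    return reference, kept
```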
Image synthesis
The final step in Super Night Mode is image synthesis. For this step, customized algorithms have been developed for the AI system to increase the number of short-exposure frames in bright areas to avoid overexposure, and the number of long-exposure frames in dark areas to improve detail retention. Frame differences are detected pixel by pixel. If differences are large, the AI determines that alignment failed around the edges and performs correction and repair to ensure the edge regions are still crisp and sharp after synthesis. Noise reduction is performed across multiple frames, improving the image’s signal-to-noise ratio and achieving a clearer, cleaner, and brighter night shot.
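The per-pixel difference detection and multi-frame noise reduction above can be sketched minimally as follows. This is an assumed simplification (median-based outlier rejection plus averaging), not Huawei's actual repair logic; the tolerance value is made up.

```python
# Minimal multi-frame denoising sketch: for each pixel, frames whose
# value strays too far from the median are treated as misaligned and
# excluded, and the remaining inliers are averaged to cut random noise.

def denoise(frames, tolerance=30):
    height, width = len(frames[0]), len(frames[0][0])
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            values = sorted(f[y][x] for f in frames)
            median = values[len(values) // 2]
            inliers = [v for v in values if abs(v - median) <= tolerance]
            out[y][x] = sum(inliers) / len(inliers)
    return out
```

Averaging N well-aligned frames reduces random noise roughly by a factor of sqrt(N), which is why these modes capture many frames rather than one long exposure.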
Photos Taken on the Honor 10
These photos were taken on the Honor 10, with the Kirin 970 AI chipset using Super Night Mode.
IMGUR Album
We thank Honor for sponsoring this post. Our sponsors help us pay for the many costs associated with running XDA, including server costs, full time developers, news writers, and much more. While you might see sponsored content (which will always be labeled as such) alongside Portal content, the Portal team is in no way responsible for these posts. Sponsored content, advertising and XDA Depot are managed by a separate team entirely. XDA will never compromise its journalistic integrity by accepting money to write favorably about a company, or alter our opinions or views in any way. Our opinion cannot be bought.


Can an additional camera sensor attached via USB-C be read by apps via current APIs?

OK... so, totally crazy idea. Hypothetically speaking, what if there were a way to attach a much larger camera sensor (e.g. APS-C size with E-mount) to a fast SD865 phone (or a future one) via USB-C? And would it then be possible to have camera apps read data from it via current APIs? Obviously there are a lot of steps I'm missing here, but the biggest weakness in phone cameras is the sensor, and there is simply no physical way to put an APS-C-sized one in. The lenses would be humongous.
That said, at times, especially for those who are more serious about photography, being able to attach your phone to a big sensor would give you gear superior to anything that exists right now. Combining existing computational techniques and a fast processor WITH a large sensor does not exist; it's one or the other, and no one has tried to do both yet (there are some old attempts like the Samsung Galaxy NX, and currently Zeiss, but it doesn't seem like they are going to take advantage of computational tech).
Benefits:
- superior HDR with AI (most DSLRs have multi-shot HDR, but it's not as advanced as Google's)
- potentially AI HDR in video footage (no DSLR has this; it's done in post)
- Enhanced artificial bokeh on top of already good bokeh to simulate medium format look
- immediate access to mobile lightroom / sharing direct to sources
- all media creation/library in one source
- utilize superior EIS to get stable footage (again, no DSLR has good EIS tech; the focus is on IBIS and OIS, which benefit only photos). Ever tried an S20 or iPhone 11 Pro at night? It's a noise party. In this case it would be clean 4K footage with gimbal-like EIS.
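The EIS mentioned in the list above boils down to trajectory smoothing. Here is a minimal sketch of that core idea under simplifying assumptions (1-D translation only, moving-average smoothing); real EIS also handles rotation and rolling shutter, and these function names are invented for the example.

```python
# EIS-as-trajectory-smoothing sketch: estimate the camera's per-frame
# position, smooth that path with a moving average, and shift each frame
# by the difference so the output follows the smooth path.

def smooth_trajectory(positions, window=3):
    """Moving average of per-frame camera positions."""
    half = window // 2
    smoothed = []
    for i in range(len(positions)):
        lo, hi = max(0, i - half), min(len(positions), i + half + 1)
        smoothed.append(sum(positions[lo:hi]) / (hi - lo))
    return smoothed

def stabilizing_shifts(positions, window=3):
    """Per-frame correction that moves each frame onto the smoothed path."""
    return [s - p for p, s in zip(positions, smooth_trajectory(positions, window))]
```

The shifts imply cropping into the frame, which is why EIS footage has a slightly narrower field of view than the raw sensor output.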
It's true a lot of the above can be done in post when shooting with large-sensor cameras.
Thoughts?
