Is it possible to make an average smartphone's camera sensor capture visual data at especially high framerates if very little resolution is needed? - General Questions and Answers

I know it's not exactly Android-related, but I had an interesting idea that requires video capture at framerates of probably several thousand frames per second. The resolution could probably be kept to just a few hundred pixels, so my question is whether such an implementation is possible (at least on some smartphones) hardware-wise.
I'm not really sure how the CMOS sensor and its data processing work. I know it scans the pixels in a "rolling shutter" manner, which could impede feasibility, but my Xiaomi (in auto mode, where it picks this value on its own) can expose a single daylight image in a few tens of thousandths of a second, so the minimum scan time of the entire CMOS sensor seems short enough. The question is whether there's a minimum per-pixel "reset time" between frames that would prevent capturing this sort of "super fast video", even at a very low resolution. Also, I'm not sure it matters, but the video doesn't really need to be saved; it just needs to provide a live visual cue for my hypothetical app.
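For anyone wanting to probe their own hardware: the Camera2 API advertises exactly which resolution/frame-rate pairs the pipeline will accept for constrained high-speed capture, and that list (not the still-photo shutter speed) is the real ceiling. On current phones it typically tops out around 240 to 960 fps even at low resolutions, because sensor readout and the ISP, not pixel count, are the bottleneck. A minimal Kotlin sketch (function name and logging are mine):

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Prints every (resolution, fps-range) pair the device will accept for
// constrained high-speed recording.
fun logHighSpeedCapabilities(context: Context) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    for (id in manager.cameraIdList) {
        val chars = manager.getCameraCharacteristics(id)
        val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES) ?: continue
        // Sensors that don't advertise this capability can't do high-speed sessions at all.
        if (CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_CONSTRAINED_HIGH_SPEED_VIDEO !in caps) continue
        val map = chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP) ?: continue
        for (size in map.highSpeedVideoSizes) {
            // Each size has its own legal fps ranges; you can't trade extra
            // resolution reduction for frame rates beyond what is listed here.
            val ranges = map.getHighSpeedVideoFpsRangesFor(size)
            println("Camera $id: $size -> ${ranges.joinToString()}")
        }
    }
}
```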
Thanks in advance!

Related

Why does the video record so close?

Am I the only one who notices that you can be taking pictures and, as soon as you switch to video mode, the video gets in real close and almost fills the whole screen? This is crazy and annoying. Is there any way to turn this off? Maybe a setting that fixes this issue?
No, this is a hardware issue.
The answer is in this post:
http://forums.androidcentral.com/samsung-galaxy-s7/686490-does-anyone-use-video-stabilization.html
So what we need, at least, is a frame on screen showing us the exact video frame that will be recorded. Only that way can we aim correctly before pushing the record button.
Bright.Light said:
So what we need, at least, is a frame on screen showing us the exact video frame that will be recorded. Only that way can we aim correctly before pushing the record button.
Not sure if you carefully read the information from the link I posted. You can already achieve that now.
ssj100 said:
Not sure if you carefully read the information from the link I posted. You can already achieve that now.
I did read the 'answer' carefully, but setting the camera to 16:9 is unacceptable and definitely not what I meant.
I just mean that I want a (colored?) 16:9 frame on the display as a guideline, showing exactly what I will record when I start recording.
Bright.Light said:
I did read the 'answer' carefully, but setting the camera to 16:9 is unacceptable and definitely not what I meant.
Why is it unacceptable? You can still choose to take photos in 4:3. Whenever you want to record video, you have to switch to 16:9 if you don't want the zooming effect (if you've set it to 4:3, the phone automatically records in 16:9, hence the zoom). The phone can only record video in a 16:9 aspect ratio, so that's by design. The same goes for other flagship phones like the Nexus 6P and iPhone 6.
And by the way, taking photos in 16:9 gives exactly the same quality as 4:3; the only difference is that you get relatively less field of view with 16:9. Personally, I just set the camera to 16:9 by default. If I really need more field of view (rare for me), it's not hard to tap twice to select the 4:3 setting. And because 16:9 is my default, I don't have to change it manually whenever I want to record video accurately (without the zooming). It suits me nicely, as I often record video. Furthermore, 16:9 photos fill the whole screen on my phone, laptop, PC and TV, without the need to waste precious time editing.
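To put rough numbers on that zoom (a hypothetical 12 MP 4:3 sensor; the figures are mine, not ssj100's): recording 16:9 video from a native 4:3 sensor simply drops rows at the top and bottom, which is exactly the crop you see in the preview.

```kotlin
// Hypothetical 12 MP 4:3 sensor, just to put numbers on the crop.
fun main() {
    val sensorW = 4000
    val sensorH = 3000                      // native 4:3 stills
    val videoRows = sensorW * 9 / 16        // rows actually used for 16:9 video = 2250
    val croppedRows = sensorH - videoRows   // 750 rows dropped, top and bottom
    println("16:9 video uses $videoRows of $sensorH rows; " +
            "${100 * croppedRows / sensorH}% of the vertical field of view is cropped")
}
```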
ssj100 said:
Why is it unacceptable? You can still choose to take photos in 4:3. Whenever you want to record video, you have to switch to 16:9 if you don't want the zooming effect (if you've set it to 4:3, the phone automatically records in 16:9, hence the zoom). The phone can only record video in a 16:9 aspect ratio, so that's by design. The same goes for other flagship phones like the Nexus 6P and iPhone 6.
And by the way, taking photos in 16:9 gives exactly the same quality as 4:3; the only difference is that you get relatively less field of view with 16:9. Personally, I just set the camera to 16:9 by default. If I really need more field of view (rare for me), it's not hard to tap twice to select the 4:3 setting. And because 16:9 is my default, I don't have to change it manually whenever I want to record video accurately (without the zooming). It suits me nicely, as I often record video. Furthermore, 16:9 photos fill the whole screen on my phone, laptop, PC and TV, without the need to waste precious time editing.
I prefer to see more above and below in my photos too. If I don't need it, I can crop it out, but it's impossible to stitch it back in later.
So I should stick with 4:3, but then I miss the correct framing for video. If you have kids, you know that switching quickly is very important. What could be easier than showing two lines at the 16:9 position? When video recording starts, I wouldn't mind if that frame then blew up to full size.
So, for me the current behaviour is weird and annoying, and it seems to make things a bit slower. But let's leave it at that; each and every customer has their own thoughts about this, and that's ok.
All good. The camera is just for fun for me. Maximum convenience is the theme here. And that's a "set and forget" 16:9 ratio for everything, and I know exactly what's included in the frame when I'm taking it etc. For my purposes, editing photos is a waste of time. I'd rather spend that time actually interacting with the "kids" etc. But totally agree, whatever makes you happy in the end.

How the Kirin 970 uses Handheld Super Night Mode to Take Better Photos at Night #ad

How the Kirin 970 uses Handheld Super Night Mode to Take Better Photos at Night
When it comes to smartphone photography, the most challenging shots are always going to be night shots. Situations with limited light most often result in grainy, unusable photos on devices with weaker cameras. The Kirin 970's AI chip helps to solve this issue with "Handheld Super Night Mode".
One way to achieve better night shots is to set your phone on a tripod and let the camera use a longer exposure and higher ISO. This is a bit inconvenient, as most people obviously won't be walking around with tripods. To solve this, Honor uses the Kirin 970 to add "Handheld Super Night Mode" to its phones. This mode lets you take better night shots without having to set up any equipment.
Handheld Super Night Mode works by combining powerful AI algorithms with the quick processing of the Kirin 970. Several techniques are used to enhance your night-time photos.
AI Detection of Handheld State
One of the key factors of Handheld Super Night Mode is how the phone uses the AI chipset to detect hand-held jitter. To achieve accurate and efficient detection, the AI system collected and analyzed tens of thousands of data records covering different types of photographers and their camera and tripod habits, and a machine-learning model was designed around those habits. As a result of this mass of data, the Kirin 970 is able to detect when Handheld Super Night Mode is needed within 0.2 seconds. With this, the average user can now take better night shots without having to use a tripod.
AI Photometric Measurement
The AI photometric measurement system controls the camera's light intake. After you tap the shutter button, the AI automatically sets the exposure and number of frames based on the lighting scenario, the brightness of the preview image, the distribution of light sources, and jitter.
AI Image Stabilization
After all the frames of your night shot are captured, they are merged into a single image, and it is during this merging that night shots often turn out blurry. To avoid this, before the synthesis takes place, the AI selects the clearest frames and discards the bad ones. The clearest frame is used as the reference for the image, while the remaining frames are automatically aligned to it. The AI-powered Kirin 970 chip detects feature points within each frame, matching and aligning these points to produce the cleanest image possible.
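Huawei hasn't published the actual selection logic, but a common classical stand-in for scoring the "clearest" frame is the variance of the Laplacian: sharp frames have strong edge responses, blurry ones don't. A minimal Kotlin sketch over grayscale pixel arrays (all names are illustrative):

```kotlin
// Classical stand-in for "clearest frame" scoring: variance of the Laplacian.
// `gray` is a w*h array of 0..255 luma values; higher score = sharper frame.
fun laplacianVariance(gray: IntArray, w: Int, h: Int): Double {
    var sum = 0.0
    var sumSq = 0.0
    var n = 0
    for (y in 1 until h - 1) {
        for (x in 1 until w - 1) {
            val i = y * w + x
            // 4-neighbour Laplacian: strong response at edges, weak when blurred.
            val lap = 4 * gray[i] - gray[i - 1] - gray[i + 1] - gray[i - w] - gray[i + w]
            sum += lap
            sumSq += lap.toDouble() * lap
            n++
        }
    }
    val mean = sum / n
    return sumSq / n - mean * mean
}

// Keep the `keep` sharpest frames as the reference/alignment set.
fun selectSharpest(frames: List<IntArray>, w: Int, h: Int, keep: Int): List<IntArray> =
    frames.sortedByDescending { laplacianVariance(it, w, h) }.take(keep)
```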
Image synthesis
The final step in Super Night Mode is image synthesis. For this step, customized algorithms let the AI system increase the number of short-exposure frames in bright areas to avoid overexposure, and the number of long-exposure frames in dark areas to improve detail retention. Frame differences are detected pixel by pixel; if the differences are large, the AI determines that alignment failed around the edges and performs correction and repair so that the edge regions remain crisp and sharp after synthesis. Noise reduction is performed across multiple frames, improving the image's signal-to-noise ratio and producing a clearer, cleaner, brighter night shot.
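The multi-frame noise reduction step rests on a simple idea (shown here generically, not as Huawei's implementation): averaging N aligned frames keeps the signal intact while uncorrelated noise shrinks by roughly the square root of N.

```kotlin
// Generic mean-stacking of N aligned frames (same pixel layout as above): the
// signal is preserved while uncorrelated noise falls by roughly sqrt(N).
fun stackMean(frames: List<IntArray>): IntArray {
    require(frames.isNotEmpty()) { "need at least one frame" }
    val out = IntArray(frames[0].size)
    for (i in out.indices) {
        var acc = 0L
        for (f in frames) acc += f[i]    // sum the same pixel across frames
        out[i] = (acc / frames.size).toInt()
    }
    return out
}
```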
Photos Taken on the Honor 10
These photos were taken on the Honor 10, with the Kirin 970 AI chipset using Super Night Mode.
IMGUR Album
We thank Honor for sponsoring this post. Our sponsors help us pay for the many costs associated with running XDA, including server costs, full time developers, news writers, and much more. While you might see sponsored content (which will always be labeled as such) alongside Portal content, the Portal team is in no way responsible for these posts. Sponsored content, advertising and XDA Depot are managed by a separate team entirely. XDA will never compromise its journalistic integrity by accepting money to write favorably about a company, or alter our opinions or views in any way. Our opinion cannot be bought.

Can an additional camera sensor attached via USB-C be read by apps via current APIs?

OK... so, totally crazy idea. Hypothetically speaking, what if there were a way to attach a much larger camera sensor (e.g. APS-C size with E-mount) to a fast SD865 phone (or a future one) via USB-C? And would it then be possible to have camera apps read data from it via current APIs? Obviously there are a lot of steps I'm missing here, but the biggest weakness in phone cameras is the sensor, and there is simply no physical way to put an APS-C-sized one in; the lenses would be humongous.
That said, at least for those who are more serious about photography, being able to attach your phone to a big sensor would give you gear superior to anything that exists right now. Combining existing computational techniques and a fast processor WITH a large sensor does not exist; it's one or the other, and no one has tried to do both yet (there are some old attempts like the Samsung Galaxy NX, and currently Zeiss, but it doesn't seem like they are going to take advantage of computational tech).
Benefits:
- superior HDR with AI (most DSLRs have multi-shot HDR stacking, but nothing as advanced as Google's)
- potentially AI HDR in video footage (no DSLR has this; it's done in post)
- enhanced artificial bokeh on top of the already good optical bokeh, to simulate a medium-format look
- immediate access to mobile Lightroom / sharing directly from the device
- all media creation and your whole library in one place
- superior EIS for stable footage (again, no DSLR has good EIS tech; the focus there is on IBIS and OIS, which mainly benefit photos). Ever tried an S20 or iPhone 11 Pro at night? It's a noise party. Here you would get clean 4K footage with gimbal-like EIS
It's true that a lot of the above can be done in post when shooting with large-sensor cameras.
Thoughts?
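On the API side, there is at least a hook for this: Camera2 defines an "external" lens facing, and a UVC webcam can enumerate as a regular camera if the OEM ships the external-camera HAL (an APS-C body with an E-mount wouldn't speak UVC natively, so some kind of bridge is assumed here). A minimal Kotlin check:

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Returns the ids of any cameras the OS reports as external (e.g. UVC over
// USB-C). If the OEM didn't enable the external-camera HAL, the list is empty.
fun findExternalCameras(context: Context): List<String> {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    return manager.cameraIdList.filter { id ->
        manager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_EXTERNAL
    }
}
```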

Question Astrophotography time lapse question

Just wondering if there's any way I can get an astrophotography time lapse longer than 1 second? I would love to have 60 seconds, but I know it would probably take 4 hours or something.
Just wondering if this is possible, or if there are any third-party apps that might be able to do this (take a longer exposure than the 4 minutes that astrophotography mode takes)?
I don't think it is possible. The astro time-lapse is made up of the same individual images that then get stacked into the astro image itself, so you would end up with shedloads of images as well.
Have you tried just using the normal time-lapse option in the video settings?
Exactly, take a normal night video and then slow it down with editing software.
schmeggy929 said:
Exactly, take a normal night video and then slow it down with editing software.
The dude is talking about astrophotography and long exposure shots for a reason. What good will a "night video" do? And a timelapse is not slowing down the video. lmao
That is my mistake, I totally read his post wrong.
Thing is, the astro time-lapse is made up of the individual shots taken while astrophotography mode is active, so those individual images have been taken at f/1.85. If you just did a normal time lapse using the main lens, the video would still be at f/1.85, and with a bit of post-processing it should work.
The other way around it is to just take a night mode photo every 30 seconds for 2 hours using a timer and a Bluetooth remote.
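If you go that route, the timing side is trivial; a hypothetical Kotlin coroutine sketch, where takeNightShot() stands in for whatever actually triggers your shutter (e.g. the Bluetooth remote):

```kotlin
import kotlinx.coroutines.delay

// Hypothetical intervalometer loop: fire one night-mode capture every 30 s
// for 2 h (240 shots in total).
suspend fun runIntervalometer(takeNightShot: suspend () -> Unit) {
    repeat(240) {
        takeNightShot()
        delay(30_000)   // 30 seconds between shots
    }
}
```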
MrBelter said:
Thing is, the astro time-lapse is made up of the individual shots taken while astrophotography mode is active, so those individual images have been taken at f/1.85. If you just did a normal time lapse using the main lens, the video would still be at f/1.85, and with a bit of post-processing it should work.
The other way around it is to just take a night mode photo every 30 seconds for 2 hours using a timer and a Bluetooth remote.
You're talking about aperture, which is FIXED and completely irrelevant in this case. It's not like you have a variable aperture on the lens that you can adjust.
What matters in his case is the shutter speed, i.e. the exposure time.
And no, a normal timelapse WON'T work, because the shutter speed will be fast (a short exposure) and the phone will try to compensate by pushing the ISO high. You'll end up with very dark scenes and TONS of noise.
And what makes Astro mode so important is the FRAME STACKING. Frame stacking reduces the overall noise and increases the "quality" of the image.
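For the intuition behind that last point (standard statistics, nothing Pixel-specific): if each frame carries independent, zero-mean noise of standard deviation sigma, mean-stacking N aligned frames gives

```latex
\sigma_{\mathrm{stack}} = \frac{\sigma}{\sqrt{N}}
\qquad\Longrightarrow\qquad
\mathrm{SNR}_N = \sqrt{N}\,\mathrm{SNR}_1
```

so a 16-frame astro stack buys roughly a 4x noise advantage that no single short exposure can match.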
Deadmau-five said:
Just wondering if there's any way I can get an astrophotography time lapse longer than 1 second? I would love to have 60 seconds, but I know it would probably take 4 hours or something.
Just wondering if this is possible, or if there are any third-party apps that might be able to do this (take a longer exposure than the 4 minutes that astrophotography mode takes)?
Not with the stock camera.
You can try MotionCam Pro for that. It has a timelapse option where you can set the exposure time to as long as 15 seconds.
MotionCam is mainly for RAW video recording, but you can do photos and time-lapses. The output is absolutely GREAT. You're basically working with RAW VIDEO, and the quality is not comparable to ANY other app.
I had one astro timelapse from it, but I can't seem to find it now. The weather outside is too sh**y right now to do even a short one. I could do a daylight one so you can see what quality I'm talking about.
Uploaded a screenshot of the viewfinder. As you can see in the SS, you can adjust the ISO and shutter speed (among many other things) and do a timelapse.
This basically takes RAW shots that you can later post-process with various editing software like DaVinci Resolve, Adobe Premiere, Vegas, etc.
What you get is video quality on the level of a DSLR, or BETTER, because there is no post-processing involved on the phone; it's basically a sequence of RAW DNG images that you can export (render) into a video at the QUALITY of your choice, with YOUR post-processing applied.
Here is one sample I shot and rendered to 4K60 (no color grading, just the stock output).
Keep in mind that this is YOUTUBE; the quality of the original video is FAR better.
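For anyone curious how apps like this get long exposures at all (a generic Camera2 sketch, not MotionCam's actual code): you switch auto-exposure off and drive the sensor exposure time directly, clamped to whatever range the device reports as supported.

```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CaptureRequest

// Manual long exposure: disable AE and set exposure time and ISO directly.
// The achievable maximum is device-specific, so query the range first.
fun configureLongExposure(
    builder: CaptureRequest.Builder,
    chars: CameraCharacteristics,
    wantedSeconds: Double,
    iso: Int
) {
    val range = chars.get(CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE)!! // nanoseconds
    val exposureNs = (wantedSeconds * 1e9).toLong().coerceIn(range.lower, range.upper)
    builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF)
    builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureNs)
    builder.set(CaptureRequest.SENSOR_SENSITIVITY, iso)
}
```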
JohnTheFarm3r said:
You're talking about aperture, which is FIXED and completely irrelevant in this case. It's not like you have a variable aperture on the lens that you can adjust.
What matters in his case is the shutter speed, i.e. the exposure time.
And no, a normal timelapse WON'T work, because the shutter speed will be fast (a short exposure) and the phone will try to compensate by pushing the ISO high. You'll end up with very dark scenes and TONS of noise.
And what makes Astro mode so important is the FRAME STACKING. Frame stacking reduces the overall noise and increases the "quality" of the image.
I know the aperture is fixed; that's why I said it should work, given that the astrophotography mode time lapse is made up of the 16 images taken while the mode is active, not of the result once the images have been stacked into a single image. Given the way you talk, you of all people should appreciate just how fast f/1.85 is; not a single one of my Canon L lenses is that fast or even comes anywhere close.
The OP has nothing to lose by giving it a go before installing extra software and shooting raw (it is "raw" by the way, if we are getting picky; it isn't an acronym for anything).
MrBelter said:
I know the aperture is fixed; that's why I said it should work, given that the astrophotography mode time lapse is made up of the 16 images taken while the mode is active, not of the result once the images have been stacked into a single image. Given the way you talk, you of all people should appreciate just how fast f/1.85 is; not a single one of my Canon L lenses is that fast or even comes anywhere close.
The OP has nothing to lose by giving it a go before installing extra software and shooting raw (it is "raw" by the way, if we are getting picky; it isn't an acronym for anything).
Where did I say ANYTHING against the fixed aperture of f/1.85? I just said that since it's fixed, it's not relevant to the "settings" he uses, because he CAN'T change the aperture value anyway.
It's not about "losing" anything; it's about understanding the technical part: your recommendation won't work because it uses neither long-exposure shutter speeds nor frame stacking.
Without frame stacking the noise will be horrible, and there is little you can do in post-processing without completely killing the "details" in the photo by suppressing both luma and chroma noise.
Another thing is that a regular timelapse doesn't push long exposures... it's just not meant to be used for "astro", that's all.
Erm, ok fella, but how do you think this was all done before Google and its wonderful computational photography came along?
My point about the aperture is that it is very fast, so it being fixed is not irrelevant at all, given that it is the only reason this could even work. The OP may have tried it at 0.5x or 5x, where the apertures are much slower. The OP has absolutely nothing to lose by giving it a go: it might be crap, you might end up with only the brightest objects in the sky, you might end up with noisy mush, and yet it might be good fun. Who knows?
Sadly there is always one person who comes along and stomps on the parade because they know best, though, isn't there?
MrBelter said:
Erm, ok fella, but how do you think this was all done before Google and its wonderful computational photography came along?
My point about the aperture is that it is very fast, so it being fixed is not irrelevant at all, given that it is the only reason this could even work. The OP may have tried it at 0.5x or 5x, where the apertures are much slower. The OP has absolutely nothing to lose by giving it a go: it might be crap, you might end up with only the brightest objects in the sky, you might end up with noisy mush, and yet it might be good fun. Who knows?
Sadly there is always one person who comes along and stomps on the parade because they know best, though, isn't there?
It was done in ways whose results were not even close to what we have today. Why use "outdated" methods when we have these VERY capable devices?
The app I suggested is great and has exactly what he is looking for.
Your logic of "how did we do this before X" amounts to "let's just ride horses instead of cars, because that's how we did it before". lmao
