Looking for datasets to test a rolling shutter correction algorithm.



Hello everyone! I have created a software package that, among other things, can compensate for rolling shutter artifacts and output corrected images. The overall goal is to increase the accuracy of 3D models built from photographs captured by Phantoms or other low-cost UAVs and/or cameras. In my tests so far, accuracy improves by up to 70% for fast flights. I am turning to you now to test my algorithm further. If you have UAV-captured datasets you can share with me, I would be very grateful. The only restriction is that the datasets must have GCPs measured with RTK, to eliminate that error source.

Thank you very much in advance!

  • 3 months later...

At high speed the UAV is likely to be pointing pretty much along the line of flight, and the rolling shutter distortion means the bottom of the image is captured about 30 ms after the top.

My simple algorithm is to resize (reduce) the image vertically by the number of pixels given by (distance travelled in 30 ms) / (pixel size at flight altitude above ground).

Example:

10 metres per second → 30 cm travelled in 30 ms

60 metres altitude → pixel size of 2.6 cm

30 / 2.6 = 11.5, so an 11-pixel reduction in height.

The timing of the photo is moved forward by 15ms (half the rolling shutter delay)

Is your software doing anything more complicated?

Of course, if you are stationary, no rolling shutter correction is needed.
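This is not anyone's actual implementation, just a sketch of the arithmetic above in Python; the function names are mine, and the values are the ones from the example (10 m/s ground speed, 30 ms readout, 2.6 cm ground pixel size):

```python
# Sketch of the simple correction described above: squash the image
# vertically by the number of pixels the UAV travels during sensor
# readout, and time-stamp the photo at the mid-frame row.

def rolling_shutter_squash_pixels(speed_m_s, readout_s, gsd_m):
    """Rows to remove: (distance travelled during readout) / (ground pixel size)."""
    distance_m = speed_m_s * readout_s   # e.g. 10 * 0.030 = 0.30 m
    return int(distance_m / gsd_m)       # e.g. 0.30 / 0.026 = 11.5 -> 11 px

def corrected_timestamp(t_exposure_start_s, readout_s):
    """Move the photo time forward by half the readout (the mid-frame row)."""
    return t_exposure_start_s + readout_s / 2.0

print(rolling_shutter_squash_pixels(10.0, 0.030, 0.026))  # 11
print(corrected_timestamp(100.0, 0.030))                  # 100.015
```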

 


I tried your approach some time ago and the results were way off. The reason is that when you apply an affine transformation to the image, you change the pixel size, whereas the correct approach is to change the pixel locations. I guess you haven't checked your results, or you would have seen that too.

I have actually created a software package around (UAV) photogrammetry. You can find more about it here.

My rolling shutter algorithm is almost fully automated. The only information you need to pass to it is the rolling shutter duration, i.e. the 30 ms in your case. I have also taken more complex motions into account, like diagonal or rotational motion, where the correction cannot be applied along the y-axis only. All the motion parameters you mention are estimated with high accuracy, with no user input at all. Keep in mind, too, that the GNSS modules onboard these UAVs have an accuracy of about 10 m. This means that, in essence, at any time you know the location and height of the platform only to within +/-10 m (and worse for the elevation), so you would be applying an otherwise very precise correction using very imprecise data.
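The difference between resizing the image and relocating pixels can be illustrated with a toy per-row remap. To be clear, this is only my sketch of the general idea (assuming constant apparent velocity, already expressed in pixels/second), not Nikos's algorithm:

```python
import numpy as np

def rowwise_shift_correction(img, vx_px, vy_px, readout_s):
    """Shift each row by the motion accrued while that row was being read,
    instead of resizing the whole image (which alters the pixel size).
    vx_px, vy_px: apparent image motion in pixels/second along x and y;
    diagonal motion simply means both components are nonzero."""
    h = img.shape[0]
    out = np.zeros_like(img)
    for r in range(h):
        t = readout_s * r / h              # time since the first row was read
        dx = int(round(vx_px * t))         # shift accumulated by this row
        dy = int(round(vy_px * t))
        src_r = r + dy                     # pull the row from its true location
        if 0 <= src_r < h:
            # np.roll wraps at the edges; good enough for a toy example
            out[r] = np.roll(img[src_r], -dx, axis=0)
    return out
```

A real implementation would interpolate sub-pixel shifts and handle image borders properly, but the point stands: each row is moved to where it belongs, and the pixel size is untouched.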


If you change the image size without modifying the camera parameters, I would agree with you. Another option is to vary the x and y pixel pitch; it depends on what the processing software can handle. My results have been adequate, but not if the drone is crabbing at all.

It does sound like you have a far more comprehensive approach. I visited the web page and it looks impressive, so yes, I will dig out a dataset or two.

Depending on the mission, I carry an Emlid Reach kinematic GPS on my Phantom 4 (with rolling shutter) and have developed a method of linking the photo timings to the PPK data.

I have one recent mission where I also stopped at each photo location.

I am fortunate to have a tightly controlled archery range as my proving ground; I can pick and choose reference points or GCPs.

 


 

 

2 hours ago, Dave Pitman said:

Have either of you tested the P4P with mechanical shutter?  Curious what you found.

 

I have compared my corrected models to models created with a Matrice 600. The error analyses show that the corrected models are better than the Matrice models, but the Matrice orthos are sharper.

6 hours ago, Spatial Analytics said:

It does sound like you have a far more comprehensive approach... So yes, I will dig out a dataset or two.

If you want to test my correction, upload the images to a Dropbox and send me the link in a PM. It would be better if you had a dataset with GCPs laid on the ground, since they can be identified and placed more accurately than landmarks. Also, if you have imagery taken at a fast flight speed (e.g. 10 m/s), the accuracy improvement will be larger. I will correct the images and send them back to you, and you can run the models with your preferred settings.

  • 2 weeks later...
3 minutes ago, Av8Chuck said:

Hi Nikos,

here's a link to a recent survey we shot: https://goo.gl/gsryQR  

I was going to ask if this would work, but it doesn't have GCPs and was not RTK. I'm interested in your experiments though.

Good luck and please keep us posted.

It's exactly as you say. It is a very fine-grained correction, unlike lens distortion, which can enter an image in significant amounts, and the way to see its effect is to eliminate all other error sources. The most significant of these is the positional error introduced by the GNSS modules onboard commercial UAVs like the DJI Phantom series. You are welcome to send me your images and I could correct them, but you would see absolutely no difference in your models. I have seen accuracy improvements ranging from a few millimetres (which could also be due to better placement of GCPs on the images) to about 10 cm. I believe you get errors in the range of 1-2.5 m, so you understand why it doesn't really make sense. Regardless, if you want to see the results out of curiosity, you are welcome to send me anything you want.

