Jupiter has just gone past opposition, which made it perfect for imaging. Also, from my backyard, it was in a prime position between the trees, above my neighbours’ house. With so much deep sky imaging, it is easy to overlook the bodies in our own solar system, and Jupiter is an ideal candidate for even beginners to have a go at.
One of the challenges of imaging Jupiter, particularly at the minute, is that it is relatively low on the southern horizon in Britain (just above 30 degrees). Combined with it rising at dusk, this means poor seeing caused by thermal currents in the atmosphere. Imaging through these disturbances means that the target will appear to go in and out of focus rapidly, and the effect is further exacerbated by the magnification of your scope. One way around this is ‘lucky imaging’, a technique whereby you shoot high-FPS (frames per second) video that you then stack using software, rejecting all but the best frames and reducing noise in the process.
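The lucky-imaging idea can be sketched in a few lines. This is a toy illustration only, not what real stacking software does internally (there is no per-frame alignment here, and the Laplacian-variance sharpness metric and 10% keep fraction are my own choices for the sketch):

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian: higher = sharper.
    A common quality metric for ranking video frames."""
    lap = (-4 * frame
           + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
    return lap.var()

def lucky_stack(frames, keep_fraction=0.10):
    """Rank frames by sharpness, keep the best fraction, average them."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[::-1][:n_keep]   # indices of sharpest frames
    return np.mean([frames[i] for i in best], axis=0)

# demo with synthetic frames: a bright disc plus noisy copies
rng = np.random.default_rng(0)
y, x = np.mgrid[:64, :64]
disc = ((x - 32) ** 2 + (y - 32) ** 2 < 300).astype(float)
frames = [disc + rng.normal(0, 0.05, disc.shape) for _ in range(20)]
stacked = lucky_stack(frames, keep_fraction=0.10)
print(stacked.shape)  # (64, 64)
```

Averaging the kept frames is what drives the noise down; rejecting the blurriest ones is what preserves the detail the seeing occasionally lets through.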
Due to Jupiter’s rotation, I limit my videos to 90 seconds at a time, so I am typically gathering 3,000 – 5,000 frames per video. With several such videos taken in rapid succession, I typically process around 20,000 frames to stack into my overall image.
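To see why 90 seconds is a sensible ceiling, you can estimate how far a feature at the centre of the disc drifts in that time. This is a back-of-envelope calculation using round figures (equatorial diameter ≈ 142,984 km, rotation period ≈ 9.925 h, apparent diameter near opposition ≈ 47 arcsec):

```python
import math

DIAMETER_KM = 142_984          # Jupiter's equatorial diameter
PERIOD_S = 9.925 * 3600        # equatorial rotation period, ~9.925 h
APPARENT_DIAM_ARCSEC = 47      # roughly, near opposition

# equatorial surface speed
speed_km_s = math.pi * DIAMETER_KM / PERIOD_S

def smear_arcsec(exposure_s):
    """Apparent drift of a feature at disc centre during the capture."""
    drift_km = speed_km_s * exposure_s
    return drift_km / DIAMETER_KM * APPARENT_DIAM_ARCSEC

print(f"{smear_arcsec(90):.2f} arcsec")   # ~0.37"
```

At roughly 0.37 arcsec over 90 seconds, the drift is already comparable to the resolution you might achieve under good seeing, which is why much longer videos start to blur fine detail (de-rotation in WinJupos is the usual workaround for longer runs).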
Before moving on to Jupiter, I focus using a Bahtinov mask on a nearby star (Arcturus works well for me). Once I am focused, I know that Jupiter will also be well focused, even if it is slipping in and out of apparent focus due to the seeing. Yes, people will argue that stars are hugely further away than Jupiter and so, technically, they are not in the same focal plane; however, the actual difference in focus position on my scope would be measured in nanometres and so is irrelevant.
I image using SharpCap as it enables me to capture .ser videos rapidly as well as name them how I wish. This step is important: I want the filenames to contain the universal timestamp of when they were captured, as I need this information later on in WinJupos.
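SharpCap’s filename templates handle this for you, but as an illustration of the timestamp convention WinJupos reads from filenames (to my knowledge, UT date and time with tenths of a minute; the suffix here is arbitrary):

```python
from datetime import datetime, timezone

def winjupos_name(t, suffix="jupiter"):
    """Format a UT timestamp as YYYY-MM-DD-HHMM_T, where T is
    tenths of a minute -- the convention WinJupos parses from filenames."""
    tenths = t.second // 6            # 0-59 s -> 0-9 tenths of a minute
    return f"{t:%Y-%m-%d-%H%M}_{tenths}-{suffix}.ser"

t = datetime(2024, 9, 14, 21, 34, 30, tzinfo=timezone.utc)
print(winjupos_name(t))   # 2024-09-14-2134_5-jupiter.ser
```

The key point is that the timestamp must be UT, not local time, or the de-rotation maths in WinJupos will be applied at the wrong moment.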
I tend to drive the camera gain quite high, which allows shorter exposures and therefore the highest possible FPS. This adds noise, so it is a balancing act, although the noise can be mitigated to some degree in stacking as well as in post-processing.
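The relationship is simple: the camera cannot deliver frames faster than one per exposure, so every halving of exposure time (bought with roughly a stop of extra gain) doubles the frame-rate ceiling. A minimal sketch of that ceiling (in practice USB bandwidth and the region of interest also cap the achievable FPS):

```python
def max_fps(exposure_ms):
    """Upper bound on frame rate: one frame per exposure."""
    return 1000.0 / exposure_ms

# shorter exposures raise the frame-rate ceiling
for exp in (20, 10, 5):
    print(f"{exp} ms -> up to {max_fps(exp):.0f} FPS")
```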
I ended up with 6 x 90s videos, which I then loaded into PIPP for alignment. PIPP allows me to crop the video down to the size needed as well as do some basic pre-processing, such as weighting each frame by quality. I then loaded the saved videos into AutoStakkert!3 for stacking. I know this might look like duplication, as I also use RegiStax later in the process; however, I prefer AutoStakkert!3 for stacking as it is 64-bit and can use all my machine’s cores.
In AutoStakkert!3, I stacked each file, rejecting the lowest-quality 90% of frames. The remaining 10% were then automatically stacked and saved as a TIFF file. From here, I loaded the result into RegiStax for processing. First of all I aligned the RGB channels, as my planetary camera has an RGGB Bayer matrix, so the videos tend to look green before processing. Once aligned, I moved on to wavelet sharpening. This allows details to be drawn out on different wavelet (resolution) layers, along with noise reduction on the same layers. This is real ‘trial and error’ for me; however, this tutorial does a great job of introducing the concept of wavelet sharpening.
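RegiStax’s wavelet layers work on a principle similar to this multiscale sketch: split the image into detail layers at increasing blur scales, boost each layer, and recombine. This is a simplified decomposition using Gaussian blurs; RegiStax’s actual kernels and per-layer noise reduction differ, and the sigmas and gains here are arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wavelet_sharpen(img, sigmas=(1, 2, 4), gains=(1.8, 1.3, 1.0)):
    """Decompose into detail layers (differences of progressively
    blurred copies), scale each layer, then add back the residual."""
    prev = img.astype(float)
    out = np.zeros_like(prev)
    for sigma, gain in zip(sigmas, gains):
        blurred = gaussian_filter(prev, sigma)
        out += gain * (prev - blurred)   # detail at this scale
        prev = blurred
    return out + prev                    # smooth residual

# sanity check: with all gains = 1 the layers recombine to the original
rng = np.random.default_rng(1)
img = rng.random((32, 32))
identity = wavelet_sharpen(img, gains=(1.0, 1.0, 1.0))
print(np.allclose(identity, img))  # True
```

Gains above 1 on the finest layers are what pull out belt detail; they also amplify fine-scale noise, which is why the stacking stage beforehand matters so much.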
Finally, I save the image and move to Photoshop for saturation adjustment and final noise reduction. And the image is complete.
Altair Astro Hypercam 178C
Skywatcher EQ6-r Pro
Skywatcher 130PDS using a 2x Barlow