An ongoing compilation of my notes and the most relevant sites that were very helpful to me and perhaps will also be to you… as an owner of the Skywatcher Star Adventurer.
The Star Adventurer has a maximum payload of 5kg or 11 lbs, which is quite high for the size of the mount. Note that the counterweight is not considered part of the payload. As a rule of thumb in astrophotography, though, it is considered good practice to load the mount to about 60-70% of its maximum payload capacity, which means 6.6 to a maximum of 7.7 lbs, to preserve the motor life.
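As a quick sketch of the 60-70% rule above (assuming the 11 lb rating from the notes; the helper name is my own):

```python
# Rough payload guideline for the Star Adventurer.
# The 5 kg / 11 lb rating and the 60-70% rule come from the notes above.
MAX_PAYLOAD_LBS = 11.0

def recommended_load_lbs(max_payload=MAX_PAYLOAD_LBS, low=0.60, high=0.70):
    """Return the (low, high) recommended instrument load range in pounds."""
    return (round(max_payload * low, 1), round(max_payload * high, 1))

print(recommended_load_lbs())  # (6.6, 7.7)
```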
Payload – Refers to the instrument payload
Load Capacity – Refers to combined instrument load and counterweights
All Up Weight – Refers to combined instrument load and counterweights
The payload weight is the weight carried by the mount’s saddle where the telescope is attached. The counterweights are not considered in payload calculations.
If you load the mount to the max, it means more wear on the gears and motor. Load capacity is a guideline; the actual load a mount handles well depends on your scope’s shape as well as many other factors. Longer scopes such as refractors and Newtonians will put greater strain on a mount than a short, compact SCT. You also need to consider a scope’s focal length and how much it will magnify mount errors.
Furthermore, low-end manufacturers tend to overestimate their mount capacity (and give ratings for visual use), while high-end manufacturers tend to be more conservative and rate their mounts for imaging. If you are imaging, the conventional wisdom is that you should not load your Skywatcher, Celestron, Meade or Losmandy mount beyond 50% of the stated capacity.
The counterweight isn’t counted towards the load capacity of the mount… but it does contribute to the total weight that the gears and motor have to turn, and so to the longevity of the mount.
It isn’t just about the weight. Two factors to consider are the total weight (excluding the counterweight) and also where it is located, i.e. Mass * Radius * Radius.
It is easier for the motor to guide/rotate if the weight is closer to the point of rotation. So given the choice between weight hanging off the end and weight closer to the point of rotation, the latter is preferable.
Hanging a guide scope off the side of the main scope is an example of adding weight in the wrong place… the problem is that it’s not easy to do anything better.
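The Mass * Radius * Radius point above can be sketched numerically. The masses and distances here are my own illustration, not from the notes:

```python
# Why weight close to the rotation axis is easier on the motor:
# resistance to rotation (moment of inertia) scales with mass * radius**2,
# so the same mass twice as far from the axis is four times harder to turn.

def moment_of_inertia(mass_kg, radius_m):
    """Point-mass approximation: I = m * r^2 (kg*m^2)."""
    return mass_kg * radius_m ** 2

near = moment_of_inertia(1.0, 0.10)  # 1 kg mounted 10 cm from the axis
far = moment_of_inertia(1.0, 0.20)   # the same 1 kg at 20 cm
print(far / near)  # 4.0
```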
Initial Polar Alignment (with equipment on mount):
RA axis alignment can also be done in daylight using a wall or vertical pole, but it seems reasonable to perform this procedure via the following method, using Polaris directly, which will probably be more precise.
Once the RA alignment has been done, the image below shows how the Polaris position in the polar scope is adjusted to correspond to a reading in Polar Scope Align Pro.
An Easy Way To Correct The Polar Alignment After Framed Up Target:
This video is very important as it shows how to fine-tune the precise polar alignment and how to check it once you have done your framing, or during imaging… what I will do is attach the cell phone to the polar scope to set up polar alignment and check it during imaging.
Find the balance point of the scope lengthwise and mount it in the dovetail accordingly. Then, while holding everything, loosen the RA axis clutch, position the RA bracket horizontal to the mount, and set the counterweight so the entire rig balances.
Balance first in RA with the equipment fully mounted and the lens extended to the focal length (mm) you will use for your imaging. Now loosen the RA and Dec screws and aim at the region of sky containing the object to be imaged. Tighten the Dec screw. Notice which way the rig tilts and slide the dovetail plate accordingly to get a balance… test it by rotating the equipment with the RA screw loose: it should not move in any position you place the equipment along the RA axis.
Gimp, Photoshop, StarTools, Raw Therapee, 32 bit processing….
also links to excellent and key tutorials to follow as you process your images
🔭😊⭐️This is a compilation of ongoing notes that I am creating and continually tweaking on the topic of image processing… as I follow the arduous path of learning this difficult but wonderful skill. Associated and included are key sites that I have found particularly useful.
Hope this will serve equally useful to you, my fellow hobbyists…
I have been using the older Luminar 2018 version for some time and actually go to it to tweak very specific regions for detail, luminosity, saturation, contrast… essentially all these attributes and many more, applied to specific regions with a brush adjustable in size, softness and opacity… so you have the ability to make global or local adjustments using a brush, with or without the option of doing so in layers as well.
I prefer to work initially in 32 bit image processing… so currently I use Gimp and RawTherapee. Working in 32 bit preserves more detail through your initial stretches and hence gives more dynamic range in the final result…
I take the post processing as far as I can with these programs in 32 bit… then go to a combination of other software for final tweaking, like Photoshop, Siril and Luminar, which save in 16 bit. I have found that downsizing the image from 32 bit to 16 bit actually reduces the noise in the image, not to mention renders a better dynamic range even after the conversion. In fact it is also a way to spend less imaging time and still obtain a desirable result.
I would love to know your approach to image processing…😊
Here he does not deal with any star color…just size with respect to masking.
……probably best to keep the gain low and use short exposures and stack lots of frames. Then, if you want to go really deep just add some frames with longer exposures and create an HDR. However, you can get perfectly adequate images without going to HDR.
I too struggle with this, but I have found that shorter subs help. I use 150s subs for the RGB background. After you’ve blurred the stars to get the colour, try a little colour saturation (in your program of choice) and then use a sharpening tool to make them nice and solid again. I find if I do that, followed by a size-reduction step or two for the larger stars, the stars start to look a bit more colourful.
Use Noel’s ‘increase star colour’ along with a decent Gaussian blur to colour up but soften the stars in the bottom layer.
Set the top layer to ‘luminosity’ to restore sharpness.
The thing that made the difference for me was making a mask allowing work on just the bigger stars, then switching to Lab mode and using the saturation slider a little… I think it’s come out pretty good compared to my first go. It does something odd to the yellow stars… I got a few bicolour yellow-blue stars that had to be erased! I have a way that works.
*❤️Read that you should make a copy of the original image and, from that copy, first work on the stars in the image to get colour, and then take out the stars as a layer.
Maybe during the process of stretching the image you could select the stars, then copy and paste them into a new layer… that is, make them visible in that layer… and turn those layers off until needed. You could then recombine the stars into a single stars-only layer to retain the size and star colour, or use opacity changes to bring out the stars. You can also merge these star layers and make a star mask to further process the main image without affecting the size of the stars… that way, when the star layer or layers are made visible again, there will be no halos or star colour changes affecting the star layers’ contribution to the overall final image.
How to add back to the image the removed stars:
Process the No Stars image in the usual way, then process the original image so that just the stars show and not too much background nebulosity. Then copy this image, paste it in a new layer over the top of the starless image, and set the blend mode to either Lighten or Screen… whichever one works best. (Not sure, however, how the Screen effect in PS or similar software would affect the colour.)
This one applies to Gimp… make a duplicate top layer, turned off, in case you get fringing, so that you can increase a Gaussian blur… a step the other Gimp tutorial does not perform.
This technique is good to follow and parallel in Gimp for diffuse nebula RGB or other colour images, to preserve star colours and size while stretching to bring out detail in the nebula. But it does not talk about manipulating colours.
Overall, this video would be the main one to follow, covering all aspects of the regions to control when post processing an image.
This video shows how to select only the stars in order to increase star saturation. Also, by using both a star mask and the selective and inverted mask of the more compact nebula or galaxy, you can work on the main regions of background, nebula or galaxy separately.
Note that you could also use the Gimp Quick Mask technique to select or invert regions for specific processing, but the stars would be handled per this video.
❤️Gimp does not have arcsinh in its stretch tools, but you can do two arcsinh curves with a levels adjustment in between in Gimp, as in this tutorial… note, though, that the layer mode has to be changed… see the tutorial. This will allow a good stretch without loss of colour.
The alignment tool works very well… note that among the astronomy plugins for Gimp there is an alignment script which might be worth trying.
This alignment procedure will be necessary as I have not yet been able to align different exposures in DSS… so after stacking the images separately in DSS and post processing them separately, follow this step below to align the different-exposure images in Gimp…
Taking two images, one with a faster exposure to reveal detail that would otherwise be blown out… these two following tutorials are excellent: through the use of layers and associated adjustments they allow the creation of more dynamic depth in the image. Done in PS as in the tutorial below, this could also be done in Gimp, where the added advantage is being able to work in 32 bit… make sure the images are aligned first!
Use Selective color in different ways to change colours and make masks
❤️❤️❤️When your regions are difficult to isolate, you can choose Selective Color (see specifically the 11:06 point in this video) to make a mask according to the selected colours, and change the colour or any other characteristic of that region.
Nik Collection…see exe files on Desktop (portable computer)
The ColorEfex.exe app in the Nik folder on my desktop is the best of all of the Nik apps and works on 32-bit files…
HDR imaging is handled well by the Nik collection… choose bracketed imaging on the DSLR or take a series of different exposures with the ZWO.
(The images would have to be aligned first and can do this in Siril)
The HDR program in this collection would probably be good for getting rid of the reflective regions on the moon….just take different exposure sets and stack them individually in DSS.
….❤️but this procedure will not let you separately adjust and tweak the different exposures in an ongoing way before they are merged… so it is better to align the different exposures in, say, Gimp and tweak the individual exposure layers there. This method, combined with imaging with a red and/or an IR-pass filter, will probably work very well.
Luminosity masks are basically layer masks that are built around specific tones in an image. They are derived from the image data itself, and focused on a specific range of tonal values. … Being able to selectively modify an image based on particular tonal regions can be a very powerful method of editing.
Note there is a luminosity-mask creation script in Gimp. This can be used in the same way as selective colour in Gimp for processing very specific regions otherwise hard to isolate with the lasso or other selection tools… also note that the luminosity masks automatically created by the script can be further modified with curves and levels.
Try using the luminosity masks created automatically with the script in Gimp to perform precise, selective work on the image… but you will need a regular mask to protect specific regions or shapes in the image.
Creating the Masks. To begin creating the masks we will need to first get a luminosity representation of the image. This is easily achieved by duplicating the base layer, and desaturating it using Luminosity as the conversion option.
The Quick Mask
You can use this to select or protect regions, or to paste these regions onto other layers… good for galaxy images and maybe star clusters like M13. These regions need to be feathered… or is that done automatically?
*❤️Could probably use this to invert the galaxy mask selection to select the background, then use either desaturate or levels to darken the background, or the eye dropper to set black levels… could also do a Gaussian blur to regulate the background night sky against pollution or dust.
A method that can select the background for processing to make it darker and free of dust or light pollution: this involves drawing the lasso around the object… you must apply a feather of, say, 150 pixels or less so the regions will not look harsh.
Buying a telescope does not stop there… it is essential to know how to star test and to collimate your instrument in order to keep it operating at peak performance. A simple star test can reveal not only improper alignment but optical defects as well.
Every amateur astronomer should become familiar with star test images. This fundamental test, as a tool, will allow you to diagnose optical quality and the adequacy of the alignment, and to determine whether a night is appropriate for visual or imaging pursuits.
Don’t assume that a new telescope is perfectly collimated. With temperature changes and transport, the alignment can also become less optimal with time, significantly affecting the quality of both visual observation and imaging.
Factors such as tube currents, atmospheric turbulence and pinched optics will affect the star test pattern but do not indicate faulty optics. Flaws in the optics themselves will often show up as a combination of causes.
The Star Test
Key points to remember before starting the star test
Choose a location away from the city and a night of low atmospheric turbulence. If you look up and see twinkling stars… it is not a good night for star testing.
The cool-down time for a telescope, until tube currents are minimal, will vary… for the average-size telescope, about 3 hours.
Star testing should be done using a Wratten number 58 (dark green) or a Wratten number 15 (dark yellow) filter. (An explanation of the use of filters can be found in the literature cited at the end of this article.)
Do not use a Barlow or diagonal… additional glass added to the light path increases the likelihood of induced aberrations.
Use a well corrected eyepiece (Nagler, Radian, Plossl or Ortho)
Use an eyepiece that gives at least 50x magnification per inch of aperture of the telescope.
Choose a relatively bright star, such as Vega, Capella or Deneb, on which to perform the star test.
Make sure the star is centred in the eyepiece to avoid possible effects of aberration in your telescope.
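A minimal sketch of the “50x magnification per inch of aperture” guideline above, using the standard magnification formula (telescope focal length divided by eyepiece focal length). The 1000 mm, 8-inch example telescope is my own illustration:

```python
# Longest eyepiece focal length that still meets the guideline of at least
# 50x magnification per inch of telescope aperture.
# Magnification = telescope focal length / eyepiece focal length.

def max_eyepiece_fl_mm(telescope_fl_mm, aperture_inches, per_inch=50):
    """Eyepiece focal length (mm) giving exactly per_inch * aperture magnification."""
    required_mag = per_inch * aperture_inches
    return telescope_fl_mm / required_mag

# Example: a 1000 mm focal length telescope with an 8 inch aperture
# needs 400x, i.e. an eyepiece of 2.5 mm or shorter.
print(max_eyepiece_fl_mm(1000, 8))  # 2.5
```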
The Star Test Method
Focus and center the star in the eyepiece.
At focus, excellent optics will be indicated by a bright, round, central pinpoint of light called the Airy disc, surrounded by 2 or 3 round, concentric rings that fade progressively to the outer edge. The innermost ring should be about 7 percent of the brightness of the Airy disc, the next ring out about 3 percent. The rings at each level should have even intensity.
***Note: Atmospheric turbulence can render this pattern less than ideal… in this case you may see the rings as described, but there will be gaps of lower intensity.
Now compare inside and outside focus at the level of about 5 or 6 rings. As you are star testing with one of the mentioned filters the results will be more easily interpretable than in white light.
(For instance, testing an Apo refractor in white light, you would find that one side of focus looks softer and less defined than the other.)
Testing with the filter, the inside and outside focus should appear the same…round, concentric rings, evenly distributed in intensity, texture and shape, comparing particularly the outside ring of both sides of focus which ideally should be identical.
***Note: A slight difference between inside and outside focus still implies good optics.
If your scope is well collimated, the light rings in the defocused star image will be symmetrical and collapse down concentrically as focus is approached.
Racking out to 10 rings, the round, concentric rings should show even brightness. Bright zones are indicative of poor optics… as in mass-produced telescopes.
At this point, test for similarity between the inside and outside focus at the 1 and 2 ring levels. Essentially, the aberrations are being compressed down to this level.
***Note: Seldom does a telescope pass the first-ring-level test for optic quality.
One night of star testing does not render a dependable assessment of collimation or optic quality.
There are no perfect optics. The star test can be subjective and does require a certain amount of practice and experience.
The interpretation of the star test must take into account the expected quality of the optics… at the price level of the telescope. If you are questioning the quality of the optics in this regard, contact your dealer and also compare your results on forums with owners of the same telescope.
You may familiarize yourself with the characteristics, and the associated images, of proper collimation and optical quality in the literature:
Recently, I asked a question on my Facebook Group, Astronomy Notebook, about the creation in post processing of the true representative colors of an object. The following is what I have come to understand from the answers I have received and have subsequently read on the subject….
An imaged object can vary in shape and detail depending on the filters used…..Essentially I guess, it all boils down to which filters are being assigned to the different channels and for what purpose…purpose is the keyword….different filters bring out different details in the object. For this reason, the resultant variations in image color are no less representative of the true nature of the object.
The overwhelming natural color in space is green, and so I guess it is legitimate to compensate in post processing to make the image more visually interesting… also, because there are different wavelengths within the green part of the spectrum, these variances in degree or shade of color can fortunately be translated, by grey-scale or filter imaging, to bring out representative chemical variances in the object and the different inherent details.
In addition, astrophotography, as with any art, has its artistic side, where one can add the impressions of the creator to the mix… a rather complicated affair.
Here is a good article which I have read on the subject…maybe you could suggest more?
Photographing The Milky Way does not require specialized equipment but does require some planning. This post will discuss imaging of The Milky Way using the DSLR. Camera sensors are becoming more sensitive these days and smartphones are demonstrating their abilities in astrophotography as well … a topic for a future post.
What Is The Milky Way
The Milky Way Galaxy is home to 400 billion stars. The Milky Way could once be seen from all over the world, but today it is best observed from locations far from pollution and city lights. To the naked eye it presents itself as a misty cloud speckled with dark patches, due to interstellar dust blocking light from beyond. We are viewing the galaxy edge-on and see only the spiral arm in which Earth is located. Depending on the time of year and our location on Earth, the bright center of the Galaxy is visible near the constellation Sagittarius. The brightest part of the northern Milky Way lies just below Deneb, a star in the constellation Cygnus: the North America Nebula… a rewarding view with binoculars… see also The Best Milky Way Targets for Stargazers.
If we could follow the Milky Way below the horizon it would be seen as a complete ring encircling the Earth.(The Milky Way 360 degree image : Popular Science )
Best Time To Photograph The Milky Way
In the Northern Hemisphere, look towards the southern skies to see the galactic core of the Milky Way. The core will start to become visible due southeast (spring), due south (summer), or southwest (fall). The core is visible from March to October, but the best time for viewing it is from late April to late July, when the galactic center is visible for longer during the night. The months of June and July are the best.
In the Southern Hemisphere, look towards the southern skies as well. In this case, the core will start to become visible due southwest (spring) or southeast (fall and winter). The core is visible from February to October, and it is in the middle of winter, June and July, that the core is most visible.
A new moon with clear skies is the best time to image the Milky Way. Ideally choose a night within 4 days before or after the new moon. In general, the clearest skies occur following the passage of a cold front. In North America, the Dark Sky Finder and the Clear Sky Charts will help you choose an area away from city lights and check the astronomer’s weather forecast. Elsewhere in the world use the Dark Site Finder, The World Atlas Of Artificial Night Sky Brightness and the Blue Marble Navigator.
Plan Your Photography Session
Programs such as Stellarium, SkySafari, Ephemeris, PhotoPills or PlanIt help to plan the time of shooting with respect to the angle and elevation of the Milky Way. Using these apps allows you to plan the time and location, and whether you want the Milky Way to be vertical, diagonal or horizontal in the sky. As far as composing the image is concerned, and according to your taste, it may be favorable to include a little of the horizon… for this reason, going to the chosen site before nightfall will allow you to plan your imaging with respect to the landscape.
Since the band of the Milky Way spans such a wide stretch of sky, the best approach to capture the full band, and in relatively greater detail than a single or stacked wide-field image, is to take a series of single images for each frame and use software to stitch the images together into a panorama. Stacking multiple images for each frame, in either approach, will create a more detailed final image. However, this process entails additional overall exposure time, and tracking would be required to offset star trailing.
The Choice Of DSLR And Lens
The light gathering ability of a camera does not directly depend on whether it has a full frame or crop sensor, but on the focal length of the lens used and its aperture diameter. The full frame sensor has a bigger potential light-catching ability, but the amount of light actually reaching the sensor varies with the choice of lens, depending on its focal length and aperture diameter. The reason full frame cameras are useful is that they give one the choice to use a lens of longer focal length, and hence larger light-collection capacity, than crop sensor cameras allow.
For any given camera, the focal length of the lens and its aperture diameter will dictate the allowable exposure with regard to star trailing. The resultant image from a 15mm f/2.8 lens, with an aperture diameter of 5.4mm (15 divided by 2.8), will gather less light and be noisier than one from a larger-aperture lens such as a 24mm f/1.4 or a 35mm f/1.4, for example… but those come with a smaller field of view, which can be amended by creating a series of overlapping images stitched together as a mosaic… a panorama. The longer the lens focal length, the more images needed to shoot the mosaic and the greater the detail of the final image.
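The aperture-diameter arithmetic in the paragraph above, as a small sketch; the roughly 10x light-gathering figure follows from the ratio of aperture areas, which scale with the square of the diameter:

```python
# Aperture diameter = focal length / f-ratio; light gathering scales with
# the square of the diameter (i.e. with the aperture area).

def aperture_diameter_mm(focal_length_mm, f_ratio):
    return focal_length_mm / f_ratio

d1 = aperture_diameter_mm(15, 2.8)  # ~5.4 mm, as in the text
d2 = aperture_diameter_mm(24, 1.4)  # ~17.1 mm
print(round(d1, 1), round(d2, 1))
print(round((d2 / d1) ** 2, 1))  # the 24mm f/1.4 gathers ~10x the light
```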
The larger the pixel size of a camera, the greater the resultant noise due to dark current… not significant for short exposures such as imaging the Milky Way on a camera and tripod, but definitely evident in long-exposure deep-space astrophotography.
The DSLR Setup
Use a tripod which has a bubble level so you can make sure the camera is shooting from a level position. The tripod needs to have a head with a rotating base.
Mount the camera vertically to get as much of the landscape and sky as possible in each image.
Choose a lens between 14 and 50mm in focal length, ideally with a very wide aperture such as f/1.4 or f/2.8. However, the kit lens that comes with the DSLR, usually an 18-55mm f/3.5-5.6, is quite adequate to begin with.
Cover the view finder to prevent stray light from entering the camera.
Set the camera to manual mode.
For most DSLRs the image stabilization has to be turned off.
Set the white balance to a temperature of 3800K, or use one of your camera’s presets, such as daylight. This is not required if shooting in RAW but does help when viewing the LCD monitor.
An intervalometer remote timer is preferable.
Shoot in RAW as it will give better noise reduction and shadow boosting in post-processing.
Turn off long exposure noise reduction, as it will add to the exposure time and make stitching the resultant images more difficult due to possible star trailing. Instead, to deal with noise, take 10-30 dark frames for every shutter/ISO combination used… to be applied to the light frames in post-processing.
Note that if you are shooting in RAW (unlike JPEG) the white balance does not need to be set, but setting it will make viewing on your camera’s LCD screen much better, especially when checking the histograms of your test shots.
Before the shot, focus the camera using the LCD live view by zooming in on a bright star. Better still is the use of a Bahtinov mask. For a full frame camera, use the “Rule of 500” to find the exposure time (500 divided by the lens focal length = exposure in seconds). If your camera has a crop sensor, find the exposure time as follows: 500 divided by (the lens focal length x crop factor) = exposure in seconds.
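The “Rule of 500” from the paragraph above as a small helper; the 18mm lens and 1.6x crop factor in the example are my own illustration:

```python
# "Rule of 500": rough maximum exposure before star trailing becomes
# visible, for both full frame (crop_factor=1.0) and crop sensors.

def rule_of_500(focal_length_mm, crop_factor=1.0):
    """Maximum exposure time in seconds."""
    return 500 / (focal_length_mm * crop_factor)

print(round(rule_of_500(18), 1))       # full frame, 18 mm lens -> 27.8 s
print(round(rule_of_500(18, 1.6), 1))  # APS-C, 1.6x crop       -> 17.4 s
```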
Perform test shots on the brightest and darkest regions. Use the highest native ISO that does not cause any blowouts or over-exposure of the image… check the histogram of the test images. A good starting ISO for test shots would be 3200.
The Panorama Method
Determine the “no parallax point” (it used to be called the nodal point) for the lens focal length you will be using for your panorama. The no-parallax point is the point where the light rays cross before hitting the sensor. This point will differ depending on the lens and its focal length. It is around this point that the camera should be rotated when imaging for a panorama, because it allows each consecutive image to be taken from the same point of view, facilitating accurate stitching of the images. Finding the “no parallax point” is accomplished by using an L-bracket and a rail attached to the tripod, as in the following YouTube video tutorial: How to determine lens nodal point for panoramic photography. Setting up the “no parallax point” is not required for very short focal-length lenses or for photography lacking foreground. If landscape is to be included in the panorama, setting the “no parallax point” is important, but if there is only sky in the image, it is not a factor.
When taking the shots, start one frame further to the left and bottom than you think you will need for the final image, and extend one frame further to the right and top as well… because you will almost certainly want room to crop the image in post processing.
Each frame should have between 30 and 50 percent overlap with the previous image, to facilitate stitching later (some photographers say that 50% is really too much). A tutorial, Stacked Astrophotography Panoramas from lonelyspeck.com, shows how… and what is learned in this tutorial can be applied generally with the software of your choice. A panorama of multiple stacked images will require a tracking mount to prevent star trails and would no longer be a simple tripod affair, unless the lens is of short focal length (14-18mm) and the images are limited to about 4 per frame.
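A hedged sketch of the frame-count arithmetic implied by the overlap advice above, assuming a rectilinear lens on a full frame (36mm-wide) sensor and 40% overlap; the 24mm lens and 180-degree span are my own illustration, and the function names are hypothetical:

```python
# Estimate how many frames one panorama row needs, given the lens field of
# view and the fractional overlap between neighbouring frames.
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view of a rectilinear lens on a full frame sensor."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def frames_needed(span_deg, fov_deg, overlap=0.4):
    """Frames required to cover span_deg, each stepping fov*(1-overlap)."""
    step = fov_deg * (1 - overlap)
    return math.ceil((span_deg - fov_deg) / step) + 1

fov = horizontal_fov_deg(24)    # ~73.7 degrees for a 24 mm lens
print(frames_needed(180, fov))  # frames to span 180 degrees -> 4
```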
Stacking, Stitching And Further Post-Processing
Pixel Fixer is a great program for this if it supports your camera model, because it can work on RAW files. Other programs, like StarStaX, can also use dark frames as TIFFs. More dark frames make for better analysis, but not every program can do this.