Watching the moon without the blurry view? Answered
We put our biggest telescopes on the highest mountains and as far away from civilisation as possible.
The reasons are light pollution and atmospheric distortion.
My days of playing with a big telescope in the garden every night are long over; life moves on once you need a job to make a living and something like a family.
A while ago I had a look through a friend's telescope at night again, when he asked me to give him a hand connecting it to Wifi, computer, printer and so on.
Just imagine a big telescope with built in camera and Wifi.
Checking the supposedly astonishingly clear pictures of the moon we took, I started to wonder...
Being able to get a "close up" so to say of a little crater on the moon is nice.
But I missed the clarity.
If you take a picture with your phone or camera with perfect settings, then you get the best possible picture.
Not so much for the moon.
Anyone who has ever used a telephoto lens on a camera in haze or fog will know the effects the atmosphere can have.
The moon is basically a boring grey, so BW cameras would be sufficient and would also eliminate all the bad side effects of colour imaging chips.
Reasons other than our atmosphere that can prevent a clear picture...
The earth rotates and the moon moves across the sky; sadly, this makes tracking hard over such a distance.
Constantly changing brightness levels make the right exposure a nightmare, especially if you want to combine images into a composite without the shadows.
So why not use what we already have to eliminate or compensate most of the issues we have?
The hardware required for the tracking of stellar objects and computer control should not be an issue anymore.
Once your position is synchronised with the current position of the moon, the computer would be able to calculate and track it with extreme precision.
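To get a feel for the numbers involved in that tracking: the moon drifts eastward against the stars, so a moon tracker has to run slightly slower than a star tracker. A back-of-the-envelope sketch using mean orbital values (the real rate varies with the moon's elliptical orbit and your location, so treat these as rough figures):

```python
# Rough tracking-rate arithmetic using mean values; the true lunar
# rate varies along the Moon's elliptical orbit.
SIDEREAL_DAY_S = 86164.1          # one full rotation of the sky, in seconds
LUNAR_SIDEREAL_DAYS = 27.321661   # the moon's orbit relative to the stars

ARCSEC_PER_TURN = 360 * 3600

sidereal_rate = ARCSEC_PER_TURN / SIDEREAL_DAY_S               # ~15.04 "/s
lunar_drift = ARCSEC_PER_TURN / (LUNAR_SIDEREAL_DAYS * 86400)  # ~0.55 "/s

# The moon drifts eastward against the stars, so a moon tracker must
# run slightly slower than a star tracker.
moon_rate = sidereal_rate - lunar_drift                        # ~14.49 "/s
print(round(moon_rate, 2))
```

Half an arcsecond per second of difference does not sound like much, but over a long exposure session it is exactly what smears the image on a star-rate mount.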
Using highly sensitive BW cameras might be a short term fix; better would be a chip and lens system sensitive only to the visible wavelengths with the least atmospheric distortion.
The last step would be to actually use an AI to track every pixel as it is taken and to compare it with previous pixels taken in that same spot, checking for matches in brightness and wavelength.
A simplified version of this is still in use for the restoration of old films from the analog days.
The moving film meant that the individual frames were not always perfectly aligned in their positions.
If digitised like this the result would be a quite blurry film.
An algorithm checks for the usual up-and-down differences and matches the positions, so that in reference to the selected frame rate it results in the sharpest possible image.
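That position-matching step can be sketched with phase correlation: a frame that is a shifted copy of a reference frame produces a sharp peak in the cross-power spectrum, right at the offset between the two. A minimal numpy sketch on toy frames (not real footage, and whole-pixel shifts only):

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the (dy, dx) pixel shift between two grayscale frames
    using phase correlation (FFT-based cross-correlation)."""
    F_ref = np.fft.fft2(ref)
    F_frame = np.fft.fft2(frame)
    cross = F_frame * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12   # keep only the phase difference
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point are negative shifts (wrap-around).
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Toy check: shift a random frame by (3, -5) and recover the offset.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
frame = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(estimate_shift(ref, frame))
```

Once the offset is known, each frame can be rolled back into place before any combining happens.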
For the moon there is always a majority of frames with less distortion, giving us the impression of the image we see.
The key is to have an algorithm that can detect and filter out those frames, or if powerful enough those pixels, with the most distortion.
Quite similar to the steady function on your cameras and phones these days, at least for the digital part.
Just in reverse so to say.
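Detecting the most distorted frames does not need anything exotic; a common trick in "lucky imaging" is to score each frame by the variance of its Laplacian (blur flattens edges, so blurry frames score low) and keep only the sharpest few percent for stacking. A toy sketch, with a checkerboard standing in for lunar detail:

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian: blurred frames score low."""
    lap = (-4 * frame
           + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
    return lap.var()

def keep_sharpest(frames, fraction=0.1):
    """Keep only the sharpest fraction of frames for later stacking."""
    ranked = sorted(frames, key=sharpness, reverse=True)
    return ranked[:max(1, int(len(ranked) * fraction))]

# Toy check: a crisp checkerboard versus a box-blurred copy of it.
crisp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
blurred = (crisp + np.roll(crisp, 1, 0) + np.roll(crisp, -1, 0)
           + np.roll(crisp, 1, 1) + np.roll(crisp, -1, 1)) / 5
print(sharpness(crisp) > sharpness(blurred))
```

The same score works per region instead of per frame if you cut each frame into tiles first, which is how the "pixel level" version of the idea is usually approximated.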
With perfect and smooth tracking a frame by frame collection of the area you want an image of would result in a much clearer and detailed image.
A bit like HDR works now.
You take a lot of images and combine them to get the best possible details on all light level and all colors.
We only need BW though...
How would an algorithm or an AI be able to make an image clearer and more detailed?
Currently we already compensate for camera movements, brightness levels, focus, colors and much more through the AI of our phones.
Where someone with a DSLR or the good old SLR and real film needs to find the perfect setting for a shot in the amount of time the shot is available we just click a button on the screen and the AI does the rest for us.
Sure, instead of three or four pictures taken as quickly as your phone can manage, our MOON-HDR algorithm might need a few thousand pictures to create one good image and several hours to do so....
But hey, it is up there every night and day, we can perfectly track it and automate the process over days or weeks if we want to capture a full cycle.
If it is that easy, then certainly the big telescopes on those mountains would make use of it!!
Well, they actually do, but for the sole purpose of deep space exploration.
They have to deal with the movements of the earth and of objects in space, referenced against the possible errors of the hardware...
We only need to care about pixels moving in a fixed image...
The algorithm in the simple form works on a frame by frame base, the proper one on a direct pixel one.
Either way, the collected images will create piles where all checks match.
More piles mean more and more areas to overlay, like in a stitching process.
More overlays mean more "filters" to check the original images against.
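Collapsing one of those piles into a single image is where the gain comes from: averaging many matched frames of the same spot leaves the fixed surface detail untouched while random noise shrinks by roughly the square root of the pile size. A numpy sketch with synthetic frames (the noise level and frame count are just example values):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.random((32, 32))          # the "real" lunar surface patch

# 100 aligned frames of the same spot, each with its own random noise
# standing in for atmospheric shimmer and sensor noise.
frames = [truth + rng.normal(0, 0.2, truth.shape) for _ in range(100)]

single_err = np.abs(frames[0] - truth).mean()

stack = np.mean(frames, axis=0)       # one pile collapsed into one image
stack_err = np.abs(stack - truth).mean()

# Averaging N frames shrinks random noise by roughly sqrt(N), here ~10x.
print(round(single_err / stack_err, 1))
```

This only works because the frames were matched first; average unaligned frames and the detail averages away along with the noise.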
The amount of light hitting the moon is pretty predictable and easy to factor in for an AI.
The amount of light loss by the atmosphere at the time the image was taken could be factored in by the means of external atmospheric sensors.
In the most basic form a "ping" with a laser onto a satellite on a fixed position between telescope and moon, or close by.
The way the Google AI tries to find more and more links to build more and more references about our internet use, our algorithm collects pixel information for a fixed position in a digital image.
Quite simple once you think about it, isn't it? ;)
It is just a matter of time and amount of data being collected.
What was already possible in the days of converting analog movies into the digital world can't be too hard to do with an object that we can literally nail to the wall in terms of images....
So why isn't anyone doing it already?
In theory anyone could just hire some time on a quantum computer or old style super computer and use their own algorithms to sort the images.
In reality that would quickly cost you a few million.
Graphics cards are already well in use for things other than bringing an image to a monitor.
They are used as calculating workhorses - bitcoin mining is a prime example here.
Medical and scientific use another.
Here it is quite common for clusters of powerful graphics cards to do nothing but run simulations.
Already a form of AI, although quite simple.
There is no huge market for telescopes, let alone big and powerful ones for the sole purpose of watching the moon.
Why bother, it is up there every night, but all the stars and planets up there need to be watched first....
You will have a hard time finding a manual telescope with a simple way to follow the moon through the night.
Watching the Andromeda nebula on the other hand is quite easy...
Worse still for the cameras.
What is out there for use with telescopes or integrated into one is optimised to get the most light out of everything that is quite dim.
Even your basic telescope usually has a moon filter already because the thing is so damn bright.
Professional BW cameras would need expensive and custom made adapters to fit a telescope's optics.
Last but not least are the tracking mechanics, which are simply not designed to follow the moon, neither in its movement nor in the speed it moves at.
But as said, the tracking could be fixed with a custom version designed for watching the moon and a little microcontroller driving very smooth running motors and gears.
Is a fancy smartphone the way out for the images?
Not really, unless you want to constantly fight to get and keep manual control of the settings for the BW camera on the back.
And they are not really designed to be used like that for hours every night.
The only way out is size and speed.
What would the perfect telescope/camera combination look like?
Atmospheric interference means the light is scattered.
And ideally that should be the only thing the AI has to worry about later on.
Sadly, when we take a picture we need a certain exposure time to capture enough light.
This means collecting light over time from something that, in effect, behaves like a fast moving object.
You just can't easily get a clear image of spinning fan blades these days...
Shorter exposure times are only possible with enough light.
There are two ways to get enough light for a given exposure time.
Add some more light or use a bigger lens.
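The "bigger lens" option is easy to quantify: the light gathered scales with the aperture area, so for the same image brightness the exposure time scales with 1/D². A quick sketch (the 80 mm and 200 mm apertures and the 10 ms exposure are just example values, not a real setup):

```python
# Light gathered scales with aperture area, so for the same image
# brightness the exposure time scales with 1 / D^2.
def exposure_scale(d_old_mm, d_new_mm):
    """Factor to apply to the exposure time when changing aperture."""
    return (d_old_mm / d_new_mm) ** 2

# Hypothetical example: moving from an 80 mm to a 200 mm aperture
# shortens the needed exposure by a factor of (80/200)^2 = 0.16.
t_old_ms = 10.0
t_new_ms = t_old_ms * exposure_scale(80, 200)
print(round(t_new_ms, 1))
```

Since we cannot add light to the moon, aperture is the only knob that buys shorter exposures without losing signal.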
Our standard telescopes are designed for eye pieces for, well, our eyes.
Imagine a huge 40 megapixel sensor perfectly positioned in a telescope.
Using only optics to compensate for image distortion and to match the incoming image size to the sensor size.
No telescope optics and camera optics fighting for the right position, just a system perfectly matched for the purpose of taking images of the moon...
If said sensor were highly sensitive, most people would cry out and say the moon is too bright - but we want to use every bit of light to lower our exposure times ;)
Imagine a high speed camera only taking BW pictures of the moon.
At a speed of over 2000 frames per second....
Why bother with overcomplicated tracking if you can take millions of images every night.
You will be worrying about storage and computing problems after a single night already LOL
The amount of pixels of near identical properties in the same spot would soon be much higher than the amount of blurry pixels with different values.
And the AI would still be able to match those blurry ones with frames from other images.
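With thousands of frames of every spot, the "majority of near-identical pixels" can outvote the distorted ones with nothing fancier than a per-pixel median. A toy numpy sketch in which a handful of pixels in every frame are deliberately knocked off their true value:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.random((16, 16))          # the "real" brightness of each spot

# 50 frames of the same patch; in each one a handful of random pixels
# are corrupted, standing in for local atmospheric distortion.
frames = np.repeat(truth[None, :, :], 50, axis=0)
for f in frames:
    bad = rng.integers(0, 16, size=(10, 2))
    f[bad[:, 0], bad[:, 1]] += rng.normal(0, 1, 10)

# Per-pixel median: at every position the clean majority outvotes
# the distorted minority, recovering the true value.
result = np.median(frames, axis=0)
print(np.abs(result - truth).max() < 1e-6)
```

Unlike a mean, the median does not even need the distorted values to cancel each other out; as long as they stay the minority at each position, they simply never reach the middle of the pile.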
But who really wants to see footprints and left over parts on the moon?
We have been there, we have seen it...
And if we go up again we will have live coverage from up close anyway, so there really is no need for anyone to get a clear image of the moon from down here, or is there?