[ExI] Google AI low-resolution photo enhancement

spike at rainier66.com
Fri Sep 10 13:09:33 UTC 2021

...> On Behalf Of Bill Hibbard via extropy-chat

Subject: Re: [ExI] Google AI low-resolution photo enhancement

>...I recall seeing this applied years ago to astronomy images, based on
Bayesian logic. Given a low-resolution image low, find the high-resolution
image hi that maximizes

   p(hi) * p(low | hi)

>...Works well when hi consists of a lot of point sources like stars. When
used for crime-solving, defining p(hi) is a great opportunity for bias. Like,
probably a male face since most criminals are men.
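As a toy illustration of that MAP formulation (everything here is invented for illustration: a 1-D scene, block-average downsampling as the forward model behind p(low | hi), and a sparsity prior standing in for "mostly dark sky with a few point sources"):

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(hi, factor=4):
    # Forward model behind p(low | hi): block-average the 1-D high-res scene
    return hi.reshape(-1, factor).mean(axis=1)

def log_likelihood(low, hi, sigma=0.05):
    # Gaussian pixel noise around the downsampled high-res scene
    resid = low - downsample(hi)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def log_prior(hi):
    # p(hi): a sparsity prior -- most of the scene dark, a few point sources
    return -np.sum(np.abs(hi))

# True high-res scene: two point sources, like stars on a dark frame
true_hi = np.zeros(32)
true_hi[5], true_hi[20] = 1.0, 0.7
low = downsample(true_hi) + rng.normal(0.0, 0.01, size=8)

# Brute-force MAP search over random two-source candidates
best, best_score = None, -np.inf
for _ in range(20000):
    cand = np.zeros(32)
    cand[rng.integers(0, 32, size=2)] = rng.uniform(0.0, 1.2, size=2)
    score = log_prior(cand) + log_likelihood(low, cand)
    if score > best_score:
        best, best_score = cand, score

# The recovered sources land in the correct low-res blocks
print(np.sort(np.argsort(best)[-2:] // 4))
```

Note that within one low-res block the source position is unidentifiable — exactly the gap the prior has to fill, which is where the bias worry comes in.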

Ja, and keep in mind there are algorithms that can give you information you
wouldn't have thought possible.

For instance... consider the image of a person at such a distance that your
camera captures the perp about ten pixels tall and about two pixels wide.
Notice that nearly everyone, if we restrict the set to only men and
testiculated women, falls within about a 10% range in height: we fellers are
nearly all the same height to within about 10%.

So the 10-pixel image is perfectly useless, ja?  No.  Some information can be
extracted from even that, if we have multiple digital images of that perp.

There is a specialty centroiding algorithm I helped develop in my misspent
career years, which calculates the path of a guide star as its image crosses
a digital focal plane.  Regardless of how much the image of a star is
magnified, even with hypothetically perfect optics it is still a point of
light: you cannot resolve a star into an image that looks like the sun.  They
are too far away.  We don't have perfect optics, so the image is measured as
a fuzz ball.

So... the image of a point of light crosses a digital focal plane.  Each
pixel measures the number of photons (of sufficient energy) it sees.  From
the timing of the sequence of images you can calculate the angular velocity
of the spacecraft, and from that you can calculate the brightness of the
guide star.  Then look at the number of photons in the pixel which received
the most, compare that signal with the photons received at the same instant
by adjacent pixels and in subsequent frames, and with that info do a kind of
sparse-matrix centroiding mathemagic trick: calculate the hypothetical track
of that star image to a precision of a small fraction of the angular size of
a single pixel.

Do let me assure you, it works.
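A bare-bones sketch of the sub-pixel part of the idea (the Gaussian PSF, grid size, and numbers here are invented for illustration; the real flight algorithm was of course far more involved):

```python
import numpy as np

def psf_image(cx, cy, size=8, sigma=1.0, flux=1000.0):
    # Simulated fuzz ball: a point source at (cx, cy) blurred by imperfect
    # optics into a Gaussian spot (hypothetical PSF)
    ys, xs = np.indices((size, size))
    spot = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return flux * spot / spot.sum()

def centroid(counts):
    # Photon-weighted mean pixel coordinate: no single pixel resolves the
    # star, but the weighted mean over the spot pins its position down to
    # a small fraction of a pixel
    ys, xs = np.indices(counts.shape)
    total = counts.sum()
    return (xs * counts).sum() / total, (ys * counts).sum() / total

img = psf_image(3.37, 4.81)
cx, cy = centroid(img)
print(f"{cx:.2f}, {cy:.2f}")  # close to 3.37, 4.81 despite whole-pixel sampling
```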

Think about the 10-pixel perp.  The image is useless, ja?  Ja, it is useless
if you have only one image.  Even centroiding mathemagics won't tell you
much.  But if you have a number of images, there is a version of the
centroiding algorithm which allows us to pick off some info from the
10-pixel images, such as the perp's height and bulk, which can be very useful
information if we have a limited pool of suspects.
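A toy version of the multiple-image point (all numbers invented: assume each frame yields an independent height estimate of the 10-pixel perp with roughly half a pixel of scatter from quantization and noise):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: the perp's true height subtends 10.3 pixels; each frame
# gives an independent estimate with ~0.5 px of scatter
true_height_px = 10.3
frames = true_height_px + rng.normal(0.0, 0.5, size=100)

one_frame = frames[0]
combined = frames.mean()
# Averaging N frames shrinks the scatter by roughly sqrt(N) -- here from
# ~0.5 px to ~0.05 px, enough to separate suspects of similar height
print(f"one frame: {one_frame:.2f} px   combined: {combined:.2f} px")
```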

Digital images of perps typically have more than 10 pixels.  Use your
imagination.  Regarding the argument that such technology could lead to more
false convictions: on the contrary, done properly it would lead to fewer.
Such estimates wouldn't be used as court evidence, but rather to lead the
constabulary to the right perp.


More information about the extropy-chat mailing list