[ExI] online resources for identifying symbols

BillK pharos at gmail.com
Thu Dec 12 12:14:35 UTC 2013


On Thu, Dec 12, 2013 at 7:10 AM, spike wrote:
> Here’s a fun and shocking game:  take a selfie, paste the photo into Google Images
> and search.  It came back with 24 of the craziest looking yahoos I ever saw, oy vey.
>  I don’t think I look like any of them.  Do you?  And what’s with the black woman third
> row last image?  I know I don’t look like her, no way!  She has such pretty white straight
> teeth.  I see a vague resemblance to the bottom row, fourth guy, or top row second guy,
> possibly third row first guy.  But the rest of these, forget it, I look less like them than I
> look like the drunk cow.

Image processing is still a work in progress.

Google says:
When you upload an image to Search by Image, the algorithms analyze
the content of the image and break it down into smaller pieces called
“features”. These features try to capture specific, distinct
characteristics of the image - like textures, colors, and shapes.
Features and their geometric configuration represent the computer’s
understanding of what the image looks like.

These features are then sent to our backend servers and compared
against the billions of images in our index to see if a good match
exists. When the algorithm is very confident that it’s found a
matching image, you’ll see a “best guess” of what your image is on the
results page. Whether or not we have a best guess, you’ll also see
results for images that are visually similar -- though they may not be
related to your original image.

With the recent launch of the Knowledge Graph, Google is starting to
understand the world the way people do. Instead of treating webpages
as strings of letters like “dog” or “kitten,” we can understand the
concepts behind these words. Search by Image now uses the Knowledge
Graph: if you search with an image that we’re able to recognize, you
may see an extra panel of information along with your normal search
results so you can learn more. This could be a biography of a famous
person, information about a plant or animal, or much more.
--------------
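To make the "features" idea above concrete, here is a minimal Python
sketch using OpenCV's ORB keypoint descriptors. This is only an
illustration of how such matching can work, not Google's actual
pipeline, and the file names are hypothetical:

import cv2

def descriptors(path):
    # Extract local binary descriptors (ORB) from a grayscale image.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    _, des = orb.detectAndCompute(img, None)
    return des

def match_score(query_des, candidate_des):
    # Brute-force Hamming matching; ORB descriptors are binary strings.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(query_des, candidate_des)
    # Count "good" matches: small distance means two local patches agree.
    return sum(1 for m in matches if m.distance < 40)

query = descriptors("selfie.jpg")
index = {name: descriptors(name)
         for name in ["face1.jpg", "face2.jpg", "drunk_cow.jpg"]}
best = max(index, key=lambda name: match_score(query, index[name]))
print("best guess:", best)

The distance threshold of 40 is arbitrary; the point is that the score
counts agreeing local patches (textures, shapes), which is also why
overall colour can contribute so little to the ranking.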


So I would say that Google recognised your image as a human face and
matched it against other human faces. It then decided that it couldn't
find any exact matches (no other fan pictures of you on the web) and
worked through the feature list for near matches. Colour seems to rank
low in significance: as the results are all full-frontal faces, the
position and presence of facial features appear to take precedence.
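That colour-blindness is easy to demonstrate: the standard frontal-face
detectors key on the geometric layout of eyes, nose and mouth in a
grayscale image and discard colour entirely. A minimal sketch with
OpenCV's bundled Haar cascade (assuming a hypothetical "selfie.jpg"):

import cv2

img = cv2.imread("selfie.jpg")
# Detection runs on luminance only; colour is thrown away up front.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("frontal faces found:", len(faces))   # each face is an (x, y, w, h) box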

Facial recognition software is an area of ongoing research. The NSA,
police and security services are already testing or using it. There is
some opposition to making facial recognition generally available on the
web, as it would be invaluable to stalkers and troublemakers. Searching
by someone's face to dig up their embarrassing photos from years past
seems to be restricted to security personnel at present.


BillK



