It really isn’t that hard: if something like a silhouette of mountains is in the background and you have a couple of rough hints that give you an idea where to start or how to narrow down possible locations, no AI is needed.
You’re misunderstanding the post. It’s not about whether someone could guess your location from a picture. It’s about the automation thereof. As soon as that is possible, it becomes another viable vector to compromise your privacy.
And you misunderstand my point: this has always been a way to compromise your privacy. Privacy matters most in the individual case, with people who know you. If you share a picture taken at your home (outside, or looking out of a window in the background) with a friend online, you always had to assume they could figure out where you live if any features like that were in it.
Sure, companies might be able to do it on a larger scale, but honestly, AI is just too inefficient for that right now: the energy cost of applying it to every picture you share, just in case your location might be useful, isn’t worth it yet.
That statement is subjective at best. My friends and coworkers knowing where I live certainly isn’t my concern. These days, privacy enthusiasts are definitely more scared of corpos and governments.
You’re thinking too small. Just in the context of the E2EE ban planned in Europe, think what you could do. The proposed law would scan all your messages before/after sending for specific keywords. Imagine you get automatically flagged, and now an AI is scanning all your pictures for locations, contacts, and whatnot. Just the thought that this might be technically possible is scary as hell.
Governments won’t scan all your pictures to figure out who you are. They’ll just ask (read: legally force) the website/host where you posted the picture for your IP address and/or payment info, then do the same with your ISP/payment provider to convert that into your RL info.
And you might not be worried about your RL friends or coworkers, but what about people you meet online? Or everyone able to see your post on some social media site?
Nobody is going to mass-scan the pictures you post for information that, once discovered, stays valid for a long time afterward. Governments and corporations have long had the means to discover who you are once.
If I ever upload photos publicly, I will add a background blur first.
There are techniques to deblur. It’s even how a prolific child sex offender was caught.
Anti Commercial-AI license
I mean, I’m sure it depends on how it’s blurred.
True, but that just turns into a cat-and-mouse game. Also, once the photo is up, the blurred background doesn’t change with time --> wait long enough and a technique to unblur it will be developed.
Anti Commercial-AI license
You can’t just program data that doesn’t exist into existence.
I remember a paper (or model?) from 1–2 years ago that reversed blurred images. It’s similar to how ML-based object removal and inpainting work. Granted, it only works for specific blurring algorithms.
Some blurs are reversible, and some aren’t. Some of them do a statistical rearrangement of the data in the blurred area that’s effectively reversible.
Think shredding a document. It’s a pain and it might take a minute, but it’s feasible to get the original document back, give or take some overlapping edges and tape.
Other blurs combine, distort, and alter the image contents such that there’s nothing there to recombine to get the original.
A motion blur or the typical “fuzzy” blur falls into the former category and can be directly reversed; statistical techniques and AI tools can be used on the latter to reconstruct, because the original data is still there, or enough of it is that you can make guesses based on what remains and context.
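To make “directly reversed” concrete: a linear blur is just a convolution, and convolution becomes pointwise multiplication in the frequency domain, so absent noise (and zeros in the kernel’s spectrum) you can literally divide it back out. A minimal NumPy sketch on a toy random “image” with a made-up mild kernel, not a real photo pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))              # stand-in for a grayscale image

# Toy kernel: half identity (keeps the spectrum away from zero) plus a
# half-strength 3x3 Gaussian-style blur, padded to image size.
g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
kernel = np.zeros((64, 64))
kernel[0, 0] = 0.5
kernel[:3, :3] += 0.5 * g

# Blurring = multiplying spectra (this is circular convolution).
K = np.fft.fft2(kernel)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * K))

# Naive deconvolution: divide the blurred spectrum by the kernel spectrum.
# Works here only because there is no noise and K has no zeros.
recovered = np.real(np.fft.ifft2(np.fft.fft2(blurred) / K))

print(np.allclose(recovered, img))      # → True
```

Real deblurring (e.g. Wiener filtering) has to cope with noise amplification where the kernel spectrum is small, which is why it is “reconstruction with guesses” rather than this clean division.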
Pixelating the area does a better job because it actually deletes information instead of just smearing it around, but tools can still pick out lines and shapes well enough to make informed guesses.
Some blurs, however, generate random noise over the area being blurred, which is then tweaked to fit the context of whatever was underneath.
Something like that is impossible to reverse because the information simply is not there.
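The “information is not there” point can be shown directly: pixelation is block averaging, and block averaging is many-to-one, so two different originals can produce the exact same pixelated output and no deterministic inverse can exist. A toy sketch with hypothetical 8×8 arrays standing in for images:

```python
import numpy as np

def pixelate(img, block=4):
    """Replace each block x block tile with its mean value."""
    h, w = img.shape
    means = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1)

rng = np.random.default_rng(1)
a = rng.random((8, 8))
b = a + (rng.random((8, 8)) - 0.5) * 0.1   # a visibly different image

# Shift each 4x4 tile of b so its mean matches a's tile mean.
for i in range(0, 8, 4):
    for j in range(0, 8, 4):
        b[i:i+4, j:j+4] += a[i:i+4, j:j+4].mean() - b[i:i+4, j:j+4].mean()

print(np.allclose(pixelate(a), pixelate(b)))   # → True: identical outputs
print(np.allclose(a, b))                        # → False: different originals
```

Any “unpixelated” result is therefore a choice among the many originals that fit, not a recovery of the one that was actually there.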
It’s like using generative AI to “recover” data cropped from an image. At that point it’s no longer recovery, but creation of possible data that would fit there.
The tools aren’t magical, they’re still ultimately bound by the rules of information storage.
You do realize that a lot of image recognition was done on scaled down images? Some techniques would even blur the images on purpose to reduce the chance of confusion. Hell, anti-aliasing makes text seem more readable by adding targeted blur.
Deblurring is guessing, and if you have enough computing power plus some brain power (or AI), you can reduce the number of required guesses by eliminating improbable ones.
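Eliminating improbable guesses is exactly how Depix-style attacks on pixelated text work: render each candidate, apply the same pixelation, and keep whichever candidate best matches the observed blur. A toy 1-D sketch, where `render` is a hypothetical stand-in for a glyph renderer, not a real one:

```python
import numpy as np

def render(digit):
    """Hypothetical renderer: a fixed 16-sample pattern per digit."""
    return np.random.default_rng(digit).random(16)

def pixelate(sig, block=4):
    """Average each run of `block` samples (1-D pixelation)."""
    return sig.reshape(-1, block).mean(axis=1)

secret = 7
observed = pixelate(render(secret))     # all an attacker gets to see

# Brute force the candidate space: pixelate every digit the same way
# and keep the one closest to the observation.
guess = min(range(10),
            key=lambda d: np.abs(pixelate(render(d)) - observed).sum())
print(guess)                            # → 7
```

With a small candidate space (digits, dictionary words, known fonts) the “guessing” collapses to a lookup, which is why pixelated secrets over text are considered broken.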
Anti Commercial-AI license
Is that supposed to mean something to me?
It’s still just guessing.