I agree with most of the previous comments re: opt-in and face blurring, and would like to add one more thing I find concerning about the draft:
Vague and open-ended "balance of many factors" policies invite abuse.
Especially with the danger of doxxing and RL outing involved. It's an area where malicious actors are highly motivated to rules-lawyer, twist ambiguity in their favor, and generally try to get away with it for long enough to do damage. Even when everyone's acting in good faith, would-be uploaders have an incentive to focus on the justifications and downplay the risks when they're making their judgement call.
(Facial recognition, anyone? Legal or social repercussions that wouldn't have been an issue at the time the photo was taken? Targeted harassment campaigns stumbling upon photos five years after they were uploaded? Using group shots to track down someone's friends and acquaintances?)
This is a policy that absolutely needs specific, clear-cut criteria to govern its basic use cases. Those criteria should err on the side of safety and be as future-proofed as possible. Leave "well, here are the factors that go into the decision..." for the genuine head-scratcher exception cases, if at all.