  • I think you are misunderstanding my mention of C2PA, which I only brought up offhand as an example of prior art in digital media provenance that takes AI into account. If C2PA is indeed not about making a go/no-go determination of AI presence, then I don’t think it’s relevant to what OP is asking about, because OP is asking about an “anti-ai proof”, and I don’t think a chain of trust that needs to be evaluated on an individual basis fulfills that role. I also did disclaim my mention of C2PA - that I haven’t read it and don’t know if it overlaps at all with this discussion. So in short, I’m not misunderstanding C2PA because I’m not talking about C2PA; I just mentioned it as an interesting, tangentially related project so that nobody feels the need to reply with “but you forgot about C2PA”.

    I’m more interested in the high-level: “can we solve this by guaranteeing the origin” question, and I think the answer to that is yes

    I think you are glossing over the possibility that someone uses Photoshop to maliciously edit a photo, adding Adobe to the chain of trust. If instead you are suggesting that only individuals sign the chain of trust, then nobody will bother looking up each random person who edited an image (let alone every photographer) to check whether they’re trustworthy. Again, I don’t think that lines up with what OP is asking for. In addition, we already have a way to verify the origin of an image - just check the source. AP posting an image on their site is currently equivalent to them signing it, so the only difference is some provenance metadata, which I don’t think provides any value unless the edit metadata is secured, as I mention below. If you can’t find the source, then it’s the same as an image without a signature chain. This system can’t force unverified images to carry an untrustworthy signature chain, so you will mostly have either images with trustworthy signature chains that also include a credit you can manually check, or images with no source and no signature. The only way it can be useful is if checking the signature chain is easier than checking the website of the credited source, and if it still requires the user to make the same determination, I don’t think it will move the needle beyond making things marginally faster for those who would have checked the source anyway.

    I don’t think we need any sort of controls on defining the types of edits at all.

    I disagree; the entire idea of the signature chain appears to be to identify potentially untrustworthy edits. If you can’t be sure that the claimed edit is accurate, then you are deciding entirely based on the identity of the signatory - in which case storing the edit note is moot, because it can’t be used to narrow down which signature could be responsible for an AI modification.

    If AP said they cropped the image, and if I trust AP, then I trust them as a link in the chain

    The thing about this is that if you trust AP to be honest about their edits, then you likely already trust them to verify the source - that’s something they already do, so the rest of the chain seems moot. To use your own example, I can’t see a world where we regularly need to verify that AP didn’t take an image that Infowars edited and posted on Facebook, crop it, and sign it with AP’s key. That is just about the only situation where I see value in having the whole chain, but that’s not a problem we currently have. If you were worried that a trusted source would take its images from an untrusted source, it wouldn’t be a trusted source. And if a trusted source posts an image that later gets compressed or re-shared, the original will be on their official account or website, which already vouches for it.

    Worrying about MITM attacks is not a reasonable argument against using a technology. By the same token, we shouldn’t use TLS for banking because it can be compromised

    The difference with TLS is that the malicious parties don’t own the endpoints, so it’s not at all comparable. In the case of a malicious photographer, the malicious party owns the hardware being exploited, and when the malicious party has physical access to the hardware, it’s almost always game over.

    Absolutely, but you can prevent someone from taking a picture of an AI image and claiming that someone else took the picture. As with anything else, it comes down to whether I trust the photographer, rather than what they’ve produced.

    Yes, and this is exactly the problem: it comes down to whether you trust the photographer, meaning each user needs to research the source and make up their own mind. The system would change nothing compared to now, because in both cases you need to check the source and decide for yourself. You might argue that at least with a chain of signatures the source is attached to the image, but I don’t think that will change anything in practice, since any fake image will simply lack a signature, just as many fake images today lack a credit. The question OP seems to be asking is about a system that can make that determination for you, because leaving it up to the user to check is exactly the problem we currently have.


  • I think you might be assuming that most of the problems I listed are about handling the trust of the software that made each modification - perhaps because you only read the first part of my comment. And I’m not sure that changing the signature to a chain really addresses any of them, beyond creating a bigger “hit list” of companies to scrutinize.

    For reference, the issues I listed included:

    1. Trusted image editors cannot add or replace a signature securely without a TPM - without one, someone can edit the image buffer in memory without the program knowing and have a “crop” edit signed by Adobe that actually replaces the image with an AI one (see the sketch after this list)
    2. You need a system to grade the “types” of edits in a foolproof way - so that you can’t avoid having the image marked as “user imported an external image” by, for example, using an automated tool to paint the imported image’s pixels over the original
    3. You need to prevent MITM attacks on the camera sensor data, which would otherwise make the entire system moot
    4. You cannot prevent someone from taking a picture of a screen displaying an AI image
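
    To make point 1 concrete, here’s a toy sketch (the names are invented and HMAC stands in for a real signature scheme) of why the signed claim means nothing without a TPM: the signature covers whatever happens to be in the image buffer at signing time, so a process with memory access can swap the buffer between the user’s “crop” and the signing step.

        import hashlib
        import hmac

        EDITOR_KEY = b"editor-signing-key"  # hypothetical stand-in for Adobe's private key

        def sign_edit(claimed_edit: str, image_bytes: bytes) -> bytes:
            # The editor signs a free-text claim plus whatever is in the buffer right now.
            return hmac.new(EDITOR_KEY, claimed_edit.encode() + image_bytes, hashlib.sha256).digest()

        def verify_edit(claimed_edit: str, image_bytes: bytes, sig: bytes) -> bool:
            return hmac.compare_digest(sign_edit(claimed_edit, image_bytes), sig)

        ai_image = b"...AI generated pixels..."

        # An attacker with memory access swaps the buffer after the user hits "crop"
        # but before the editor signs, so the claim "crop" now covers AI pixels.
        buffer = ai_image
        sig = sign_edit("crop", buffer)

        print(verify_edit("crop", ai_image, sig))  # True: valid signature, meaningless claim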

    There are plenty of issues with how even a trusted piece of software lets you edit the picture, since trusted software would need to be able to distinguish between a benign edit and one that adds AI. I don’t think a signature chain changes much, since the chain just increases the number of involved parties that need to be vetted without changing anything about what you are allowed to do.

    I think the main problem with the signature chain is that the chain by itself doesn’t let you attribute any particular part of the image to any party in the chain. You can see all the responsible parties, but you have no way of telling which company in the chain signed off on a malicious modification. If the chain contains Canon, GIMP, and Adobe, there is no way to tell whether the AI content got in because the Canon camera was hacked or because GIMP or Adobe had a workaround that let someone replace the image with an AI one. In the case of a malicious edit, I think it makes little sense for the picture to retain Canon’s signature when the entire image could have been changed by Adobe - that essentially puts Canon’s reputation on the line for something they might not be responsible for.
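
    To illustrate the attribution gap, here’s roughly what I picture such a chain looking like (the structure and names are my own invention, and HMAC stands in for real per-party signatures): every link verifies, yet nothing in the verified chain tells you which signer’s step swapped in the AI pixels.

        import hashlib
        import hmac

        # Hypothetical per-party keys; a real system would use asymmetric signatures.
        KEYS = {"Canon": b"canon-key", "GIMP": b"gimp-key", "Adobe": b"adobe-key"}

        def sign(party: str, payload: bytes) -> bytes:
            return hmac.new(KEYS[party], payload, hashlib.sha256).digest()

        def make_link(party: str, claimed_edit: str, image_bytes: bytes, prev_sig: bytes) -> dict:
            payload = prev_sig + claimed_edit.encode() + hashlib.sha256(image_bytes).digest()
            return {"party": party, "claimed_edit": claimed_edit, "sig": sign(party, payload)}

        def verify_chain(chain: list, versions: list) -> bool:
            prev_sig = b""
            for link, image_bytes in zip(chain, versions):
                payload = prev_sig + link["claimed_edit"].encode() + hashlib.sha256(image_bytes).digest()
                if not hmac.compare_digest(sign(link["party"], payload), link["sig"]):
                    return False
                prev_sig = link["sig"]
            return True

        # The pixels get swapped for AI output somewhere along the way, but each link
        # only attests "these are the bytes after my step", so the chain still verifies
        # and nothing points at the party responsible.
        captured, cropped, swapped = b"raw sensor data", b"cropped data", b"AI pixels"
        chain = [make_link("Canon", "capture", captured, b"")]
        chain.append(make_link("GIMP", "crop", cropped, chain[-1]["sig"]))
        chain.append(make_link("Adobe", "color correct", swapped, chain[-1]["sig"]))

        print(verify_chain(chain, [captured, cropped, swapped]))  # True, with no culprit identified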

    This also brings back a problem similar to the one I mentioned: there would need to be a level of trust for each piece of editing software - and you might end up with a world where GIMP is out because nobody trusts it, so you can say goodbye to using any smaller developer’s image editor if you want your image to stay verified. That could be a nightmare if providers such as Facebook wanted to use the signature chain to block untrusted uploads; it would penalize using anything but Adobe products, for example.

    In short, I don’t think a chain changes much besides increasing the number of parties you have to evaluate, which complicates validation without helping you attribute a malicious edit to any particular party. And now you have a situation where GIMP, for example, might be blamed for being in the chain when the vulnerability was in Adobe’s or Canon’s products. My understanding of the question is that the goal is an automatic, final determination of authenticity, which I think is infeasible. The chain you’ve proposed sounds closer to a “web of trust” style system where every user needs to define their own trust criteria and decide for themselves what to trust, which I think defeats the purpose of preventing gullible people from falling for AI images.


  • I don’t think this is really feasible.

    I’ve heard of efforts (edit: this is the one - https://c2pa.org/ - I haven’t read it, so I don’t know whether it overlaps with my ideas below) to come up with a system that digitally signs images when they are taken, using a tamper-resistant TPM or secure enclave built into cameras, but that doesn’t even begin to address the pile of potential attack vectors and challenges.

    For example, if only cameras can sign images, and the signature is only valid for that exact image, then editing the image in any way makes the signature invalid. So you’d probably need image editors to be able to re-sign an edit, assuming it’s minor (crop, color correction), but you’d need a way to prevent rogue or hacked image editors from re-signing an edit that adds AI elements. Unless you want image editors to require a TPM that can verify the edit is minor and doesn’t add AI, the image editor would be able to forge a signature on an AI edit.
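
    As a rough sketch of that baseline scheme (using Ed25519 from the Python cryptography package; the key handling here is simplified and hypothetical - in the real proposal the private key would live inside the camera’s secure enclave): the camera signs the exact captured bytes, so even a trivial edit fails verification, which is what forces the whole question of who gets to re-sign.

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric import ed25519

        # Hypothetical camera key pair; the private half would never leave the camera's TPM.
        camera_key = ed25519.Ed25519PrivateKey.generate()
        camera_pub = camera_key.public_key()

        original = b"...raw image bytes straight off the sensor..."
        signature = camera_key.sign(original)

        camera_pub.verify(signature, original)  # passes: the untouched image checks out

        cropped = original[10:]  # even a trivial "edit" changes the bytes
        try:
            camera_pub.verify(signature, cropped)
        except InvalidSignature:
            print("edited image no longer matches the camera's signature")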

    Assuming you require every image editor to run on a device with a TPM in order to re-sign edits, there’s also the problem of deciding which edits are okay and which are too much. You probably can’t allow compositing with external images unless they are also signed, because you could just composite an AI image into an originally genuine one. You also probably couldn’t stop someone from using macros to paint every pixel of an AI image on top of a genuine image with the pencil tool at a 1 px brush size, so you would need some kind of heuristic running inside the TPM or TEE that checks how much the image changed. And you’d have to prevent someone from doing this piecewise (overlaying, say, a tenth of the AI image at a time so the heuristic never rejects any single edit), so you might need to keep the full original image embedded in the signed package so the final result can be checked against the original to see whether it was edited too much.
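
    Here’s a toy numpy sketch of why that check has to run against the embedded original rather than just the previous version (the threshold is invented for illustration): each piecewise overlay looks small compared to the last version, but the cumulative diff against the original catches the full replacement.

        import numpy as np

        rng = np.random.default_rng(0)
        original = rng.integers(0, 256, (100, 100), dtype=np.uint8)  # stand-in for the genuine photo
        ai_image = rng.integers(0, 256, (100, 100), dtype=np.uint8)  # stand-in for the AI image

        def changed_fraction(a: np.ndarray, b: np.ndarray) -> float:
            return float(np.mean(a != b))

        PER_EDIT_LIMIT = 0.15  # invented rule: a single edit may change at most 15% of pixels

        # Overlay the AI image one horizontal strip at a time; every individual edit
        # stays under the per-edit limit when compared with the previous version...
        current = original.copy()
        for row in range(0, 100, 10):
            proposed = current.copy()
            proposed[row:row + 10, :] = ai_image[row:row + 10, :]
            assert changed_fraction(current, proposed) <= PER_EDIT_LIMIT
            current = proposed

        # ...but a check against the embedded original sees the whole replacement.
        print(changed_fraction(original, current))  # ~1.0: nearly every pixel now differs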

    You might be able to solve some of the editing vulnerabilities by only allowing a limited set of editing operations (maybe only crop/rotate and curves). If you did that, you wouldn’t need a TPM to edit, as long as the editing software doesn’t create a new signature but just saves the edits as a list of changes alongside the original signed image. Maybe a system like this, where you can only crop/rotate and color correct images, would work for stock photos or news, but it would be super limiting for everyone else, so I can’t see it really taking off.
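
    Roughly what I mean, as a sketch (the package layout and operation names are invented, and Pillow stands in for the viewer’s rendering code): the camera-signed original travels untouched and the edits are stored as data, so the viewer re-applies a whitelisted set of operations itself and no editor ever needs to produce a new signature.

        from PIL import Image, ImageEnhance

        # Invented package layout: the signed original plus a declarative edit list.
        package = {
            "original": Image.new("RGB", (800, 600), "gray"),  # stand-in for the camera-signed image
            "edits": [
                {"op": "crop", "box": (100, 80, 700, 520)},
                {"op": "rotate", "degrees": 2},
                {"op": "brightness", "factor": 1.1},
            ],
        }

        def render(package: dict) -> Image.Image:
            # A real viewer would first verify the camera's signature over the original bytes (omitted here).
            img = package["original"]
            for edit in package["edits"]:
                if edit["op"] == "crop":
                    img = img.crop(edit["box"])
                elif edit["op"] == "rotate":
                    img = img.rotate(edit["degrees"])
                elif edit["op"] == "brightness":
                    img = ImageEnhance.Brightness(img).enhance(edit["factor"])
                else:
                    raise ValueError(f"edit type outside the allowed set: {edit['op']}")
            return img

        print(render(package).size)  # the viewer reconstructs the edited view locally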

    And if that’s not enough, I’m sure that if this system were built, someone would just MITM the camera sensor and inject fake data, so you’d need to parts-pair every camera sensor to the TPM, iPhone home button style (iiuc this exact kind of data-injection attack is the justification for the parts pairing of the iPhone home button fingerprint scanner).

    Oh, and how do you stop someone from using such a camera to take a picture of a screen that has an AI image on it?


  • Yep, I’m pretty sure you can still use spools without tags and manually set the filament settings, but since they control the firmware and can block downgrades, they can at any point require RFID tags in order to print. And since the tags have proven to be mostly cryptographically secure, that leaves open an avenue for them to lock out third-party filament. It looks like you can currently clone the tags, but in theory they could treat them like printer cartridges: the printer could recognize when you’ve printed a full spool’s length against any specific RFID tag ID and then block further printing with that tag ID. That would make cloning the tags useless and force you to buy only Bambu filament, just like HP and the other printer companies do with ink.
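
    A hedged sketch of the cartridge-style lockout I’m describing (this is entirely hypothetical firmware logic, not something Bambu has shipped): the printer keeps a running total of filament extruded against each tag’s unique ID and refuses the tag once a full spool’s worth has been used, which makes a cloned tag worthless after one spool.

        SPOOL_LENGTH_M = 330.0  # roughly the length of a 1 kg spool of 1.75 mm filament
        usage_by_tag: dict[str, float] = {}  # would be persisted in printer firmware, keyed by tag UID

        def start_print(tag_id: str, estimated_filament_m: float) -> bool:
            used = usage_by_tag.get(tag_id, 0.0)
            if used + estimated_filament_m > SPOOL_LENGTH_M:
                # This tag has already "spent" a full spool, so a cloned tag is refused.
                return False
            usage_by_tag[tag_id] = used + estimated_filament_m
            return True

        print(start_print("TAG-001", 200.0))  # True
        print(start_print("TAG-001", 200.0))  # False: 400 m exceeds one spool's worth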