I've noticed a bit of panic around here lately, and as I've had to continuously fight pedos for the past year, I have developed tools to help me detect and prevent this content.

As luck would have it, we recently published one of our anti-CSAM checker tools as a Python library that anyone can use. So I thought I'd use it to help Lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete all CSAM, or it can run continuously and scan and delete all new images as well. The suggested approach is to run it once with --all, then leave it running as a daemon.

A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we're not quite there yet.

Let me know if you have any issues or improvements.

EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!

    • db0OP

      If it’s using object storage, yes

  • @BeAware@lemm.ee

    Now if you can make this work with Mastodon, I'd be eternally grateful.😁

    • db0OP

      It’s software agnostic. As long as you’re storing your images in object storage, it should work

  • @Decronym@lemmy.decronym.xyzB

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    CF             CloudFlare
    CSAM           Child Sexual Abuse Material
    DNS            Domain Name Service/System
    HTTP           Hypertext Transfer Protocol, the Web
    nginx          Popular HTTP server

    4 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.

    [Thread #88 for this sub, first seen 28th Aug 2023, 22:25] [FAQ] [Full list] [Contact] [Source code]

  • Rentlar

    Hey db0 thanks for putting in extra effort to help the community (as you have multiple times) when big issues like this crop up on Lemmy.

    Despite being a pressing issue, this is one that people are also a little reluctant to help solve for fear of getting in trouble themselves. (How can a server admin develop a method to detect and remove/prevent CSAM distribution without accessing known examples, which is extremely illegal?)

    Another time being the botspam wave where you developed Overseer in response very quickly. I’m hoping here too devs will join you to work out how to best implement the changes into Lemmy to combat this problem.

  • @bdonvr@thelemmy.club

    Worth noting you seem to be missing dependencies in requirements.txt, notably unidecode and strenum.

    Also, this only works with GPU acceleration on NVIDIA (maybe; I messed around with trying to get it to work with AMD ROCm instead of CUDA but didn’t get it running)

    • @Rescuer6394@feddit.nl

      to run on ROCm, you need a specific version of pytorch.

      but it is still in beta, i would not expect it to run well

      • @bdonvr@thelemmy.club

        I know, I tried and ran into some problems there. I just pulled out my NVidia laptop and got it to go (slowly)

    • db0OP

      Ah thanks. I’ll add them

    • @Rescuer6394@feddit.nl

      the model under the hood is CLIP Interrogator, and it looks like it is just the torch model.

      it will run on CPU, but we can do better: an ONNX version of the model would run a lot better on CPU.

      • db0OP

        sure, or a .cpp. But it will still not be anywhere near as good as a GPU. However, it might be sufficient for something just checking new images

        • @relic_@lemm.ee

          I’m not really convinced that a GPU backend is needed. Was there ever a comparison of the different CLIP model variants? Or a graph optimized / quantized ONNX version?

          I think the proposed solution makes a lot of sense for the task at hand if it were integrated on the pic-rs end, but it would be worth investigating further improvements if it were on the lemmy server end.

          • db0OP

            For scanning all existing images, trust me, a good GPU is necessary. I’m scanning my whole backend on a 4090 with 400 threads and I’m still only halfway through after 4 hours.

            For scanning newly uploaded images, a CPU might be sufficient but the users might get annoyed at the wait times.

    • db0OP

      It will be atrocious. You can run it, but you’ll likely be waiting for weeks if not months.

  • @chrisbit@leminal.space

    Thanks for releasing this. After doing a --dry_run, can the flagged files then be removed without re-analysing all images?

    • db0OP

      Not currently supported. It’s on my to-do

  • 𝕯𝖎𝖕𝖘𝖍𝖎𝖙

    Just going to argue on behalf of the other users who know apparently way more than you and I do about this stuff:

    WhY nOt juSt UsE thE FBi daTaBaSe of CSam?!

    (because one doesn’t exist)

    (because if one existed it would either be hosting CSAM itself or showing just the hashes of files - hashes which won’t match if even one bit is changed due to transmission data loss / corruption, automated resizing from image hosting sites, etc)

    (because this shit is hard to detect)
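
    To make the hash-fragility point concrete: a cryptographic hash shares nothing with the original after even a single flipped bit, which is why exact-hash databases miss resized or recompressed copies. A quick stdlib-only sketch (illustrative, not part of the tool):

```python
import hashlib

# Two byte strings standing in for "the same image", differing by one bit,
# e.g. after lossy recompression or transmission corruption.
original = b"\xff\xd8\xff\xe0" + b"image data" * 100
altered = bytearray(original)
altered[-1] ^= 0x01  # flip a single bit

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(altered)).hexdigest()
assert h1 != h2  # completely different digests: exact matching fails
```

    Perceptual hashes tolerate such changes, which is part of why that class of tooling is tightly gated rather than public.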

    Some sites have tried automated detection of CSAM images. YouTube, in an effort to protect children, continues to falsely flag 30-year-old women as children.

    OP, I’m not saying you should give up, and maybe what you’re working on could be the beginning of something that truly helps in the field of CSAM detection. I’ve got only one question for you (which hopefully won’t be discouraging to you or others): what’s your false-positive (or false-negative) detection rate? Or, maybe a question you may not want to answer: how are you training this?

    • db0OP

      I’m not training it. I’m using publicly available CLIP models.

      The false positive rate is acceptable. But my method is open source, so feel free to validate on your end

      • 𝕯𝖎𝖕𝖘𝖍𝖎𝖙

        Acceptable isn’t a percentage, but I see that in your opinion it’s acceptable. Thanks for making your content open source. I do wish your project the best of luck. I don’t think I have what it takes to validate this myself, but if I end up hosting an instance I’ll probably start using this tool myself. It’s better than nothing, as at present I have zero instances but also zero mods lined up.

    • db0OP

      This shouldn’t run on your lemmy server (unless your lemmy server has a gpu)

        • db0OP

          I don’t know your setup, but unless it’s a very cheap GPU, it would be a bit of a waste to use it only for this purpose. But up to you

  • Dandroid

    Thank you for this! Awesome work!

    By the way, this looks easy to put in a container. Have you considered doing that?

    • db0OP

      I don’t speak docker, but anyone can send a PR

      • Dandroid

        I’ll try it out today. I’m about to start my workday, so it will have to be in a few hours. Fingers crossed I can have a PR in about 16 hours from now.

  • @sunaurus@lemm.ee

    Any thoughts about using this as a middleware between nginx and Lemmy for all image uploads?

    Edit: I guess that wouldn’t work for external images - unless it also ran for all outgoing requests from pict-rs… I think the easiest way to integrate this with pict-rs would be through some upstream changes that would allow pict-rs itself to call this code on every image.

    • db0OP

      You might, however, be able to integrate with my AI Horde endpoint for NSFW checking between nginx and Lemmy.

      https://aihorde.net/api/v2/interrogate/async

      This might allow you to detect NSFW images before they are hosted

      Just send a payload like this

      curl -X 'POST' \
        'https://aihorde.net/api/v2/interrogate/async' \
        -H 'accept: application/json' \
        -H 'apikey: 0000000000' \
        -H 'Client-Agent: unknown:0:unknown' \
        -H 'Content-Type: application/json' \
        -d '{
        "forms": [
          {
            "name": "nsfw"
        }
        ],
        "source_image": "https://lemmy.dbzer0.com/pictrs/image/46c177f0-a7f8-43a3-a67b-7d2e4d696ced.jpeg?format=webp&thumbnail=256"
      }'
      

      Then retrieve the results asynchronously like this

      {
        "state": "done",
        "forms": [
          {
            "form": "nsfw",
            "state": "done",
            "result": {
              "nsfw": false
            }
          }
        ]
      }
      

      Or you could just run the NSFW model locally if you don’t have that many uploads.

      If you know a way to pre-process uploads before nginx sends them to Lemmy, it might be useful.
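
      For anyone wiring this into middleware, the curl payload above can be built in Python like so (the helper names are hypothetical, and the async polling step is only sketched in comments rather than implemented):

```python
AIHORDE_URL = "https://aihorde.net/api/v2/interrogate/async"

def build_headers(apikey: str = "0000000000") -> dict:
    # "0000000000" is the anonymous AI Horde key, as in the curl example above
    return {
        "accept": "application/json",
        "apikey": apikey,
        "Client-Agent": "unknown:0:unknown",
        "Content-Type": "application/json",
    }

def build_nsfw_payload(source_image: str) -> dict:
    # Mirrors the -d body of the curl example: one "nsfw" interrogation form
    return {
        "forms": [{"name": "nsfw"}],
        "source_image": source_image,
    }

# To submit: POST build_nsfw_payload(url) as JSON with build_headers()
# (e.g. via the requests library), then poll the check endpoint until
# state == "done" and read forms[0]["result"]["nsfw"].
```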

    • db0OP

      Exactly. If the pict-rs dev allowed us to run an executable on each image before accepting it, it would make things much easier

    • db0OP

      Currently I delete on PIL exceptions. I assume if someone uploaded a .zip to your image storage, you’d want it deleted
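
      As a rough illustration of that filtering idea (a hypothetical stdlib-only sketch, not the tool's actual PIL-based check): anything whose leading bytes don't match a known image signature, such as a .zip, can be rejected outright.

```python
# Magic-byte signatures for common image formats. This mirrors the idea of
# "delete on PIL exceptions": anything that doesn't even look like an image
# gets rejected.
IMAGE_SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
    b"RIFF": "webp",  # RIFF container; a real WebP also has "WEBP" at offset 8
}

def looks_like_image(header: bytes) -> bool:
    """Check whether the first bytes of a file match any known image format."""
    return any(header.startswith(sig) for sig in IMAGE_SIGNATURES)

# A .zip upload starts with b"PK\x03\x04" and matches nothing above.
```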

      • @Starbuck@lemmy.world

        The fun part is that it’s still a valid JPEG file if you put more data in it. The file should be fully re-encoded to be sure.
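
        To illustrate the smuggling trick: JPEG readers stop at the end-of-image marker (FF D9), so arbitrary bytes appended after it ride along invisibly. A simplified sketch (real JPEGs can also contain FF D9 inside entropy-coded data or embedded thumbnails, so this is illustrative only; full re-encoding is the robust defense):

```python
JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker

def bytes_after_eoi(data: bytes) -> bytes:
    """Return whatever trails the first end-of-image marker.

    A naive scanner that only looks at the decodable image never sees
    this tail, which is why scanning alone isn't enough.
    """
    idx = data.find(JPEG_EOI)
    return data[idx + len(JPEG_EOI):] if idx != -1 else b""

# A minimal stand-in for "flower.jpg" with a second payload appended:
fake_jpeg = b"\xff\xd8" + b"pixels" + JPEG_EOI
smuggled = fake_jpeg + b"hidden second file"
assert bytes_after_eoi(smuggled) == b"hidden second file"
```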

        • db0OP

          In that case, PIL should be able to read it, so no worries

          • @Starbuck@lemmy.world

            But I could take ‘flower.jpg’, which is an actual flower, and embed a second image, ‘csam.png’ inside it. Your scanner would scan ‘flower.jpg’, find it to be acceptable, then in turn register ‘csam.png’. Not saying that this isn’t a great start, but this is the reason that a lot of websites that allow uploads re-encode images.

            • db0OP

              My pict-rs already re-encodes everything, so this is already a possibility for lemmy admins