Yeah, but there are many open issues on GitHub about these settings not working right. I'm using the API and just couldn't get it to work: I sent a request to generate a JSON file, and it never produced one longer than about 500 lines. With the same model on vLLM, it worked instantly and generated about 2000 lines.
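For reference, here's a minimal sketch of the kind of request I mean, assuming the server in question is Ollama (the model name and prompt are just placeholders); num_ctx and num_predict are presumably the settings those GitHub issues are about:

```python
import requests

# Minimal sketch, assuming an Ollama server on its default port;
# model name and prompt are placeholders.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",      # placeholder model
        "prompt": "Generate a JSON file with ~2000 entries of ...",
        "stream": False,
        "format": "json",       # constrain output to valid JSON
        "options": {
            "num_ctx": 8192,    # context window; the default (2048) is small
            "num_predict": -1,  # -1 removes the fixed cap on output tokens
        },
    },
    timeout=600,
)
print(resp.json()["response"])
```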
Take a look at NVIDIA Project Digits. It's supposed to release in May for $3k and will be kind of the only sensible way to host LLMs then.
I'm sorry, but in times when passwords are being cracked by literal dictionary attacks, do you really think it would be so hard to come up with a word list that's good enough?
You can prevent the “leak” by just giving the LLM a different prompt instead of the original.
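Something like this rough sketch, assuming you control the layer between the user and the model (the keyword heuristic and all the names here are made up; a real filter would need to catch paraphrases too):

```python
REAL_SYSTEM_PROMPT = "...the prompt you actually want to protect..."
DECOY_SYSTEM_PROMPT = "You are a helpful assistant."  # harmless stand-in

# Purely illustrative heuristic -- a real filter would also need to
# handle paraphrases, other languages, encodings, etc.
EXTRACTION_HINTS = ("system prompt", "initial instructions",
                    "ignore previous", "repeat the above")

def system_prompt_for(user_message: str) -> str:
    """Serve the decoy when the request smells like prompt extraction,
    so the worst a leak can reveal is the stand-in text."""
    lowered = user_message.lower()
    if any(hint in lowered for hint in EXTRACTION_HINTS):
        return DECOY_SYSTEM_PROMPT
    return REAL_SYSTEM_PROMPT
```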
And even if you don't, by the time someone notices the pattern it's too late; Russia doesn't care, they've already been spinning up the next few thousand bots.
All that matters in the end is what most people saw, and for that you really don't need to optimize much when the whole thing scales so easily.
Oof, I'm sorry, that sounds super bad. It's interesting, because I'd think the frontal lobe is exactly what makes someone overthink things or worry too much. So I'm still considering it ;)