> Does anyone have a guide?
I'll have to write a new one. The old guide is a bit outdated, and the tool still isn't in quite a stable enough place. I make a lot of changes, so it's hard to keep a guide that works with the most recent version. The tool is in a usable state, but you need some programming experience to get everything set up correctly.
The guide on the page has the wrong photos and I can't see how the process goes :c
Interesting GUI you have there.
> Interesting GUI you have there.
This is currently a collection of broken junk. When I have something functional, I'll post the source code. I'm also hoping to integrate Gemini into it.
> This is currently a collection of broken junk. When I have something functional, I'll post the source code. I'm also hoping to integrate Gemini into it.
NICE!
> NICE!
For Gemini, specifically gemini-1.5-pro-exp-0801, I've developed my own prompt and method for bypassing the "Other" safety protocols. I'm curious to learn about yours. Would you be interested in exchanging information via private message?
Sadly, it's still filtered, but I found a 'nice' workaround that seems to work decently.
> For Gemini, specifically gemini-1.5-pro-exp-0801, I've developed my own prompt and method for bypassing the "Other" safety protocols. I'm curious to learn about yours. Would you be interested in exchanging information via private message?
I don't mind.
> Therefore, if you like my MTLs and want to see me switch to GPT4, consider supporting me on Patreon. My first goal is $200 per translation, which would cover the cost for smaller games.
Hi, quick question: have you thought about adapting the tool so that local models can be used? There are a lot of strong models out now, as good as GPT4, and many of them can be run locally, even with 8GB of VRAM, let alone 24GB.
> Hi, quick question: have you thought about adapting the tool so that local models can be used? There are a lot of strong models out now, as good as GPT4, and many of them can be run locally, even with 8GB of VRAM, let alone 24GB.
It might happen one day, but for now I'm mostly focused on translation. Currently, as long as the local model's API supports the OpenAI format, it should work fine with a little bit of tinkering with the modules.
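To illustrate what "supports the OpenAI format" means in practice, here is a minimal sketch of the request shape such a backend has to accept. The endpoint URL, model name, and system prompt below are all assumptions for illustration (local servers like llama.cpp's server or Ollama expose a route of this general form, but the exact URL depends on your setup), not the tool's actual code.

```python
import json

# Hypothetical local endpoint -- the exact URL depends on which local
# server you run; this is only an illustrative placeholder.
LOCAL_API_URL = "http://localhost:8080/v1/chat/completions"

def build_translation_request(text: str, model: str = "local-model") -> dict:
    """Build an OpenAI-format chat completion payload.

    Any backend that accepts this JSON shape should be usable by an
    OpenAI-compatible client with only the base URL (and perhaps the
    model name) swapped out.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Translate the following text to English."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.3,
    }

payload = build_translation_request("こんにちは")
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Sending this payload as the POST body to the local server's chat-completions route is the "little bit of tinkering" mentioned above: the module only needs to point at a different base URL.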
> For Gemini, specifically gemini-1.5-pro-exp-0801, I've developed my own prompt and method for bypassing the "Other" safety protocols. I'm curious to learn about yours. Would you be interested in exchanging information via private message?
I'd really like to know this too. I also want to post it in a thread in the "tips, tutorials, and tools" section.
> Hi, is it possible to create the same tool, or add a feature, using a Groq API key and Google Gemini Flash? Both of them offer a free API at the moment.
They have rate limits; I don't think you can use them like this.
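The usual way to cope with rate-limited free tiers is client-side retry with exponential backoff. This is a generic sketch, not anything from the tool itself: `request_fn` is a placeholder for whatever API call you make, and a raised `RuntimeError` stands in for an HTTP 429 response.

```python
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry request_fn with exponential backoff on rate-limit errors.

    request_fn is any zero-argument callable; a RuntimeError here is a
    stand-in for whatever rate-limit signal the real API raises.
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Demo with a fake endpoint that "rate limits" the first two calls.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "translated text"

result = call_with_backoff(fake_request, base_delay=0.01)
print(result)  # "translated text" after two retries
```

Even with backoff, a free-tier quota caps overall throughput, which is why batch-translating a whole game through a free API is impractical.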
> Hi, quick question: have you thought about adapting the tool so that local models can be used? There are a lot of strong models out now, as good as GPT4, and many of them can be run locally, even with 8GB of VRAM, let alone 24GB.
What are the names of those models? I wouldn't mind trying to get them working with this tool (if they really are as good as GPT4).