Update 2.5: Fixing GPU support broke CPU support. The client now tests for CUDA before creating a pipeline.

Fixed a bug that was causing GPT-Neo models to not utilize the GPU when CUDA is available. If you grabbed the release version and tried to run one of the GPT-Neo models, transformers would not download it due to having a PyTorch requirement. PyTorch has been added to requirements.txt on Git, or you can install it from the command line.

Copied from original post over at AIDungeon_2:

KoboldAI currently provides a web interface for basic AID-like functions:

- Local Save & Load
- Modify generator parameters (temperature, top_p, etc)
- Author's Note

It currently supports local AI models via Transformers/TensorFlow, including custom GPT-Neo/GPT-2 models such as Neo-horni or CloverEdition. I've also put in support for InferKit, so you can offload the text generation if you don't have a beefy GPU. API requests are sent via HTTPS/SSL, and stories are only ever stored locally. You can also now host a GPT-Neo-2.7B model remotely on Google Colab and connect to it with KoboldAI. Models can be run on CPU, or on GPU if you have CUDA set up on your system; instructions for this are included in the readme. I have currently only tested on Windows with Firefox and Chrome.
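The CUDA check described above can be sketched roughly as follows. This is a minimal sketch, not KoboldAI's actual code: the helper name `pick_device` is an assumption, and the guarded import is there so the check degrades gracefully on CPU-only installs.

```python
def pick_device():
    """Return a device index for a transformers pipeline.

    0 selects the first GPU when CUDA is usable; -1 tells
    transformers to run the pipeline on the CPU. (Hypothetical
    helper, illustrating the "test for CUDA before creating a
    pipeline" fix described in the post.)
    """
    try:
        import torch
        return 0 if torch.cuda.is_available() else -1
    except ImportError:
        # PyTorch not installed at all: fall back to CPU.
        return -1


# Usage (commented out so this sketch stays self-contained;
# creating the pipeline downloads the model):
# from transformers import pipeline
# generator = pipeline("text-generation",
#                      model="EleutherAI/gpt-neo-1.3B",
#                      device=pick_device())
```

Selecting the device once, before the pipeline is built, is what keeps a CPU-only install working: the pipeline is simply constructed with `device=-1` instead of failing when no GPU is present.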