Showing posts from April, 2023

FreedomGPT Install

  This was the easiest install so far. There was a simple sign-up. We went to the FreedomGPT site, downloaded the Windows installer, and ran it; it downloaded about 4 GB and installed everything. The special selling point of this chatbot is that it has minimal filtering, and our testing confirms this. It is reasonably fast on our machine.

ChatGLM6b Install

  The ChatGLM-6B model has impressive results and can run on our current 12 GB GPU, so we decided to download and install it while waiting for our new 48 GB GPU to come on the boat to Anguilla. Note it was trained on both Chinese and English and is still impressive in English. To download the model we made an insecure script that you can download and then use. It is insecure because it will download and run code, so don't use it on a computer with any secrets or important stuff. To use our script to download maybe 12 GB with all the weights etc., do:

python3 THUDM/chatglm-6b

Then we used the English README file from GitHub as instructions, and from the right directory (to line up with the Hugging Face weights) did:

git clone

Our GPU can only handle the INT8 version of this model, so we had to change one line from

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trus…
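
A rough sketch of why INT8 was needed on our 12 GB card. The ~6.2 billion parameter count comes from the model's name and card, and the bytes-per-parameter figures are the standard ones for each precision; this is back-of-the-envelope math for the weights alone, not a measurement:

```python
# Rough VRAM estimate for the ChatGLM-6B weights alone.  Activations and
# the KV cache need extra headroom on top of these numbers.
PARAMS = 6.2e9  # ChatGLM-6B has roughly 6.2 billion parameters

def weight_gb(params, bytes_per_param):
    """Size of the weights in gigabytes at a given precision."""
    return params * bytes_per_param / 1e9

fp16 = weight_gb(PARAMS, 2)  # FP16: 2 bytes per parameter
int8 = weight_gb(PARAMS, 1)  # INT8: 1 byte per parameter

print(f"FP16 weights: {fp16:.1f} GB")  # ~12.4 GB: no headroom on a 12 GB GPU
print(f"INT8 weights: {int8:.1f} GB")  # ~6.2 GB: fits comfortably
```

So at FP16 the weights by themselves already fill the card, which is why the one-line change to the INT8 version was necessary.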

AI Accelerators and GPUs

  We have Vicuna running on our CPU but it is slow and the answers seem of lower quality. Our current GPU does not have enough memory to run Vicuna, so we have a sudden interest in hardware to run LLMs. I have ordered an NVIDIA RTX A6000 GPU, which has 48 GB. One disadvantage of living on a tropical island is that it takes a while for boats to bring stuff from Amazon. This is a fair amount of money, and it seems that some day there will be AI accelerators just for running models that will be lower cost. It is not clear they are a good answer today. If anyone knows of an AI accelerator that would work for Vicuna, please let me know; I would like to buy one and try it out. AI accelerators can do far more calculation with less hardware and power than GPUs. For example, the Hailo-8™ M.2 AI Acceleration Module sounds amazing, and the Falcon-H8 uses several of these. But it is not clear if getting an LLM like Vicuna to run on them is possible. The GroqCard™ Accelerator al…
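
The "not enough memory" problem above can be sketched with a small helper. The parameter counts come from the model names; the 2 bytes per parameter is the usual FP16 figure, and the 90% headroom factor is our own assumption, not something from any vendor:

```python
# Quick weights-only check of which models fit on which cards at FP16
# (2 bytes per parameter).  Real inference needs room for activations
# too, hence the headroom factor.

def fits(params_billions, vram_gb, bytes_per_param=2, headroom=0.9):
    """True if the weights fit within `headroom` of the card's VRAM."""
    need_gb = params_billions * bytes_per_param
    return need_gb <= vram_gb * headroom

# Our current 12 GB GPU vs the 48 GB RTX A6000 on the boat:
print(fits(13, 12))  # Vicuna-13B on 12 GB -> False (needs ~26 GB)
print(fits(13, 48))  # Vicuna-13B on 48 GB -> True
```

By the same arithmetic, a 6B model only squeezes onto the 12 GB card at 1 byte per parameter, which matches our ChatGLM INT8 experience.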

Vicuna Install

  Get Linux on Windows

The official instructions for vicuna are here. We had some trouble on Windows, so the first thing we did was get Linux onto our Windows machine. In a PowerShell window type:

wsl --install

After this we had to reboot the computer, and then it asked for a Linux user name and password. After this the terminal icon has an Ubuntu option for starting terminals. Start a Linux terminal and continue with everything below in Linux.

Get Vicuna

git clone FastChat.git
cd FastChat
pip3 install --upgrade pip
pip3 install -e .

Get Weights

cd
mkdir vicuna.weights

Use a browser to go to eachadea/vicuna-13b/resolve/main/ and click download next to each file. Put these in the directory vicuna.weights (note: really saved to a Windows directory and then moved from /mnt/c/…).

Run Vicuna

cd
cd FastChat
python3 -m fastchat.serve.cli --model-name ~/vicuna.weights --device cpu

Results

This got it running on our CPU. We would…
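
Since the weights are downloaded file by file through a browser and then moved out of /mnt/c, it is easy to end up pointing --model-name at an empty or half-filled directory. A minimal sanity check before launching, assuming the usual Hugging Face layout of a config.json plus .bin weight shards (those file names are our assumption, not something the FastChat instructions specify):

```python
# Sanity-check the weights directory before launching FastChat, so a
# typo'd or half-copied path fails fast instead of deep inside the CLI.
from pathlib import Path

def looks_like_weights(dir_path):
    """Rough check: the directory exists and holds a config file plus
    at least one .bin weight shard (typical Hugging Face layout)."""
    p = Path(dir_path).expanduser()
    if not p.is_dir():
        return False
    has_config = (p / "config.json").is_file()
    has_shards = any(p.glob("*.bin"))
    return has_config and has_shards

# Example: check before running the serve command
# looks_like_weights("~/vicuna.weights")
```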

Vince and ChatGPT4 discuss interstellar asteroid mission

Model: GPT-4 - Apr 4, 2023

Vince: As far as we currently know, the only life or intelligence in the universe is located on Earth. It would be nice to see some headed far away; however, we don't really have the tech to do that now. I am exploring the idea of sending some machines with AI (no biological life) on a mission to land on an interstellar asteroid passing through our solar system. There have been 2 interstellar asteroids that we know of that passed through our solar system in the last decade, and as we get better telescopes I expect we will spot interstellar asteroids even more often. The idea is that the mission could start out with technology not much more advanced than current levels, so we could launch reasonably soon and gain more advanced tech over time via communications from Earth. Initially power could come from an RTG. The machines could start out with some ability to tunnel into the asteroid and refine metals. They might have 3D printers to make some parts. They could set up…
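
The conversation only says power could come from an RTG, but it is worth seeing how that budget decays over an interstellar timescale. The 87.7-year half-life of Pu-238 (the usual RTG fuel) is standard physics; the 300 W starting figure is purely illustrative and not from the chat:

```python
# Thermal decay of an RTG power source over a long mission.  Pu-238 has
# a half-life of about 87.7 years.  The 300 W starting output is a
# made-up illustration; real thermocouples also degrade, which this
# simple exponential ignores.
HALF_LIFE_YEARS = 87.7

def rtg_power(p0_watts, years):
    """Power remaining after `years` of radioactive decay."""
    return p0_watts * 0.5 ** (years / HALF_LIFE_YEARS)

for t in (0, 50, 100, 200):
    print(f"year {t:3d}: {rtg_power(300, t):.0f} W")
```

The point for the mission concept: after two centuries roughly a fifth of the initial power remains, so early-mission work like tunneling would need to be front-loaded or supplemented by power systems the machines build in place.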

Open Source ChatBots

  ChatGPT 2022/11/30. This was not open source but a challenge to the open source community.
LLaMA 2023/02/23 Meta/Facebook. You need to fill out some paperwork and get approved to download this code. If you use a torrent and only want 13B, be sure not to download the others, as it gets huge.
Alpaca 2023/03/13 Stanford. Created from LLaMA by training against ChatGPT output. The training cost was about $600.
GPT4ALL 2023/03. This has all the files needed to run and is open source. A bit over 4 GB for the download.
OpenChatKit 2023/03
ColossalAI 2023/03 is another open source chatbot recently released.
Dolly 2023/03/24, trained on one machine in 30 minutes. Dolly was the name of a cloned sheep.
Guanaco 2023/03
Raven-RWKV-7B. This uses an RNN (Recurrent Neural Network), not transformers like most LLMs.
OpenAssistant. To release Apr 15, 2023. Demo
Vicuna 2023/03. You can try Vicuna here. The paper on Vicuna explains that they trained on sample chat c…