Chatbot Arena and Leaderboard

  Chatbot Arena lets you ask a question, see answers from two different bots, and rate them.  Only after rating are you told which bots they were.  Using these human-rated matchups between randomly paired bots, they can assign ratings to the different bots.  As of May 3rd, 2023, this is what their Leaderboard looked like: I do think that Vicuna is the best of these, so I think the results are right.  I like the method and hope they keep doing this.  It will make it quick to tell which is the best open source bot.
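To see how pairwise human votes can turn into ratings, here is a minimal sketch assuming an Elo-style update (the common chess constants, not necessarily Arena's actual parameters):

```python
# Elo-style rating update from one head-to-head comparison.
# Starting rating 1000 and K=32 are assumptions, not Arena's settings.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that bot A beats bot B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32):
    """Return (new_r_a, new_r_b) after one rated matchup."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    delta = k * (s_a - e_a)
    return r_a + delta, r_b - delta

# Two bots start even; one win moves the winner up 16 points.
print(elo_update(1000, 1000, a_won=True))  # (1016.0, 984.0)
```

After many such votes the ratings settle so that a 400-point gap means roughly a 10-to-1 expected win rate.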

FreedomGPT Install

  This was the easiest install so far.  There was a simple sign-up, then we downloaded the Windows installer.  We ran that and it downloaded about 4 GB and installed everything.  The special selling point of this chatbot is that it has minimal filtering, and our testing confirms this.  It is reasonably fast on our machine.

ChatGLM6b Install

  The ChatGLM-6B model has impressive results and can run on our current 12 GB GPU, so we decided to download and install it while waiting for our new 48 GB GPU to come on the boat to Anguilla.  Note that it was trained on both Chinese and English and is still impressive in English.

  To download the model we made an insecure script that you can download and then use.  It is insecure because it will download and run code, so don't use it on a computer with any secrets or anything important.  To use our script to download maybe 12 GB with all the weights etc. do:

  python3 THUDM/chatglm-6b

  Then we used the English README file from GitHub as instructions and, from the right directory to line up with the Hugging Face weights, did:

  git clone

  Our GPU can only handle the INT8 version of this model, so we had to change one line from

  model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()

  to load the INT8 quantized version instead.
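A quick back-of-the-envelope check of why only INT8 fits on our 12 GB GPU, assuming the commonly quoted figure of about 6.2 billion parameters for ChatGLM-6B:

```python
# Rough model-weight memory for ChatGLM-6B at different precisions.
# 6.2e9 parameters is an approximation; activations and KV cache
# need additional memory on top of these numbers.

PARAMS = 6.2e9  # approximate parameter count of ChatGLM-6B

def model_gb(bytes_per_param: float) -> float:
    """Gigabytes needed just for the weights."""
    return PARAMS * bytes_per_param / 1e9

print(f"FP16: {model_gb(2):.1f} GB")  # 2 bytes per weight
print(f"INT8: {model_gb(1):.1f} GB")  # 1 byte per weight
```

At FP16 the weights alone are about 12.4 GB, already over our card's 12 GB, while INT8 halves that to roughly 6.2 GB and leaves room for activations.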

AI Accelerators and GPUs

  We have Vicuna running on our CPU, but it is slow and the answers seem lower quality.  Our current GPU does not have enough memory to run Vicuna, so we have a sudden interest in hardware for running LLMs.  I have ordered an NVIDIA RTX A6000 GPU, which has 48 GB.  One disadvantage of living on a tropical island is that it takes a while for boats to bring stuff from Amazon.  This is a fair amount of money, and it seems that at least some day there will be AI accelerators just for running models that will be lower cost.  It is not clear they are a good answer today.  If anyone knows of an AI accelerator that would work for Vicuna, please let me know; I would like to buy one and try it out.  AI accelerators can do far more computation with less hardware and power than GPUs.  For example, the Hailo-8™ M.2 AI Acceleration Module sounds amazing, and the Falcon-H8 uses several of these.  But it is not clear whether getting an LLM like Vicuna to run on these is possible.  The GroqCard™ Accelerator al

Vicuna Install

  Get Linux on Windows

  The official instructions for Vicuna are here.  We had some trouble on Windows, so the first thing we did was get Linux onto our Windows machine.  In a PowerShell window type:

  wsl --install

  After this we had to reboot the computer, and then it asks for a Linux user name and password.  After this the terminal icon will have an Ubuntu option for starting terminals.  Start a Linux terminal and continue with everything below in Linux.

  Get Vicuna

  git clone FastChat.git
  cd FastChat
  pip3 install --upgrade pip
  pip3 install -e .

  Get Weights

  cd
  mkdir vicuna.weights

  Use a browser to go to eachadea/vicuna-13b/resolve/main/ and click download next to each file.  Put these in the directory vicuna.weights (note: really saved to a Windows directory and then moved from /mnt/c/..).

  Run Vicuna

  cd
  cd FastChat
  python3 -m fastchat.serve.cli --model-name ~/vicuna.weights --device cpu

  Results

  This got it running on our CPU.  We would
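Instead of clicking download next to each weight file in the browser, the whole repo could probably be fetched in one go with Hugging Face's huggingface_hub library; this is a sketch of that alternative (we used the browser, so treat this as an untested assumption):

```python
# Sketch: fetch all files from the eachadea/vicuna-13b repo into
# ~/vicuna.weights, the directory the fastchat command expects.
from pathlib import Path

def weights_dir(home: str = "~") -> Path:
    """Resolve the ~/vicuna.weights directory used by the CLI command."""
    return Path(home).expanduser() / "vicuna.weights"

if __name__ == "__main__":
    # pip3 install huggingface_hub
    from huggingface_hub import snapshot_download
    snapshot_download(repo_id="eachadea/vicuna-13b",
                      local_dir=weights_dir())
```

This also avoids the detour through /mnt/c, since the files land directly in the Linux home directory.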

Vince and ChatGPT4 discuss interstellar asteroid mission

Model: GPT-4  -  Apr 4, 2023

Vince:  As far as we currently know, the only life or intelligence in the universe is located on Earth.  It would be nice to see some headed far away.  However, we don't really have the tech to do that now.  I am exploring the idea of sending some machines with AI (no biological life) on a mission to land on an interstellar asteroid passing through our solar system.  There have been 2 interstellar asteroids that we know of that passed through our solar system in the last decade.  As we get better telescopes I expect we will spot interstellar asteroids even more often.  The idea is that it could start out with technology not much more advanced than current levels, so we could launch reasonably soon, and gain more advanced tech over time through communications from Earth.  Initially power could come from an RTG.  They could start out with some ability to tunnel into the asteroid and refine metals.  They might have 3D printers to make some parts.  They could set up