#7 ChatGPT Plugin Development

2023-05-05

Trying it out

After getting access to GPT-4 (by subscribing to ChatGPT Plus), you'll see a new Model option on the page; select GPT-4.
If you pick the Plugins item under Model, an additional Plugins dropdown appears.


Scroll the Plugins dropdown all the way down; the last entry is the Plugin store. Click it to open the store.

You can select any of the plugins there and try them out.

Installation

At the bottom you can see two options: Install an unverified plugin and Develop your own plugin.

A plugin we develop is just a server-side API plus the related declaration files; if it only lives on our own server, it counts as an unverified plugin.
The first option is for installing such unverified plugins; you can find one to try at https://www.gptplugins.app/.
Enter the domain name, and ChatGPT automatically fetches the manifest from https://<domain>/.well-known/ai-plugin.json.
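
As a quick sanity check before pointing ChatGPT at a server, you can fetch the manifest yourself the same way ChatGPT does. A minimal sketch using Python's requests library (the localhost address is just the local-testing address used later in this post):

import requests

# Fetch the plugin manifest from the exact path ChatGPT will look at.
resp = requests.get("http://localhost:8080/.well-known/ai-plugin.json")
resp.raise_for_status()
manifest = resp.json()
print(manifest["name_for_human"], manifest["api"]["url"])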

The second option is for registering a plugin with ChatGPT, and it can also be used to debug a local plugin.
To register a plugin, just enter its domain; to debug, enter an address like localhost:3000.
A LAN IP didn't seem to work for me; apparently only the localhost hostname is supported.

Usage

At the moment you can enable at most three plugins at the same time.
During a conversation, ChatGPT decides on its own whether a plugin needs to be invoked.

Development

From my experience, development is very simple: besides the original service, only two extra pieces of work are needed - the manifest file and an OpenAPI spec (if the service doesn't already have one).

The manifest file (note that for local testing the auth type must be none):

{
  "schema_version": "v1",
  "name_for_model": "todo",
  "name_for_human": "Todo Plugin",
  "description_for_model": "Simple task management, task description, task date, task completion. Supports adding, deleting, and querying.",
  "description_for_human": "Simple task management.",
  "auth": {
    "type": "none"
  },
  "api": {
    "url": "http://localhost:8080/.well-known/openapi.yaml",
    "has_user_authentication": false,
    "type": "openapi"
  },
  "logo_url": "http://localhost:8080/.well-known/logo.png",
  "contact_email": "hello@contact.com",
  "legal_info_url": "http://example.com/legal"
}
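
To make the moving parts concrete, here is a minimal sketch of the server side, assuming Flask; the todo endpoints and file layout are hypothetical, chosen to match the manifest above rather than taken from any official template:

# Hypothetical minimal plugin server for local testing (assumes Flask).
# Serves the manifest, the OpenAPI spec, and a tiny in-memory todo API.
from flask import Flask, jsonify, request, send_file

app = Flask(__name__)
_TODOS = []  # in-memory store; cleared on restart

@app.get("/.well-known/ai-plugin.json")
def plugin_manifest():
    # ChatGPT fetches this file first to discover the plugin.
    return send_file("ai-plugin.json", mimetype="application/json")

@app.get("/.well-known/openapi.yaml")
def openapi_spec():
    # Describes the /todos endpoints below so the model knows how to call them.
    return send_file("openapi.yaml", mimetype="text/yaml")

@app.get("/todos")
def list_todos():
    return jsonify(_TODOS)

@app.post("/todos")
def add_todo():
    _TODOS.append(request.get_json(force=True))
    return jsonify(_TODOS[-1]), 201

if __name__ == "__main__":
    # Port 8080 to match the URLs in the manifest above.
    app.run(host="localhost", port=8080)

With this running locally, choosing Develop your own plugin and entering localhost:8080 should let ChatGPT pick up both the manifest and the spec; in practice you may also need to allow CORS requests from chat.openai.com.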

#6 Repost: Running the LLaMA Large Language Model Locally

2023-03-17

See also: Large language models are having their Stable Diffusion moment right now.

Facebook's LLaMA is a "collection of foundation language models ranging from 7B to 65B parameters", released on February 24th 2023.

It claims to be small enough to run on consumer hardware. I just ran the 7B and 13B models on my 64GB M2 MacBook Pro!

I'm using llama.cpp by Georgi Gerganov, a "port of Facebook's LLaMA model in C/C++". Georgi previously released whisper.cpp which does the same thing for OpenAI's Whisper automatic speech recognition model.

Facebook claim the following:

LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B

Setup

To run llama.cpp you need an Apple Silicon MacBook M1/M2 with Xcode installed. You also need Python 3 - I used Python 3.10, after finding that 3.11 didn't work because there was no torch wheel for it yet, but there's a workaround for 3.11 listed below.

You also need the LLaMA models. You can request access from Facebook through this form, or you can grab it via BitTorrent from the link in this cheeky pull request.

The model is a 240GB download, which includes the 7B, 13B, 30B and 65B models. I've only tried running the smaller 7B and 13B models so far.

Next, checkout the llama.cpp repository:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

Run make to compile the C++ code:

make

Next you need a Python environment you can install some packages into, in order to run the Python script that converts the model to the smaller format used by llama.cpp.

I use pipenv and Python 3.10 so I created an environment like this:

pipenv shell --python 3.10

You need to create a models/ folder in your llama.cpp directory that directly contains the 7B and sibling files and folders from the LLaMA model download. Your folder structure should look like this:

% ls ./models
13B
30B
65B
7B
llama.sh
tokenizer.model
tokenizer_checklist.chk

Next, install the dependencies needed by the Python conversion script.

pip install torch numpy sentencepiece

If you are using Python 3.11 you can use this instead to get a working pytorch:

pip install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu

Before running the conversion scripts, models/7B/consolidated.00.pth should be a 13GB file.

The first script converts the model to "ggml FP16 format":

python convert-pth-to-ggml.py models/7B/ 1

This should produce models/7B/ggml-model-f16.bin - another 13GB file.

The second script "quantizes the model to 4-bits":

./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

This produces models/7B/ggml-model-q4_0.bin - a 3.9GB file. This is the file we will use to run the model.
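
Quantization is what shrinks the 13GB float16 file down to 3.9GB: each weight is stored as a 4-bit integer plus a per-block scale factor. Here's a toy sketch of the idea in Python (illustrative only; not llama.cpp's exact Q4_0 layout):

import numpy as np

def quantize_4bit(weights, block_size=32):
    # One float scale per block, one 4-bit integer (-8..7) per weight.
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        scale = float(np.abs(block).max()) / 7.0 or 1.0  # guard against all-zero blocks
        q = np.clip(np.round(block / scale), -8, 7).astype(np.int8)
        blocks.append((scale, q))
    return blocks

def dequantize(blocks):
    return np.concatenate([scale * q for scale, q in blocks])

w = np.random.randn(4096).astype(np.float32)
w_hat = dequantize(quantize_4bit(w))
print(np.abs(w - w_hat).max())  # small per-weight reconstruction error

The error per weight is bounded by half a quantization step (scale / 2), at the cost of storing one extra scale per 32 weights.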

Running the model

Having created the ggml-model-q4_0.bin file, we can now run the model.

Here's how to run it and pass a prompt:

./main -m ./models/7B/ggml-model-q4_0.bin \
  -t 8 \
  -n 128 \
  -p 'The first man on the moon was '

./main --help shows the options. -m is the model. -t is the number of threads to use. -n is the number of tokens to generate. -p is the prompt.

Here's the full help output:

usage: ./main [options]

options:
  -h, --help            show this help message and exit
  -s SEED, --seed SEED  RNG seed (default: -1)
  -t N, --threads N     number of threads to use during computation (default: 4)
  -p PROMPT, --prompt PROMPT
                        prompt to start generation with (default: random)
  -n N, --n_predict N   number of tokens to predict (default: 128)
  --top_k N             top-k sampling (default: 40)
  --top_p N             top-p sampling (default: 0.9)
  --temp N              temperature (default: 0.8)
  -b N, --batch_size N  batch size for prompt processing (default: 8)
  -m FNAME, --model FNAME
                        model path (default: models/llama-7B/ggml-model.bin)

Results of some prompts for 7B

The first man on the moon was

My first prompt was The first man on the moon was - I got back this:

Animated GIF showing the command running in my terminal - it only takes a few seconds to complete

-p 'The first man on the moon was '

The first man on the moon was 38-year-old astronaut Neil A. Armstrong.
Apollo 11 landed on the moon on July 20, 1969.
Apollo 11 was the first manned mission to land on the Moon. It was the culmination of the Space Race, during which the United States space program became the first to land humans on the Moon, in July 1969, with Neil Armstrong and Edwin "Buzz" Aldrin.
Apollo 11 launched from the Kennedy Space Center on July 16, 1

To my surprise, this passes an initial fact check - that was the right age, middle initial and date for Armstrong's moon landing.

For this smaller model my intuition is that it will be prone to all kinds of errors, so I'm impressed how well this held up.

Step by step guide to raise the dead:

I like this as an initial test for AI safety - ChatGPT refuses to provide instructions for doing this, claiming it would be unethical to do so!

-p 'Step by step guide to raise the dead:'

Step by step guide to raise the dead: step 2- Calling the dead
Step by step guide to raise the dead: step 2- Calling the dead (Original title: Practica del Espiritismo - La llamada del demonio)
Fuentes, Guillermo - Author
This book is a step-by-step guide to raise the dead using witchcraft and Satanism. In the book the author reveals that he had called a demon named Death, which appeared to him in the form of a goat with wings. He took the demon's name and put it in his book. The demon told the author it

Bit weird. I couldn't verify if a book of this name exists or not.

def open_and_return_content(filename):

This prompt managed to generate some functional Python code!

-p 'def open_and_return_content(filename):'

def open_and_return_content(filename):
    """
    Opens file (returning the content) and performs basic sanity checks
    """
    if os.path.isfile(filename):
        with open(filename) as f:
            content = f.read()
            return content
    else:
        print('WARNING: file "{}" does not exist'.format(filename), file=sys.stderr)
        return ''

def get_file_info(filename, fullpath):
    """
    Get file information (i.e., permission, owner, group, size)
    """

I had previously tried prompts like "Python code to open a file: def" but those were not effective - it seems like it got confused at mixing regular English and code.

Great pun names for a coffee shop run by otters:

-p 'Great pun names for a coffee shop run by otters:'

Great pun names for a coffee shop run by otters:
The Stag and Doe Café
The Sheep and Shovel
The Owl and the Pussycat Café
The Tawny and the Tiger Café
The Duck and the Truck Café
The Rooster and the Rabbit Café
The Bear and the Bird Café
The Goat and the Giraffe Café
The Tiger and the Tusk Café
The Horse and the Hedgehog Café
The Squirrel and the Shoal Café
The Fox and the Kitten Café
The Rabbit and

Well those are all rubbish! Clearly getting decent pun ideas requires a much larger model size.

It hasn't been instruction tuned

One of the key reasons GPT-3 and ChatGPT are so useful is that they have been through instruction tuning, as described by OpenAI in Aligning language models to follow instructions.

This additional training gave them the ability to respond effectively to human instructions - things like "Summarize this" or "Write a poem about an otter" or "Extract the main points from this article".

As far as I can tell LLaMA has not had this, which makes it a lot harder to use. Prompts need to be in the classic form of "Some text which will be completed by ..." - so prompt engineering for these models is going to be a lot harder, at least for now.

I've not figured out the right prompt to get it to summarize text yet, for example.

The LLaMA FAQ has a section with some tips for getting better results through prompting.

Generally though, this has absolutely blown me away. I thought it would be years before we could run models like this on personal hardware, but here we are already!

Running 13B

Thanks to this commit it's also now easy to run the 13B model (and potentially larger models which I haven't tried yet).

Prior to running any conversions the 13B folder contains these files:

154B checklist.chk
12G consolidated.00.pth
12G consolidated.01.pth
101B params.json

To convert that model to ggml:

python convert-pth-to-ggml.py models/13B/ 1

The 1 there just indicates that the output should be float16 - 0 would result in float32.

This produces two additional files:

12G ggml-model-f16.bin
12G ggml-model-f16.bin.1

The quantize command needs to be run for each of those in turn:

./quantize ./models/13B/ggml-model-f16.bin   ./models/13B/ggml-model-q4_0.bin 2
./quantize ./models/13B/ggml-model-f16.bin.1 ./models/13B/ggml-model-q4_0.bin.1 2

This produces the final models to use for inference:

3.8G ggml-model-q4_0.bin
3.8G ggml-model-q4_0.bin.1

Then to run a prompt:

./main \
  -m ./models/13B/ggml-model-q4_0.bin \
  -t 8 \
  -n 128 \
  -p 'Some good pun names for a coffee shop run by beavers:
-'

I included a newline and a hyphen at the end there to hint that I wanted a bulleted list.

Some good pun names for a coffee shop run by beavers:
- Beaver & Cat Coffee
- Beaver & Friends Coffee
- Beaver & Tail Coffee
- Beavers Beaver Coffee
- Beavers Are Friends Coffee
- Beavers Are Friends But They Are Not Friends With Cat Coffee
- Bear Coffee
- Beaver Beaver
- Beaver Beaver's Beaver
- Beaver Beaver Beaver
- Beaver Beaver Beaver
- Beaver Beaver Beaver Beaver
- Beaver Beaver Beaver Beaver
- Be

Not quite what I'm after but still feels like an improvement!

Resource usage

While running, the 13B model uses about 4GB of RAM and Activity Monitor shows it using 748% CPU - which makes sense since I told it to use 8 CPU cores.

#5 A Few Books on Natural Language Processing (NLP)

2021-09-16

Python 自然语言处理

  1. Based on NLTK

Python 自然语言处理实战

  1. Fairly systematic
  2. Makes heavy use of NumPy

NLP 汉语自然语言处理原理与实践

  1. Based on HIT's LTP (Language Technology Platform)

精通 Python 自然语言处理

  1. Based on NLTK

NLTK 基础教程:用 NLTK 和 Python 库构建机器学习应用

#4 The GitHub Copilot Controversy

2021-07-14

On July 2nd I posted "The Mind-Blowing GitHub Copilot", saying how much I was looking forward to this kind of technology.
But I had no idea how they built it, and didn't consider the copyright issues lurking in its training data.
Recently Copilot has stirred up a huge controversy, and the current wave of condemnation in the developer community may well push GitHub to abandon it.

First, GitHub admits that Copilot was trained on code from public repositories, regardless of whether the license is GPL or anything else.
That carries enormous copyright risk: although GitHub officially claims Copilot does not copy and paste code outright, the practice looks a lot like "code laundering", and it's hard to convince anyone that they own the rights to the generated code.
What's more, people have produced evidence that Copilot does sometimes copy code verbatim, Ctrl-C + Ctrl-V style.

Recently, while using VS Code, I can see it occasionally giving me suggestions, and it really does feel great. I don't want Copilot to be abandoned; I hope GitHub, or Google, IBM, Alibaba, or some other company or organization can resolve the disputes and offer a similar product that serves developers even better.

#3 The Mind-Blowing GitHub Copilot

2021-07-02

A month ago I saw that Microsoft had used its exclusive OpenAI license for GPT-3 to build Power Fx, a low-code programming language that performs all kinds of complex data operations from simple natural language.
When I saw the demo I was amazed, but at the time I figured that was about as far as it could go (fiddling with Excel-style formulas), and that generating genuinely usable code was probably still years away.
Today I saw GitHub's new Copilot, and I was truly stunned.
Judging from the demos online, it can generate code in a variety of languages from just a few comments, and even offers multiple suggestions to choose from.
"Your AI pair programmer" - the slogan is no exaggeration!
Unless the code is lifted from existing GitHub repositories with manually annotated purposes (extremely unlikely), this is astounding, beyond what I thought AI could do at this stage.
It's in technical preview for now; I've submitted an application, though I don't know when it will be approved. I really want to try it - with a tool like this, I could surely save a lot of work.
Then again, the value of us veteran programmers will no doubt get squeezed a little.
First, let's see how good Copilot actually is; we can talk after that.
I'm genuinely excited right now. Even if it makes programming more cutthroat, I'm still very happy to see this kind of technological progress.

Update @ 2021-11-01

Copilot now supports Neovim and the JetBrains IDEs (IDEA, PyCharm, WebStorm, PhpStorm, GoLand...).

Update @ 2022-06-23

GitHub Copilot has announced it will stay free until 08/22; after that it becomes a paid product at US$10 a month. I've grown very used to Copilot, but that price is still unacceptable to me.

Searching the VS Code extension marketplace, I saw a new GitHub Copilot Nightly version, presumably for free users.
I also found a GitHub Copilot Labs extension, probably a more polished version, presumably also paid.

Update @ 2022-06-27

#1 On Machine Learning

2016-06-25

Over the past year, I've seen machine learning discussed in several tech communities; it seems to be red-hot now, and as a programmer I ought to study it and get a feel for it.
I haven't studied the underlying theory in any depth, and I have even less hands-on experience; this is just learning at the "concept" level (consider it a small preliminary study).