Topping GitHub's trending list: an open-source GPT-4 code interpreter that can install any Python library and runs in your local terminal
ChatGPT's code interpreter can now run on your own computer.
A developer has just released a local version of the code interpreter on GitHub, and it quickly climbed to the top of GitHub's trending list with over 3k stars.
Not only does it keep GPT-4's original capabilities; crucially, it can also connect to the Internet.
When news broke that ChatGPT had been "cut off" from the web, it caused an uproar, and the browsing feature has now been silent for months. This project finally offers a way around that.
And since the code runs locally, it also removes several other limitations of the web version beyond the lack of internet access:
Only 50 messages can be sent every 3 hours
Only a limited set of Python modules is supported
Files larger than 100 MB cannot be processed
Files generated during a session are deleted once the window is closed
If you don't have an API key, you can also swap in the open-source Code-LLaMA as the model.
Soon after this code interpreter launched, some netizens said they were already looking forward to a web version:
So let's take a look at what this local code interpreter can actually do!
Letting GPT go back online
Since it calls the GPT-4 API, it naturally supports everything GPT-4 can do, and that includes Chinese.
We won't walk through GPT's built-in capabilities in detail here.
It is worth noting, though, that with a code interpreter attached, GPT's math ability jumps up several notches.
So we test it with a tricky differentiation problem: find the derivative of f(x)=√(x+√(x+√x)).
Emmm... the first result is a bit off, but that is probably a matter of prompt wording, so let's revise the prompt:
Then we get this result:
The formula looks different from the standard answer, but is that just a difference in form? We verified it:
The result is correct!
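You can run the same kind of check yourself by comparing a hand-derived formula against a numerical derivative. Here is a minimal sketch using our own chain-rule derivation (not the interpreter's output):

```python
import math

def f(x):
    # The function from the problem: f(x) = sqrt(x + sqrt(x + sqrt(x)))
    return math.sqrt(x + math.sqrt(x + math.sqrt(x)))

def f_prime(x):
    # Chain rule applied by hand, working from the innermost radical outward
    inner = math.sqrt(x)                           # sqrt(x)
    mid = math.sqrt(x + inner)                     # sqrt(x + sqrt(x))
    mid_prime = (1 + 1 / (2 * inner)) / (2 * mid)  # d/dx sqrt(x + sqrt(x))
    return (1 + mid_prime) / (2 * math.sqrt(x + mid))

def numeric_derivative(g, x, h=1e-6):
    # Central finite difference, for cross-checking the closed form
    return (g(x + h) - g(x - h)) / (2 * h)

# The two values should agree to high precision for any x > 0
print(abs(f_prime(2.0) - numeric_derivative(f, 2.0)))
```

If the analytic and numerical values agree at several points, the formula is almost certainly right even when it is written in a different form from the textbook answer.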
Now for the highlight: let's see whether this code interpreter's internet access is just a gimmick.
For example, suppose we want to see what's in the news lately.
The program first checks whether the necessary modules are installed, automatically installs any that are missing, and then starts fetching the web page.
Admittedly, watching the code scroll down the screen as it reads an entire web page would be a little nerve-wracking if it were not running locally...
The program then analyzes which field of the page holds the news headlines and extracts them.
Fortunately, after a fair amount of back and forth, we finally got the result we wanted:
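The check-then-install step it performs can be sketched roughly like this. The `ensure_module` helper below is a hypothetical illustration, not the project's actual code:

```python
import importlib.util
import subprocess
import sys

def ensure_module(name: str) -> bool:
    """Install a package with pip if it cannot be imported yet.

    Returns True if the module is importable afterwards.
    """
    if importlib.util.find_spec(name) is not None:
        return True  # already installed, nothing to do
    # Invoke pip through the current interpreter so the right
    # environment receives the package
    subprocess.check_call([sys.executable, "-m", "pip", "install", name])
    importlib.invalidate_caches()
    return importlib.util.find_spec(name) is not None
```

For example, `ensure_module("json")` returns immediately because the standard library is always present, while a missing third-party name would trigger a pip install first.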
Besides letting it search on its own, you can also hand it a specific web page to analyze:
After another frantic round of loading, the code interpreter successfully reproduced its self-introduction.
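Analyzing a given page mostly comes down to parsing the HTML it fetched. The interpreter writes its own scraping code on the fly; as an illustration, here is a stdlib-only sketch that pulls out a page's `<title>` text:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collect the text inside the page's <title> tag."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# In practice the HTML string would come from the fetched page
extractor = TitleExtractor()
extractor.feed("<html><head><title>Open Interpreter</title></head>"
               "<body>...</body></html>")
print(extractor.title)  # Open Interpreter
```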
Up to this point it still looks like a local replica of the ChatGPT web version. Does it have any more advanced features?
Of course! Say we want to change a system setting but don't know how.
With the web version, we would most likely get back a long wall of text instructions; now we can simply hand the job to the code interpreter.
Instead of producing a long, hard-to-follow tutorial, it runs the code itself and gets it done in one step.
Besides having GPT-4 generate code, you can also use it to call existing tools from code repositories.
For example, to add subtitles to a video, you can call a ready-made speech recognition model on Replicate.
Since we had no suitable footage on hand, here is a demo provided by the developer:
Because the code runs locally, there is no need to worry about the video exceeding a size limit.
In short, once it finishes running, subtitles appear below the video:
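Whichever recognition model produces the transcript, turning it into subtitles requires timestamps in SubRip (SRT) format. A small helper for that conversion (illustrative only, not part of the tool):

```python
def srt_timestamp(seconds: float) -> str:
    """Convert a time in seconds to SRT's HH:MM:SS,mmm format."""
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def srt_block(index: int, start: float, end: float, text: str) -> str:
    """Format one numbered subtitle cue in SRT syntax."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

print(srt_block(1, 0.0, 2.5, "Hello, world"))
```

Tools like ffmpeg can then burn the resulting .srt file onto the video.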
Similarly, you can use this capability to generate and edit documents or images, or call ControlNet to turn a static image into an animation...
In theory, as long as your hardware can keep up, it can do anything Python can.
So, how can you try this local code interpreter yourself?
The author has posted a Colab notebook on the GitHub project page (link at the end of this article), and those with access can try it there directly.
Local installation is also very simple (provided Python is installed): a single command, "pip install open-interpreter", does it.
Once installed, type "interpreter" in the terminal to launch it.
The program will then ask for a GPT-4 API key; if you only have GPT-3.5 access, launch it with "interpreter --fast" instead.
If you don't have GPT-3.5 either, just press Enter at this step, or launch with "interpreter --local" to switch to Code-LLaMA.
It comes in three sizes: 7B, 13B, and 34B. The smaller the model, the faster it runs; the larger, the more accurate the results.
If Code-LLaMA is not installed, the program will guide you through installing it automatically.
Also, by default the generated code must be confirmed before it runs; if you don't want to confirm every time, add "-y" to the command at launch.
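The startup flags described above can be summarized in code. This is a hypothetical re-creation with Python's argparse, based only on the behavior described here, not the tool's actual source:

```python
import argparse

# Hypothetical reconstruction of the CLI flags mentioned in this article
parser = argparse.ArgumentParser(prog="interpreter")
parser.add_argument("--fast", action="store_true",
                    help="use GPT-3.5 instead of GPT-4")
parser.add_argument("--local", action="store_true",
                    help="run Code-LLaMA locally instead of calling an API")
parser.add_argument("-y", "--yes", action="store_true",
                    help="run generated code without asking for confirmation")

# e.g. "interpreter --local -y" runs Code-LLaMA without confirmations
args = parser.parse_args(["--local", "-y"])
print(args.local, args.yes, args.fast)  # True True False
```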
That covers the common commands; for more advanced usage, see the author's Colab notebook.
Try it out if you like it!
GitHub project page: https://github.com/KillianLucas/open-interpreter