With direct Colab integration and a new ability to write code, can Google Bard's coding power surpass ChatGPT's?
Bard can now not only generate and debug code, but also explain it to you.
Although Google led global AI progress over the past decade, in recent months it has found itself playing catch-up with Microsoft and OpenAI. To that end, Google even merged Google Brain and DeepMind this week.
At the end of last year, the arrival of ChatGPT shook the technology industry. In February of this year, Google released Bard, a ChatGPT competitor, to mixed reviews. Many users asked the developers when it would be able to write code, and this Friday, Bard finally gained that ability.
Many people use Google Colab to run machine learning models, and it comes with free cloud GPU compute. Users can now export Python code generated by Bard straight to Google Colab, without even copying and pasting. Bard can also assist with writing functions for Google Sheets.
Google previously opened Bard to users in the United States and the United Kingdom, and those users can already use all of the new features directly.
Google demonstrated Bard writing code. Like ChatGPT, Bard can now generate code that accomplishes a task described in natural language.
The ability to explain code is especially useful for beginners in programming.
In addition to generating and explaining code, Bard can also help users debug, including code Bard itself generated. If the generated code does not work as expected, the user can simply tell Bard "this code didn't work, please fix it," and Bard will assist with debugging.
Bard's code generation aims to apply generative AI to accelerating software development and solving complex engineering challenges. It is a fine vision, but Bard's current abilities do not yet fully live up to it.
Google states that Bard is still at an early experimental stage and may provide inaccurate, misleading, or false information, generate code that does not produce the expected output, or produce suboptimal or incomplete code. Before using code generated by Bard, users should carefully review it and test it for errors and vulnerabilities.
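The reviewing-and-testing advice above can be put into practice with a few quick assertions before trusting generated code. As a sketch (the function `merge_sorted` is a hypothetical stand-in for model-generated code, not an example from Google):

```python
def merge_sorted(a, b):
    """Hypothetical model-generated helper: merge two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])  # append whichever list still has elements
    out.extend(b[j:])
    return out

# Minimal sanity checks, including edge cases, before relying on the code:
assert merge_sorted([1, 3], [2, 4]) == [1, 2, 3, 4]
assert merge_sorted([], [5]) == [5]
assert merge_sorted([1, 1], [1]) == [1, 1, 1]
```

A handful of edge-case assertions like these will not prove correctness, but they catch the obvious breakage that Google warns generated code may contain.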
For a language model being tested at scale, newly launched features are bound to run into all sorts of problems, and Bard's coding ability is no exception.
First of all, when Bard gives an answer, it includes links to the code it referenced. This matters for a product aimed at practical use, and it has been well received.
Users can ask Bard, "Can you help me implement a basic RNN and test it on dummy text data?" and then export the generated code directly into Google Colab. Sometimes part of the code does not run; point out the bug to Bard, it will fix it, and everything seems to work. All that remains is to verify that the implementation is actually correct, by hand and, if necessary, with some unit tests.
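The kind of code such a prompt produces can be sketched as follows. This is an illustrative minimal character-level RNN forward pass with NumPy, not Bard's actual output; the names and hyperparameters are assumptions, and the assertions at the end show the sort of unit-test check the article recommends:

```python
import numpy as np

# Dummy text data and a tiny character vocabulary.
rng = np.random.default_rng(0)
text = "hello world"
vocab = sorted(set(text))
char_to_ix = {ch: i for i, ch in enumerate(vocab)}
V, H = len(vocab), 16  # vocabulary size, hidden size

# Randomly initialized RNN parameters.
Wxh = rng.normal(0, 0.01, (H, V))  # input -> hidden
Whh = rng.normal(0, 0.01, (H, H))  # hidden -> hidden
Why = rng.normal(0, 0.01, (V, H))  # hidden -> output
bh, by = np.zeros(H), np.zeros(V)

def rnn_forward(seq, h):
    """Return per-step next-character probabilities and the final hidden state."""
    probs = []
    for ch in seq:
        x = np.zeros(V)
        x[char_to_ix[ch]] = 1.0               # one-hot encode the character
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # recurrent hidden-state update
        y = Why @ h + by                      # output logits
        p = np.exp(y - y.max())               # numerically stable softmax
        p /= p.sum()
        probs.append(p)
    return probs, h

probs, h_final = rnn_forward(text, np.zeros(H))

# Unit-test-style sanity checks, as suggested above:
assert len(probs) == len(text)                              # one output per step
assert all(abs(p.sum() - 1.0) < 1e-9 for p in probs)        # valid distributions
assert h_final.shape == (H,)                                # hidden state shape
```

Checks like these confirm the shapes and probability outputs are sane; verifying that the recurrence itself matches the intended architecture still requires reading the code.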
Google Colab's export function is really useful
Finally, someone tried using Bard to generate code in the venerable programming language COBOL, and the results were surprisingly good.
There has been concern that as the current cohort of COBOL programmers retires, key positions will go unfilled. It seems AI may help solve this sizable problem.
However, some netizens said that Bard's ability still seems inferior to GPT-4's.
Whether AI-assisted programming will ultimately change the way we work remains to be seen.