Improving the Inference Performance of LLMs with Code
Abstract
Large Language Models (LLMs) have demonstrated remarkable performance on a variety of
natural language understanding and generation tasks given only a few examples or natural
language instructions, reducing the need for extensive feature engineering. However, LLMs
remain comparatively weak at reasoning and problem solving. We propose a new construction
that addresses this deficiency in mathematical and logical reasoning.