Now it’s EASY to do function calling with DeepSeek R1


Function Calling with DeepSeek R1

🚀 Excited to share that node-llama-cpp now includes special optimizations for DeepSeek R1 models, improving function calling performance and stability. Let’s dive into the details and see how you can leverage this powerful feature.

Basic Example: Function Calling with DeepSeek R1

Here’s a quick example demonstrating function calling in action:

import {getLlama, LlamaChatSession, defineChatSessionFunction, resolveModelFile} from "node-llama-cpp";


// Resolve the model URI to a local file path, downloading the model first if needed
const modelPath = await resolveModelFile("hf:mradermacher/DeepSeek-R1-Distill-Qwen-7B-GGUF:Q4_K_M");

const llama = await getLlama();
const model = await llama.loadModel({modelPath});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Functions the model is allowed to call while answering the prompt
const functions = {
    getTime: defineChatSessionFunction({
        description: "Get the current time",
        handler() {
            return new Date().toLocaleTimeString("en-US");
        }
    })
};


const q1 = "What's the time?";
console.log("User: " + q1);

const a1 = await session.prompt(q1, {functions});
console.log("AI: " + a1.trim());

Looking for the best models to try out? Here are some top picks:

Model                          Size     URI
DeepSeek R1 Distill Qwen 7B    4.68GB   hf:mradermacher/DeepSeek-R1-Distill-Qwen-7B-GGUF:Q4_K_M
DeepSeek R1 Distill Qwen 14B   8.99GB   hf:mradermacher/DeepSeek-R1-Distill-Qwen-14B-GGUF:Q4_K_M
DeepSeek R1 Distill Qwen 32B   19.9GB   hf:mradermacher/DeepSeek-R1-Distill-Qwen-32B-GGUF:Q4_K_M

Pro Tip: The 7B model handles the first prompt well, but its responses tend to degrade over subsequent turns. For consistent quality across multiple prompts, consider using a larger model; see the sketch below for what a multi-turn session looks like.
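
As an illustration of such a multi-turn session, here's a continuation of the example above. The follow-up question is made up for demonstration; the key point is that session.prompt() reuses the accumulated chat history:

const q3 = "And what time will it be in 3 hours?";
console.log("User: " + q3);

// The session keeps the full chat history, so this prompt builds on the previous turns;
// this is where the 7B distill tends to lose quality while larger models hold up better
const a3 = await session.prompt(q3, {functions});
console.log("AI: " + a3.trim());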

Usage Tip

Before downloading, estimate your machine’s compatibility with the model using:

npx -y node-llama-cpp inspect estimate <model-uri>
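
For example, to check the 7B model from the table above:

npx -y node-llama-cpp inspect estimate hf:mradermacher/DeepSeek-R1-Distill-Qwen-7B-GGUF:Q4_K_M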

Try It with the CLI

You can also try function calling directly from the command line using the chat command with the --ef flag, which provides the model with built-in environment functions (such as getting the current date and time):

npx -y node-llama-cpp chat --ef --prompt "What is the time?" <model-uri>
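
For instance, with the 7B model from the table above (assuming, as with the estimate command, that the model URI is passed as the final argument):

npx -y node-llama-cpp chat --ef --prompt "What is the time?" hf:mradermacher/DeepSeek-R1-Distill-Qwen-7B-GGUF:Q4_K_M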

What do you think? Is this useful? What are you going to use it for?

Let me know in the comments 🙂
