
Helicone

This page covers how to use Helicone within LangChain.

What is Helicone?

Helicone is an open-source observability platform that proxies your OpenAI traffic and provides key insights into your spend, latency, and usage.


Quick start

Within your LangChain environment, you just need to add the following parameter to route your OpenAI requests through Helicone's proxy.

// The import path may differ by LangChain version (e.g. "@langchain/openai" in newer releases).
import { OpenAI } from "langchain/llms/openai";

// Route OpenAI requests through Helicone's proxy by overriding the base path.
const model = new OpenAI(
  {},
  {
    basePath: "https://oai.hconeai.com/v1",
  }
);
const res = await model.invoke("What is a helicone?");

Now head over to helicone.ai to create your account, and add your OpenAI API key within the Helicone dashboard to view your logs.
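
If you prefer to associate requests with your Helicone account per request rather than only through the dashboard setup, Helicone also accepts an auth header on each proxied call. The snippet below is a minimal sketch, assuming a HELICONE_API_KEY environment variable and the Helicone-Auth header format described in Helicone's docs.

// Minimal sketch: authenticate proxied requests with a Helicone API key.
// Assumes process.env.HELICONE_API_KEY is set; check Helicone's docs for the exact header format.
const model = new OpenAI(
  {},
  {
    basePath: "https://oai.hconeai.com/v1",
    baseOptions: {
      headers: {
        "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
      },
    },
  }
);
const res = await model.invoke("What is a helicone?");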


How to enable Helicone caching
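
Helicone can cache responses at the proxy so that repeated identical requests are served from the cache instead of hitting OpenAI again. Enable it by sending the Helicone-Cache-Enabled header: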

const model = new OpenAI(
  {},
  {
    basePath: "https://oai.hconeai.com/v1",
    baseOptions: {
      headers: {
        // Instruct Helicone to cache responses for repeated requests.
        "Helicone-Cache-Enabled": "true",
      },
    },
  }
);
const res = await model.invoke("What is a helicone?");

Helicone caching docs

How to use Helicone custom properties
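
Custom properties attach your own metadata (for example a session, conversation, or app identifier) to each request, so you can segment and filter your traffic in the Helicone dashboard: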

const model = new OpenAI(
  {},
  {
    basePath: "https://oai.hconeai.com/v1",
    baseOptions: {
      headers: {
        // Attach custom properties as Helicone-Property-* headers; names and values are up to you.
        "Helicone-Property-Session": "24",
        "Helicone-Property-Conversation": "support_issue_2",
        "Helicone-Property-App": "mobile",
      },
    },
  }
);
const res = await model.invoke("What is a helicone?");

Helicone property docs
