OpenAI TypeScript and JavaScript API Library

This library provides convenient access to the OpenAI REST API from TypeScript or
JavaScript.

It is generated from our OpenAPI specification with Stainless.

To learn how to use the OpenAI API, check out our API Reference and Documentation.

Installation
npm install openai
Installation from JSR
deno add jsr:@openai/openai
npx jsr add @openai/openai
These commands will make the module importable from the @openai/openai scope:
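import OpenAI from '@openai/openai';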

You can also import directly from JSR without an install step if you're using the
Deno JavaScript runtime:

import OpenAI from 'jsr:@openai/openai';


Usage
The full API of this library can be found in the api.md file, along with many code examples. The code below shows how to get started using the chat completions API.

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const chatCompletion = await client.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-4o',
  });
}

main();
Streaming responses
We provide support for streaming responses using Server Sent Events (SSE).

import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const stream = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }
}

main();
If you need to cancel a stream, you can break from the loop or call
stream.controller.abort().
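For example, a minimal sketch reusing the stream from above (the stop condition is made up for illustration):

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content || '';
  process.stdout.write(delta);
  if (delta.includes('test')) break; // hypothetical condition; breaking cancels the stream
}

// or, from outside the loop:
stream.controller.abort();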

Chat Completion streaming helpers
This library also provides several conveniences for streaming chat completions, for example:

import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const stream = await openai.beta.chat.completions.stream({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  });

  stream.on('content', (delta, snapshot) => {
    process.stdout.write(delta);
  });

  // or, equivalently:
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
  }

  const chatCompletion = await stream.finalChatCompletion();
  console.log(chatCompletion); // {id: "…", choices: […], …}
}

main();
See helpers.md for more details.

Request & Response types
This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const params: OpenAI.Chat.ChatCompletionCreateParams = {
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-4o',
  };
  const chatCompletion: OpenAI.Chat.ChatCompletion = await client.chat.completions.create(params);
}

main();
Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.

File uploads
Request parameters that correspond to file uploads can be passed in many different
forms:
File (or an object with the same structure)
a fetch Response (or an object with the same structure)
an fs.ReadStream
the return value of our toFile helper
import fs from 'fs';
import fetch from 'node-fetch';
import OpenAI, { toFile } from 'openai';

const client = new OpenAI();

// If you have access to Node `fs` we recommend using `fs.createReadStream()`:
await client.files.create({ file: fs.createReadStream('input.jsonl'), purpose: 'fine-tune' });

// Or if you have the web `File` API you can pass a `File` instance:
await client.files.create({ file: new File(['my bytes'], 'input.jsonl'), purpose: 'fine-tune' });

// You can also pass a `fetch` `Response`:
await client.files.create({ file: await fetch('https://somesite/input.jsonl'), purpose: 'fine-tune' });

// Finally, if none of the above are convenient, you can use our `toFile` helper:
await client.files.create({
  file: await toFile(Buffer.from('my bytes'), 'input.jsonl'),
  purpose: 'fine-tune',
});
await client.files.create({
  file: await toFile(new Uint8Array([0, 1, 2]), 'input.jsonl'),
  purpose: 'fine-tune',
});
Handling errors
When the library is unable to connect to the API, or if the API returns a non-
success status code (i.e., 4xx or 5xx response), a subclass of APIError will be
thrown:

async function main() {
  const job = await client.fineTuning.jobs
    .create({ model: 'gpt-4o', training_file: 'file-abc123' })
    .catch(async (err) => {
      if (err instanceof OpenAI.APIError) {
        console.log(err.request_id);
        console.log(err.status); // 400
        console.log(err.name); // BadRequestError
        console.log(err.headers); // {server: 'nginx', ...}
      } else {
        throw err;
      }
    });
}

main();
Error codes are as follows:

Status Code   Error Type
400           BadRequestError
401           AuthenticationError
403           PermissionDeniedError
404           NotFoundError
422           UnprocessableEntityError
429           RateLimitError
>=500         InternalServerError
N/A           APIConnectionError
Retries
Certain errors will be automatically retried 2 times by default, with a short
exponential backoff. Connection errors (for example, due to a network connectivity
problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal
errors will all be retried by default.

You can use the maxRetries option to configure or disable this:

// Configure the default for all requests:
const client = new OpenAI({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await client.chat.completions.create(
  { messages: [{ role: 'user', content: 'How can I get the name of the current day in JavaScript?' }], model: 'gpt-4o' },
  {
    maxRetries: 5,
  },
);
Timeouts
Requests time out after 10 minutes by default. You can configure this with a
timeout option:

// Configure the default for all requests:
const client = new OpenAI({
  timeout: 20 * 1000, // 20 seconds (default is 10 minutes)
});

// Override per-request:
await client.chat.completions.create(
  { messages: [{ role: 'user', content: 'How can I list all files in a directory using Python?' }], model: 'gpt-4o' },
  {
    timeout: 5 * 1000,
  },
);
On timeout, an APIConnectionTimeoutError is thrown.

Note that requests which time out will be retried twice by default.
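A minimal sketch of catching it (the 1-second timeout is deliberately short for illustration; error classes are available as statics on the OpenAI class, as with OpenAI.APIError above):

try {
  await client.chat.completions.create(
    { messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' },
    { timeout: 1 * 1000 },
  );
} catch (err) {
  if (err instanceof OpenAI.APIConnectionTimeoutError) {
    console.log('Request timed out (after the default retries)');
  } else {
    throw err;
  }
}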

Request IDs
For more information on debugging requests, see these docs

All object responses in the SDK provide a _request_id property which is added from
the x-request-id response header so that you can quickly log failing requests and
report them back to OpenAI.

const completion = await client.chat.completions.create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' });
console.log(completion._request_id); // req_123
You can also access the Request ID using the .withResponse() method:

const { data: stream, request_id } = await openai.chat.completions
  .create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: 'Say this is a test' }],
    stream: true,
  })
  .withResponse();
Auto-pagination
List methods in the OpenAI API are paginated. You can use the for await … of syntax
to iterate through items across all pages:

async function fetchAllFineTuningJobs(params) {
  const allFineTuningJobs = [];
  // Automatically fetches more pages as needed.
  for await (const fineTuningJob of client.fineTuning.jobs.list({ limit: 20 })) {
    allFineTuningJobs.push(fineTuningJob);
  }
  return allFineTuningJobs;
}
Alternatively, you can request a single page at a time:

let page = await client.fineTuning.jobs.list({ limit: 20 });
for (const fineTuningJob of page.data) {
  console.log(fineTuningJob);
}

// Convenience methods are provided for manually paginating:
while (page.hasNextPage()) {
  page = await page.getNextPage();
  // ...
}
Realtime API Beta
The Realtime API enables you to build low-latency, multi-modal conversational
experiences. It currently supports text and audio as both input and output, as well
as function calling through a WebSocket connection.

import { OpenAIRealtimeWebSocket } from 'openai/beta/realtime/websocket';

const rt = new OpenAIRealtimeWebSocket({ model: 'gpt-4o-realtime-preview-2024-12-17' });

rt.on('response.text.delta', (event) => process.stdout.write(event.delta));

For more information see realtime.md.

Microsoft Azure OpenAI
To use this library with Azure OpenAI, use the AzureOpenAI class instead of the OpenAI class.

[!IMPORTANT] The Azure API shape differs slightly from the core API shape, which means that the static types for responses / params won't always be correct.

import { AzureOpenAI } from 'openai';
import { getBearerTokenProvider, DefaultAzureCredential } from '@azure/identity';

const credential = new DefaultAzureCredential();
const scope = 'https://cognitiveservices.azure.com/.default';
const azureADTokenProvider = getBearerTokenProvider(credential, scope);

const openai = new AzureOpenAI({ azureADTokenProvider, apiVersion: '<The API version, e.g. 2024-10-01-preview>' });

const result = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Say hello!' }],
});
console.log(result.choices[0]!.message?.content);
For more information on support for the Azure API, see azure.md.

Automated function calls
We provide the openai.beta.chat.completions.runTools({…}) convenience helper for using function tool calls with the /chat/completions endpoint, which automatically calls the JavaScript functions you provide and sends their results back to the /chat/completions endpoint, looping as long as the model requests tool calls.
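A sketch of how this might look; the getWeather tool, its JSON schema, and the prompt are illustrative placeholders rather than part of the library:

import OpenAI from 'openai';

const client = new OpenAI();

// Hypothetical tool implementation; the helper calls it with the parsed arguments.
async function getWeather(args: { location: string }) {
  return { temperature: '72F', conditions: 'sunny' };
}

async function main() {
  const runner = client.beta.chat.completions
    .runTools({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'How is the weather in Boston?' }],
      tools: [
        {
          type: 'function',
          function: {
            function: getWeather,
            parse: JSON.parse, // parses the model's JSON arguments before the call
            parameters: {
              type: 'object',
              properties: { location: { type: 'string' } },
            },
          },
        },
      ],
    })
    .on('message', (message) => console.log(message));

  const finalContent = await runner.finalContent();
  console.log('Final content:', finalContent);
}

main();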

For more information see helpers.md.

Advanced Usage
Accessing raw Response data (e.g., headers)
The "raw" Response returned by fetch() can be accessed through the .asResponse()
method on the APIPromise type that all methods return.

You can also use the .withResponse() method to get the raw Response along with the
parsed data.

const client = new OpenAI();

const response = await client.chat.completions
  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' })
  .asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object

const { data: chatCompletion, response: raw } = await client.chat.completions
  .create({ messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' })
  .withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(chatCompletion);
Making custom/undocumented requests
This library is typed for convenient access to the documented API. If you need to
access undocumented endpoints, params, or response properties, the library can
still be used.

Undocumented endpoints
To make requests to undocumented endpoints, you can use client.get, client.post,
and other HTTP verbs. Options on the client, such as retries, will be respected
when making these requests.

await client.post('/some/path', {
  body: { some_prop: 'foo' },
  query: { some_query_arg: 'bar' },
});
Undocumented request params
To make requests using undocumented parameters, you may use // @ts-expect-error on
the undocumented parameter. This library doesn't validate at runtime that the
request matches the type, so any extra values you send will be sent as-is.

client.foo.create({
  foo: 'my_param',
  bar: 12,
  // @ts-expect-error baz is not yet public
  baz: 'undocumented option',
});
For requests with the GET verb, any extra params will be sent in the query; all other requests will send the extra params in the body.

If you want to explicitly send an extra argument, you can do so with the query, body, and headers request options, as in the sketch below.
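A minimal sketch (the extra query argument and header names are made up):

await client.chat.completions.create(
  { messages: [{ role: 'user', content: 'Say this is a test' }], model: 'gpt-4o' },
  {
    query: { my_query_arg: 'value' }, // hypothetical extra query param
    headers: { 'X-My-Header': 'value' }, // hypothetical extra header
  },
);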

Undocumented response properties
To access undocumented response properties, you may use // @ts-expect-error on the line that accesses them, or cast the response object to the requisite type. Like the request params, we do not validate or strip extra properties from the API response.
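For example (secret_prop is a hypothetical field, not a real API property):

const completion = await client.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-4o',
});

// @ts-expect-error `secret_prop` is hypothetical and not in the public types
console.log(completion.secret_prop);

// or, cast to a type that includes the extra property:
const extended = completion as OpenAI.Chat.ChatCompletion & { secret_prop?: string };
console.log(extended.secret_prop);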

Customizing the fetch client
By default, this library uses node-fetch in Node, and expects a global fetch function in other environments.

If you would prefer to use a global, web-standards-compliant fetch function even in a Node environment (for example, if you are running Node with --experimental-fetch or using NextJS, which polyfills with undici), add the following import before your first import from 'openai':

// Tell TypeScript and the package to use the global web fetch instead of node-fetch.
// Note, despite the name, this does not add any polyfills, but expects them to be provided if needed.
import 'openai/shims/web';
import OpenAI from 'openai';
To do the inverse, add import "openai/shims/node" (which does import polyfills).
This can also be useful if you are getting the wrong TypeScript types for Response
(more details).

Logging and middleware
You may also provide a custom fetch function when instantiating the client, which can be used to inspect or alter the Request or Response before/after each request:

import { fetch } from 'undici'; // as one example
import OpenAI from 'openai';

const client = new OpenAI({
  fetch: async (url: RequestInfo, init?: RequestInit): Promise<Response> => {
    console.log('About to make a request', url, init);
    const response = await fetch(url, init);
    console.log('Got response', response);
    return response;
  },
});
Note that if given a DEBUG=true environment variable, this library will log all
requests and responses automatically. This is intended for debugging purposes only
and may change in the future without notice.

Configuring an HTTP(S) Agent (e.g., for proxies)
By default, this library uses a stable agent for all http/https requests to reuse TCP connections, eliminating many TCP & TLS handshakes and shaving around 100ms off most requests.

If you would like to disable or customize this behavior, for example to use the API behind a proxy, you can pass an httpAgent which is used for all requests (be they http or https), for example:
import http from 'http';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Configure the default for all requests:
const client = new OpenAI({
  httpAgent: new HttpsProxyAgent(process.env.PROXY_URL),
});

// Override per-request:
await client.models.list({
  httpAgent: new http.Agent({ keepAlive: false }),
});
Semantic versioning
This package generally follows SemVer conventions, though certain backwards-
incompatible changes may be released as minor versions:

Changes that only affect static types, without breaking runtime behavior.
Changes to library internals which are technically public but not intended or
documented for external use. (Please open a GitHub issue to let us know if you are
relying on such internals.)
Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a
smooth upgrade experience.

We are keen for your feedback; please open an issue with questions, bugs, or
suggestions.

Requirements
TypeScript >= 4.5 is supported.

The following runtimes are supported:

Node.js 18 LTS or later (non-EOL) versions.

Deno v1.28.0 or higher.

Bun 1.0 or later.

Cloudflare Workers.

Vercel Edge Runtime.

Jest 28 or greater with the "node" environment ("jsdom" is not supported at this
time).

Nitro v2.6 or greater.

Web browsers: disabled by default to avoid exposing your secret API credentials. Enable browser support by explicitly setting dangerouslyAllowBrowser to true, as in the sketch below.
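A minimal sketch (the placeholder key is illustrative; see the explanation below before doing this in production):

const client = new OpenAI({
  apiKey: 'My API Key', // exposed to anyone who can inspect your page
  dangerouslyAllowBrowser: true,
});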

More explanation
Why is this dangerous?
Enabling the dangerouslyAllowBrowser option can be dangerous because it exposes your secret API credentials in the client-side code. Web browsers are inherently less secure than server environments; any user with access to the browser can potentially inspect, extract, and misuse these credentials. This could lead to unauthorized access using your credentials and potentially compromise sensitive data or functionality.
When might this not be dangerous?
There are certain scenarios where enabling browser support might not pose significant risks:

Internal Tools: If the application is used solely within a controlled internal environment where the users are trusted, the risk of credential exposure can be mitigated.
Public APIs with Limited Scope: If your API has very limited scope and the exposed credentials do not grant access to sensitive data or critical operations, the potential impact of exposure is reduced.
Development or debugging purposes: Enabling this feature temporarily might be acceptable, provided the credentials are short-lived, aren't also used in production environments, or are frequently rotated.
Note that React Native is not supported at this time.

If you are interested in other runtime environments, please open or upvote an issue
on GitHub.
