Published: May 20, 2025, Last updated: July 21, 2025
With the Prompt API, you can send natural language requests to Gemini Nano in the browser.
There are many ways you can use the Prompt API in a web application or website. For example, you could build:
- AI-powered search: Answer questions based on the content of a web page.
- Personalized news feeds: Build a feed that dynamically classifies articles into categories and lets users filter by category.
These are just a few possibilities, and we're excited to see what you create.
Review the hardware requirements
The following requirements apply to developers and to users of features built with these APIs in Chrome. Other browsers may have different operating requirements.
The Language Detector and Translator APIs work in Chrome on desktop. These APIs do not work on mobile devices. The Prompt API, Summarizer API, Writer API, and Rewriter API work in Chrome when the following conditions are met:
- Operating system: Windows 10 or 11; macOS 13+ (Ventura and onwards); or Linux. Chrome for Android, iOS, and ChromeOS are not yet supported by the APIs which use Gemini Nano.
- Storage: At least 22 GB of free space on the volume that contains your Chrome profile.
- GPU: Strictly more than 4 GB of VRAM.
- Network: Unlimited data or an unmetered connection.
Gemini Nano's exact size may vary as the browser updates the model. To determine the current size, visit chrome://on-device-internals and go to Model status. Open the listed File path to determine the model size.
Use the Prompt API
Before you use this API, acknowledge Google's Generative AI Prohibited Uses Policy.
There are two functions available to you in the LanguageModel namespace:

- availability() to check what the model is capable of and if it's available.
- create() to start a language model session.
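Before calling either function, you can feature-detect the API. This is a minimal sketch; it only checks that the LanguageModel namespace exists, not that the model is ready:

```js
if ('LanguageModel' in self) {
  // The Prompt API is supported; next, check availability().
} else {
  // Fall back to server-side inference or hide the feature.
}
```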
Model download
The Prompt API uses the Gemini Nano model in Chrome. While the API is built into Chrome, the model is downloaded separately the first time an origin uses the API.
To determine if the model is ready to use, call the asynchronous LanguageModel.availability() function. It returns one of the following responses:

- "unavailable" means that the implementation does not support the requested options, or does not support prompting a language model at all.
- "downloadable" means that the implementation supports the requested options, but it will have to download something (for example, the language model itself, or a fine-tuning) before it can create a session using those options.
- "downloading" means that the implementation supports the requested options, but will need to finish an ongoing download operation before it can create a session using those options.
- "available" means that the implementation supports the requested options without requiring any new downloads.
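For example, a minimal sketch that branches on these values before creating a session:

```js
const availability = await LanguageModel.availability();

if (availability === 'unavailable') {
  // Prompting isn't supported on this device; offer a fallback.
} else if (availability === 'available') {
  // The model is ready; create() won't need to download anything.
} else {
  // 'downloadable' or 'downloading': create() triggers or waits for
  // the model download, so monitor progress (see the next example).
}
```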
To trigger the model download and create the language model session, call the asynchronous LanguageModel.create() function. If the response to availability() was "downloadable", it's best practice to listen for download progress. This way, you can inform the user in case the download takes time.
```js
const session = await LanguageModel.create({
  monitor(m) {
    m.addEventListener('downloadprogress', (e) => {
      console.log(`Downloaded ${e.loaded * 100}%`);
    });
  },
});
```
Model capabilities
The params() function informs you of the language model's parameters. The object has the following fields:

- defaultTopK: The default top-K value (default: 3).
- maxTopK: The maximum top-K value (8).
- defaultTemperature: The default temperature (1.0). The temperature value must be between 0.0 and 2.0.
- maxTemperature: The maximum temperature.
```js
await LanguageModel.params();
// {defaultTopK: 3, maxTopK: 8, defaultTemperature: 1, maxTemperature: 2}
```
Create a session
Once the Prompt API can run, you create a session with the create()
function.
You can prompt the model with either the prompt()
or the promptStreaming()
functions.
Customize your session
Each session can be customized with topK
and temperature
using an optional
options object. The default values for these parameters are returned from
LanguageModel.params()
.
```js
const params = await LanguageModel.params();

// Initializing a new session must either specify both `topK` and
// `temperature` or neither of them.
const slightlyHighTemperatureSession = await LanguageModel.create({
  // Cap the value at maxTemperature, as temperature can't exceed it.
  temperature: Math.min(params.defaultTemperature * 1.2, params.maxTemperature),
  topK: params.defaultTopK,
});
```
The create()
function's optional options object also takes a signal
field,
which lets you pass an AbortSignal
to destroy the session.
```js
const controller = new AbortController();
stopButton.onclick = () => controller.abort();

const session = await LanguageModel.create({
  signal: controller.signal,
});
```
Initial prompts
With initial prompts, you can provide the language model with context about previous interactions, for example, to allow the user to resume a stored session after a browser restart.
```js
const session = await LanguageModel.create({
  initialPrompts: [
    { role: 'system', content: 'You are a helpful and friendly assistant.' },
    { role: 'user', content: 'What is the capital of Italy?' },
    { role: 'assistant', content: 'The capital of Italy is Rome.' },
    { role: 'user', content: 'What language is spoken there?' },
    {
      role: 'assistant',
      content: 'The official language of Italy is Italian. [...]',
    },
  ],
});
```
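For example, to let a user resume a conversation after a browser restart, you could persist the message history yourself and replay it as initialPrompts. This is a sketch under the assumption that your app tracks the history as an array of { role, content } messages; the storage key is hypothetical:

```js
// Hypothetical key under which the app stores the conversation history.
const STORAGE_KEY = 'conversation-history';

// Save the history (an array of { role, content } messages) on each turn.
function saveHistory(history) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}

// After a restart, replay the stored history as initial prompts.
const storedHistory = localStorage.getItem(STORAGE_KEY);
const restoredSession = await LanguageModel.create({
  initialPrompts: storedHistory ? JSON.parse(storedHistory) : [],
});
```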
Constrain responses by providing a prefix
You can add a new "assistant"-role message, in addition to previous messages, to have the model elaborate on its earlier responses. For example:
```js
const followup = await session.prompt([
  {
    role: 'user',
    content: "I'm nervous about my presentation tomorrow",
  },
  {
    role: 'assistant',
    content: 'Presentations are tough!',
  },
]);
```
In some cases, instead of requesting a new response, you may want to
prefill part of the "assistant"
-role response message. This can be helpful to
guide the language model to use a specific response format. To do this, add
prefix: true
to the trailing "assistant"
-role message. For example:
```js
const characterSheet = await session.prompt([
  {
    role: 'user',
    content: 'Create a TOML character sheet for a gnome barbarian',
  },
  {
    role: 'assistant',
    content: '```toml\n',
    prefix: true,
  },
]);
```
Append messages without prompting
Inference may take some time, especially when prompting with multimodal inputs. It can be useful to send predetermined prompts in advance to populate the session, so the model can get a head start on processing.
While initialPrompts are useful at session creation, the append() method can be used in addition to the prompt() or promptStreaming() methods, to provide additional contextual prompts after the session is created.
For example:
```js
const session = await LanguageModel.create({
  initialPrompts: [
    {
      role: 'system',
      content:
        'You are a skilled analyst who correlates patterns across multiple images.',
    },
  ],
  expectedInputs: [{ type: 'image' }],
});

fileUpload.onchange = async () => {
  await session.append([
    {
      role: 'user',
      content: [
        {
          type: 'text',
          value: `Here's one image. Notes: ${fileNotesInput.value}`,
        },
        { type: 'image', value: fileUpload.files[0] },
      ],
    },
  ]);
};

analyzeButton.onclick = async (e) => {
  analysisResult.textContent = await session.prompt(userQuestionInput.value);
};
```
The promise returned by append()
fulfills once the prompt has been validated,
processed, and appended to the session. The promise is rejected if the prompt
cannot be appended.
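Since the promise can reject, a try...catch around append() lets you surface the failure; how to recover is up to your app:

```js
try {
  await session.append([
    { role: 'user', content: 'One more note to keep in mind.' },
  ]);
} catch (error) {
  // For example, the appended prompt may not fit the session's quota.
  console.error('Could not append the prompt:', error);
}
```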
Session limits
A given language model session has a maximum number of tokens it can process. You can check usage and progress toward that limit by using the following properties on the session object:
```js
console.log(`${session.inputUsage}/${session.inputQuota}`);
```
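For example, a small sketch that warns before the session runs out of space; the threshold is arbitrary:

```js
// Hypothetical threshold: warn when fewer than 500 tokens remain.
const remainingTokens = session.inputQuota - session.inputUsage;
if (remainingTokens < 500) {
  console.warn(`Only ${remainingTokens} tokens left in this session.`);
}
```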
Session persistence
Each session keeps track of the context of the conversation. Previous interactions are taken into account for future interactions until the session's context window is full.
```js
const session = await LanguageModel.create({
  initialPrompts: [
    {
      role: 'system',
      content:
        'You are a friendly, helpful assistant specialized in clothing choices.',
    },
  ],
});

const result1 = await session.prompt(
  'What should I wear today? It is sunny. I am unsure between a t-shirt and a polo.',
);
console.log(result1);

const result2 = await session.prompt(
  'That sounds great, but oh no, it is actually going to rain! New advice?',
);
console.log(result2);
```
Pass a JSON Schema
To make sure the model respects a requested JSON Schema, pass the schema as the value of a responseConstraint field in the options object of the prompt() or promptStreaming() methods.
Here's a very basic JSON Schema example that makes sure the model responds with
either true
or false
to classify if a given message, such as this Mastodon post,
is about pottery.
```js
const session = await LanguageModel.create();

const schema = {
  type: 'boolean',
};

const post = `Mugs and ramen bowls, both a bit smaller than intended- but that's
how it goes with reclaim. Glaze crawled the first time around, but pretty happy
with it after refiring.`;

const result = await session.prompt(`Is this post about pottery?\n\n${post}`, {
  responseConstraint: schema,
});

console.log(JSON.parse(result));
// true
```
By default, the implementation may include the schema or regular expression as
part of the message sent to the underlying language model. This uses some of the
input quota. You can measure how much of the input quota it
will use up by passing the responseConstraint
option to
session.measureInputUsage()
.
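For example, reusing the boolean schema and session from the earlier example, a sketch of measuring the cost of a constraint could look like this:

```js
// Measure how many tokens of the input quota the prompt plus the
// schema would consume, without actually prompting the model.
const usage = await session.measureInputUsage('Is this post about pottery?', {
  responseConstraint: schema,
});
console.log(`This prompt and schema use ${usage} of ${session.inputQuota} tokens.`);
```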
To avoid this behavior, use the omitResponseConstraintInput option. In such cases, we recommend that you include some guidance in the prompt. The schema defined in the following example is one possible shape for the requested { rating } object:

```js
// One possible JSON Schema for the { rating } object requested in the prompt.
const schema = { type: 'object', properties: { rating: { type: 'number' } } };

const result = await session.prompt(`
Summarize this feedback into a rating between 0-5, only outputting a JSON
object { rating }, with a single property whose value is a number:
The food was delicious, service was excellent, will recommend.
`, { responseConstraint: schema, omitResponseConstraintInput: true });
```
Clone a session
To preserve resources, you can clone an existing session with the clone()
function. The conversation context is reset, but the initial prompt remains
intact. The clone()
function takes an optional options object with a signal
field, which lets you pass an AbortSignal
to destroy the cloned session.
```js
const controller = new AbortController();
stopButton.onclick = () => controller.abort();

const clonedSession = await session.clone({
  signal: controller.signal,
});
```
Prompt the model
You can prompt the model with either the prompt()
or the promptStreaming()
functions.
Non-streamed output
If you expect a short result, you can use the prompt()
function that returns
the response once it's available.
```js
// Start by checking if it's possible to create a session based on the
// availability of the model, and the characteristics of the device.
const { defaultTemperature, maxTemperature, defaultTopK, maxTopK } =
  await LanguageModel.params();

const available = await LanguageModel.availability();

if (available !== 'unavailable') {
  const session = await LanguageModel.create();

  // Prompt the model and wait for the whole result to come back.
  const result = await session.prompt('Write me a poem!');
  console.log(result);
}
```
Streamed output
If you expect a longer response, you should use the promptStreaming()
function
which lets you show partial results as they come in from the model. The
promptStreaming()
function returns a ReadableStream
.
```js
const { defaultTemperature, maxTemperature, defaultTopK, maxTopK } =
  await LanguageModel.params();

const available = await LanguageModel.availability();

if (available !== 'unavailable') {
  const session = await LanguageModel.create();

  // Prompt the model and stream the result:
  const stream = session.promptStreaming('Write me an extra-long poem!');
  for await (const chunk of stream) {
    console.log(chunk);
  }
}
```
Stop running a prompt
Both prompt()
and promptStreaming()
accept an optional second parameter with
a signal
field, which lets you stop running prompts.
```js
const controller = new AbortController();
stopButton.onclick = () => controller.abort();

const result = await session.prompt('Write me a poem!', {
  signal: controller.signal,
});
```
Terminate a session
Call destroy()
to free resources if you no longer need a session. When a
session is destroyed, it can no longer be used, and any ongoing execution is
aborted. You may want to keep the session around if you intend to prompt the
model often since creating a session can take some time.
```js
await session.prompt(
  'You are a friendly, helpful assistant specialized in clothing choices.',
);

session.destroy();

// The promise is rejected with an error explaining that
// the session is destroyed.
await session.prompt(
  'What should I wear today? It is sunny, and I am unsure between a t-shirt and a polo.',
);
```
Multimodal capabilities
The Prompt API supports audio and image inputs from Chrome 138 Canary, for local experimentation. The API returns a text output.
With these capabilities, you could:
- Allow users to transcribe audio messages sent in a chat application.
- Describe an image uploaded to your website for use in a caption or alt text.
```js
const session = await LanguageModel.create({
  // { type: "text" } is not necessary to include explicitly, unless
  // you also want to include expected input languages for text.
  expectedInputs: [{ type: 'audio' }, { type: 'image' }],
});

const referenceImage = await (await fetch('/reference-image.jpeg')).blob();
const userDrawnImage = document.querySelector('canvas');

const response1 = await session.prompt([
  {
    role: 'user',
    content: [
      {
        type: 'text',
        value:
          'Give a helpful artistic critique of how well the second image matches the first:',
      },
      { type: 'image', value: referenceImage },
      { type: 'image', value: userDrawnImage },
    ],
  },
]);
console.log(response1);

const audioBlob = await captureMicrophoneInput({ seconds: 10 });

const response2 = await session.prompt([
  {
    role: 'user',
    content: [
      { type: 'text', value: 'My response to your critique:' },
      { type: 'audio', value: audioBlob },
    ],
  },
]);
```
Multimodal demos
See the Mediarecorder Audio Prompt demo for using the Prompt API with audio input and the Canvas Image Prompt demo for using the Prompt API with image input.
Performance strategy
The Prompt API for the web is still being developed. While we build this API, refer to our best practices on session management for optimal performance.
Feedback
Your feedback helps inform the future of this API and improvements to Gemini Nano. It may even result in dedicated task APIs (such as APIs for audio transcription or image description), so we can meet your needs and the needs of your users.
Participate and share feedback
Your input can directly impact how we build and implement future versions of this API and all built-in AI APIs.
- Join the early preview program.
- For feedback on Chrome's implementation, file a bug report or a feature request.
- Share your feedback on the API shape by commenting on an existing Issue or by opening a new one in the Prompt API GitHub repository.
- Participate in the standards effort by joining the Web Incubator Community Group.