Published: May 20, 2025
The built-in Prompt API is available for Chrome Extensions on Windows, macOS, and Linux from Chrome 138 stable. The API will soon be available in an origin trial in Chrome.
The API isn't supported by other browsers, ChromeOS, or mobile operating systems (such as Android or iOS). Even when the browser supports this API, it may be unavailable due to unmet hardware requirements.
To meet users' needs, whatever platform or hardware they use, you can set up a fallback to the cloud with Firebase AI Logic.
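If you want to check at runtime whether the on-device model can run before choosing a code path, here's a minimal sketch. It assumes the Prompt API's LanguageModel global and its availability() method are exposed in your context:
// Rough feature detection for the built-in Prompt API.
// Assumption: the `LanguageModel` global and `availability()` are exposed here.
async function canUseBuiltInAI() {
  if (!('LanguageModel' in self)) {
    return false;
  }
  const availability = await LanguageModel.availability();
  // 'available' means the model is ready; 'downloadable' and 'downloading'
  // mean it can become ready; 'unavailable' means you need a fallback.
  return availability !== 'unavailable';
}
In practice, Firebase AI Logic makes this decision for you when you set mode to prefer_on_device, as shown later in this post.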
Build a hybrid AI experience
Built-in AI comes with a number of benefits, most notably:
- Local processing of sensitive data: If you work with sensitive data, you can offer AI features to users with end-to-end encryption.
- Offline AI usage: Your users can access AI features, even when they're offline or have lapsed connectivity.
While these benefits don't apply to cloud applications, you can ensure a seamless experience for those who cannot access built-in AI.
Get started with Firebase
First, create a Firebase project and register your web app. Continue your setup of the Firebase JavaScript SDK with the Firebase documentation.
Install the SDK
This workflow uses npm and requires module bundlers or JavaScript framework tooling. Firebase AI Logic is optimized to work with module bundlers to eliminate unused code (tree-shaking) and decrease SDK size.
npm install firebase@eap-ai-hybridinference
Use Firebase AI Logic
Once Firebase is installed, you initialize the SDK to start using Firebase services.
Configure and initialize your Firebase App
A Firebase project can have multiple Firebase Apps. A Firebase App is a container-like object that stores common configuration and shares authentication across Firebase services.
Your Firebase App serves as the cloud portion of your hybrid AI feature.
import { initializeApp } from 'firebase/app';
import { getAI, getGenerativeModel } from 'firebase/vertexai';
// TODO: Replace the following with your app's Firebase project configuration.
const firebaseConfig = {
apiKey: '',
authDomain: '',
projectId: '',
storageBucket: '',
messagingSenderId: '',
appId: '',
};
// Initialize `FirebaseApp`.
const firebaseApp = initializeApp(firebaseConfig);
Prompt the model
Once initialized, you can prompt the model with text or multimodal input.
Text prompts
You can use plain text for your instructions to the model. For example, you could ask the model to tell you a joke.
To ensure that built-in AI is used when available, set mode to prefer_on_device in the getGenerativeModel function.
// Initialize the Google AI service.
const googleAI = getAI(firebaseApp);
// Create a `GenerativeModel` instance with a model that supports your use case.
const model = getGenerativeModel(googleAI, { mode: 'prefer_on_device' });
const prompt = 'Tell me a joke';
const result = await model.generateContentStream(prompt);
for await (const chunk of result.stream) {
const chunkText = chunk.text();
console.log(chunkText);
}
console.log('Complete response', await result.response);
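Streaming is useful for long responses, but if you'd rather wait for the full answer, the SDK also offers a non-streaming call. A minimal sketch, assuming the same model instance created above:
// Non-streaming variant: resolves once the full response is available.
const fullResult = await model.generateContent('Tell me a joke');
console.log(fullResult.response.text());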
Multimodal prompts
You can also prompt with image or audio, in addition to text. You could tell the model to describe an image's contents or transcribe an audio file.
Images need to be passed as a base64-encoded string in a Firebase FileDataPart object, which you can create with the helper function fileToGenerativePart().
// Converts a File object to a `FileDataPart` object.
// https://firebase.google.com/docs/reference/js/vertexai.filedatapart
async function fileToGenerativePart(file) {
const base64EncodedDataPromise = new Promise((resolve) => {
const reader = new FileReader();
reader.onload = () => resolve(reader.result.split(',')[1]);
reader.readAsDataURL(file);
});
return {
inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
};
}
const fileInputEl = document.querySelector('input[type=file]');
fileInputEl.addEventListener('change', async () => {
const prompt = 'Describe the contents of this image.';
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);
// To generate text output, call generateContent with the text and image
const result = await model.generateContentStream([prompt, imagePart]);
for await (const chunk of result.stream) {
const chunkText = chunk.text();
console.log(chunkText);
}
console.log('Complete response: ', await result.response);
});
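The same helper works for audio, because it forwards the file's MIME type (for example, audio/mpeg). Here's a hypothetical sketch that transcribes a selected audio file from a separate, audio-only file input:
// Assumption: a separate <input type="file" accept="audio/*"> exists on the page.
const audioInputEl = document.querySelector('input[type=file][accept="audio/*"]');
audioInputEl.addEventListener('change', async () => {
  const audioPart = await fileToGenerativePart(audioInputEl.files[0]);
  const result = await model.generateContent([
    'Transcribe this audio file.',
    audioPart,
  ]);
  console.log(result.response.text());
});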
Demo
Visit the Firebase AI Logic demo on different devices and browsers. You can see how the model response comes from either the built-in AI model or the cloud.
When on supported hardware in Chrome, the demo uses the Prompt API and Gemini Nano. There are only 3 requests made for the main document, the JavaScript file, and the CSS file.
When in another browser or an operating system without built-in AI support, there is an additional request made to the Firebase endpoint, https://firebasevertexai.googleapis.com.
Participate and share feedback
Firebase AI Logic can be a great option for integrating AI capabilities into your web apps. By providing a fallback to the cloud when the Prompt API is unavailable, the SDK ensures wider accessibility and reliability of AI features.
Remember that cloud applications create new expectations for privacy and functionality, so it's important to inform your users of where their data is being processed.
- For feedback on Chrome's implementation, file a bug report or a feature request.
- For feedback on Firebase AI Logic, file a bug report.