Unlocking AI Capabilities in Chrome and Edge
Recent advances in local AI models have made them smaller and more efficient, narrowing the performance gap with their cloud-based counterparts. As a result, many AI tasks can now run directly in the browser, without an internet connection or high-end hardware.
Until now, running AI locally meant installing and maintaining complex third-party applications. Google Chrome and Microsoft Edge, however, have introduced a set of experimental APIs that let web developers tap AI functionality seamlessly within the browser.
The Power of Local AI APIs
With these new APIs, web pages can summarize documents, translate text, and generate text from prompts. These tasks are performed by models that are downloaded to and executed on the user's device. Chrome uses the Gemini Nano model, while Microsoft Edge employs Phi-4-mini. Although both browsers share a common codebase through the Chromium project, functionality and performance may differ between them.
Available AI APIs in Chrome and Edge
As of April 2026, the following AI APIs are available in Google Chrome:
- Translator API: This API enables users to translate text between different languages.
- Language Detector API: Users can identify the language of a given text input.
- Summarizer API: This API condenses larger texts into concise summaries, headlines, or bullet points.
All three APIs are currently available to Chrome users, while Edge users have access to the Translator and Summarizer APIs, with the Language Detector API expected to be supported in the future.
Additionally, several experimental APIs are available on an opt-in basis in both browsers:
- Writer API: Generates text based on a provided prompt.
- Rewriter API: Rewrites existing text according to new instructions.
- Prompt API: Allows natural language requests directly to the model.
- Proofreader API: Checks for spelling and grammatical errors and provides suggestions for corrections.
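Because support varies by browser and release channel, a page should feature-detect these APIs before using them. The sketch below assumes each API is exposed as a global class under the name listed above (as in Chrome's current implementation); in a browser without support, the name is simply absent.

```javascript
// Names of the built-in AI API globals described above.
const builtInAiApis = [
  'Translator',
  'LanguageDetector',
  'Summarizer',
  'Writer',
  'Rewriter',
  'Proofreader',
];

// Return the subset of APIs this browser actually exposes.
function detectBuiltInAiApis() {
  return builtInAiApis.filter((name) => name in globalThis);
}

console.log('Supported built-in AI APIs:', detectBuiltInAiApis());
```

Running this in a browser without any of the APIs (or outside a browser) simply yields an empty list, so the page can degrade gracefully.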
How to Utilize the Summarizer API
To illustrate the use of these APIs, we will focus on the Summarizer API, which is available in both Chrome and Edge. To begin, create a simple HTML page and serve it from a local web server. If Python is installed, this can be done by creating an index.html file, navigating to its directory in the terminal, and executing py -m http.server 8080 (the Windows Python launcher; use python3 -m http.server 8080 on macOS or Linux).
Here is an example of HTML code for the page:
<div style="display: flex;">
  <textarea style="width:50%; height:24em" id="input" placeholder="Type text to be summarized"></textarea><br>
  <textarea style="width:50%; height:24em" id="output" placeholder="Summarization results"></textarea><br>
</div>
<textarea style="width:100%; height:4em" id="context" placeholder="Additional context"></textarea>
<label for="type">Type of summarization:</label>
<select id="type" name="type">
  <option value="teaser">Teaser</option>
  <option value="tldr">tl;dr</option>
  <option value="headline">Headline</option>
  <option value="key-points">Key points</option>
</select>
<label for="length">Length:</label>
<select id="length" name="length">
  <option value="short">Short</option>
  <option value="medium">Medium</option>
  <option value="long">Long</option>
</select>
<button type="button" onclick="go();">Start</button>
<div style="background-color:beige" id="log"></div>
<script>...
Most of the critical operations are handled in the summarize() function. First, it verifies the availability of the Summarizer API, then creates a Summarizer object, and finally streams the output back to the user.
Step 1: Check API Availability
The function first checks that the Summarizer API exists, then queries its availability status, which can be 'unavailable', 'downloadable', 'downloading', or 'available'; any status other than 'unavailable' means a summarizer can be created.
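This check can be sketched as follows. The 'unsupported' sentinel for browsers that lack the API entirely is our own addition, not part of the API:

```javascript
// Check whether the Summarizer API exists and whether its model is ready.
// Summarizer.availability() resolves to 'unavailable', 'downloadable',
// 'downloading', or 'available'.
async function checkSummarizer() {
  if (!('Summarizer' in globalThis)) {
    return 'unsupported'; // browser does not expose the API at all
  }
  return Summarizer.availability();
}

checkSummarizer().then((status) => console.log('Summarizer status:', status));
```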
Step 2: Create the Summarizer Object
This step involves defining parameters such as the context for summarization, the type of summary required, and the desired length of the output.
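A minimal sketch of this step, using the option names the Summarizer API currently accepts; the 'plain-text' format and the logged progress message are illustrative choices, not values from the article. If the model is still only 'downloadable', create() triggers the download, and the monitor callback reports progress so the page can show feedback:

```javascript
// Create a Summarizer configured with the values from the form above.
async function createSummarizer(sharedContext, type, length) {
  const summarizer = await Summarizer.create({
    sharedContext,        // background context shared across summarize() calls
    type,                 // 'tldr', 'key-points', 'teaser', or 'headline'
    length,               // 'short', 'medium', or 'long'
    format: 'plain-text', // or 'markdown'
    monitor(m) {
      // Fires while the model is being downloaded to the device.
      m.addEventListener('downloadprogress', (e) => {
        console.log(`Model download: ${Math.round(e.loaded * 100)}%`);
      });
    },
  });
  return summarizer;
}
```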
Step 3: Stream Output
The output is streamed token by token, providing real-time feedback to the user as the summarization process unfolds.
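The streaming step might look like the sketch below. It assumes summarizeStreaming() returns a stream of incremental text chunks, as in Chrome's current implementation, and that appending each chunk to the output textarea is therefore correct; the per-call context string and the outputEl element are hypothetical:

```javascript
// Stream the summary into the output textarea as tokens arrive,
// rather than waiting for the complete result.
async function streamSummary(summarizer, text, outputEl) {
  outputEl.value = '';
  const stream = summarizer.summarizeStreaming(text, {
    context: 'Audience: general readers', // hypothetical per-call context
  });
  for await (const chunk of stream) {
    outputEl.value += chunk; // real-time feedback for the user
  }
}
```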
Considerations When Using Local AI APIs
Users should be aware that the initial download of the models can take time, and providing progress feedback is essential. Once downloaded, there is currently no programmatic interface for managing these models, though Chrome does offer a local URL for inspecting model statistics.
There may also be a delay before the first token appears during inference, so the interface should indicate that processing has begun.
As the landscape of local AI APIs evolves, the potential for more standardized approaches may emerge, but users can already leverage these capabilities effectively in their browsers.
Source: InfoWorld News