Unified AI Workflow: Gemini API, CLI & GitHub Copilot Integration
Architecting the Unified AI Workflow: A Technical Deep Dive on Parallel Integration of Gemini API, Gemini CLI, and GitHub Copilot

A Unified Framework: Defining the Roles of the AI Developer Triad
Deconstructing the “Parallel” Workflow: API vs. CLI vs. IDE Assistant
The objective of leveraging Gemini CLI, GitHub Copilot, and Gemini API keys “parallely” in a single project requires a precise deconstruction of their roles. These tools are frequently viewed through a competitive “versus” lens, but this perspective is counterproductive. An effective, modern AI-driven workflow depends on a synthesis of these components, understanding that they are not competitors but collaborators operating at three distinct layers of abstraction.
The developer’s goal is to orchestrate three separate but interconnected AI contexts:
- The In-Code Context: An AI partner embedded in the IDE, assisting with line-by-line code implementation (GitHub Copilot).
- The Meta-Project Context: An AI agent in the terminal, capable of analyzing, automating, and operating on the entire project or file system (Gemini CLI).
- The In-Application Context: A programmatic AI service, consumed by the application itself to provide features to end-users (Gemini API).
This report will detail the architecture and security model required to unify these three layers into a cohesive and powerful development system.
The “In-Application” Layer: The Gemini API as a Project Feature
The Gemini API is the programmatic access layer. This is not a development tool in the traditional sense, but rather a service that is integrated into the application’s core logic. Its function is to provide an application with access to Google’s state-of-the-art models, such as Gemini 1.5 Pro, to power features for end-users.
- Function: Enables programmatic calls to generate content, analyze images, or perform complex reasoning tasks within the application at runtime.
- Authentication: This layer is authenticated exclusively via API keys. These keys are generated from Google AI Studio or a billing-enabled Google Cloud Vertex AI project. This key is a production secret and must be treated as such.
- Practical Example: A developer is building a new video-sharing platform. The Gemini API is the service their backend calls to automatically generate a summary and a list of key topics for every user-uploaded video.
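To make this concrete, the following is a minimal sketch of such a runtime call using the Python SDK (assuming the google-genai package; the prompt and model choice are illustrative):

```python
import os

from google import genai

# The key comes from the environment, never from source code (see Section 4).
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-1.5-flash",
    contents="Summarize this video transcript and list its key topics: ...",
)
print(response.text)
```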
The “In-Code” Layer: GitHub Copilot as the Implementation Partner
GitHub Copilot is the implementation layer. It functions as an AI pair programmer directly integrated into the developer’s Integrated Development Environment (IDE), such as Visual Studio Code.
- Function: Provides inline, context-aware code completions (“ghost text”), refactoring assistance, code explanation, and debugging help. It excels at accelerating the creation of repetitive code, generating unit tests, and correcting syntax.
- Authentication: Requires a subscription to GitHub Copilot. This is typically tied to a developer’s GitHub account.
- Practical Example: In the video-sharing platform example, GitHub Copilot is the tool that writes the Python or Node.js code (e.g., the async function, the try…except block, and the HTTP request logic) that makes the call to the Gemini API.
The “Meta-Project” Layer: The Gemini CLI as the Project Automator
The Gemini CLI is the automation and analysis layer. It is an open-source AI agent that lives in the developer’s terminal. It is designed to perform tasks about the project, operating on the entire codebase, filesystem, and shell environment.
- Function: It excels at high-context “code understanding and file manipulation”. It can read the entire repository to explain its architecture, generate documentation, interact with shell commands, and be scripted for automation via its “Headless Mode”.
- Authentication: This layer features a flexible, dual-authentication model:
- Free (Interactive Use): A developer can log in with a personal Google account to receive a free Gemini Code Assist license. This provides access to powerful models like Gemini 1.5 Pro, its massive 1 million token context window, and very high free-tier rate limits (e.g., 1,000 requests/day). This is ideal for local, interactive work.
- Billed (Automated Use): For non-interactive scripting, such as in a CI/CD pipeline or Git hook, the CLI can be authenticated using a Google AI Studio or Vertex AI API key for usage-based billing.
- Practical Example: After GitHub Copilot helps write the new video-processing endpoint, the developer runs `gemini -p "Generate markdown documentation for this new /process-video endpoint based on the code in src/api/."` in the terminal. The Gemini CLI reads the files and produces the new documentation.
Table 1: The AI Developer Triad: Role & Scope Matrix
The following table provides a clear delineation of these three components, which will serve as the foundational mental model for the workflows in this report.
| Metric | Gemini API (with Keys) | GitHub Copilot | Gemini CLI |
|---|---|---|---|
| Primary Interface | SDKs (Python, Node.js) & REST | IDE Extension (e.g., VS Code) | Command Line Interface (Terminal) |
| Core Function | Programmatic AI Access | Inline AI Assistant (Pair Programmer) | Interactive AI Agent |
| Scope of Operation | Application Runtime: Powers features inside your deployed app. | In-File Context: Operates on the code file(s) you have open. | Project/System Context: Operates on the entire codebase, file system, and shell. |
| Typical Use Case | summarize_text(user_input) | “Write this function for me” | gemini > "Explain this repo" |
| Authentication | Google AI Studio / Vertex AI API Key | GitHub Copilot Subscription | Google Account (free) or API key (billed) |
| Key Strength | Production-grade model access | Inline code-writing speed | High-context codebase analysis & automation |
This three-layer model (Runtime, In-File, Project) is the key to unlocking the “parallel” workflow. However, it immediately presents a significant architectural consideration: the developer is, by default, straddling two major, competing AI ecosystems. This workflow requires a Microsoft subscription for the In-Code layer (Copilot) and a Google Cloud project for the In-Application and Meta-Project layers (API/CLI). This multi-cloud dependency creates immediate friction in billing (two separate bills) and security—a challenge this report’s architecture is designed to solve.
Workflow 1: Architecting the “In-Application” Layer (Gemini API & Copilot)
Practical Guide: Using GitHub Copilot to Write Gemini API-Driven Code
The most direct and powerful synergy in this framework is using the “In-Code” assistant (GitHub Copilot) to build the “In-Application” feature (powered by the Gemini API). This workflow involves a developer in their IDE (e.g., VS Code) with the Copilot extension active, writing code for an application that calls the Gemini API.
This scenario directly clarifies a common point of confusion seen in community forums, where a user has a Microsoft-based subscription but is separately billed for Google Cloud API usage. This workflow demonstrates precisely why: the Copilot subscription pays for the assistant, while the API key pays for the application’s runtime calls. They are two distinct, complementary services.
Example: Building a Python Application
- Step 1 (Copilot for Security): The developer first establishes a security-first posture. In `main.py`, they type a natural language comment: `# create a function to read the 'GEMINI_API_KEY' from environment variables`. GitHub Copilot will immediately suggest the correct Python code, importing the `os` module and using `os.environ.get('GEMINI_API_KEY')`.
- Step 2 (Copilot for SDK Implementation): Next, the developer prompts for the core logic: `# create a function 'generate_summary' that takes text input, configures the google.genai client, and calls the gemini-1.5-flash model to summarize`. Copilot, having been trained on public documentation and repositories like the Gemini API Cookbook and API examples, will generate the full boilerplate: importing the SDK (`from google import genai`), configuring the client (`genai.Client(api_key=...)`), and forming the `response = client.models.generate_content(...)` call. A consolidated sketch of the resulting code appears after this list.
- Step 3 (Copilot Chat for Refinement): The developer highlights the new, Copilot-generated function and presses `Ctrl+I` to open the inline chat.
  - Prompt: `@workspace /explain how this function handles API errors`. Copilot provides an analysis of its own code.
  - Prompt: `@workspace /fix add robust error handling for API timeouts and 429 rate limit exceptions`. Copilot refactors the code, wrapping it in the necessary `try...except` blocks.
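The end result of Steps 1 through 3 might resemble the following sketch (assuming the google-genai package; the helper name `get_api_key` and the exact exception handling are illustrative, not Copilot’s verbatim output):

```python
import os

from google import genai
from google.genai import errors


def get_api_key() -> str:
    # Read the key from the environment; never hardcode it (see Section 4).
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set")
    return key


def generate_summary(text: str) -> str:
    # Configure the client and call the model, with basic error handling.
    client = genai.Client(api_key=get_api_key())
    try:
        response = client.models.generate_content(
            model="gemini-1.5-flash",
            contents=f"Summarize the following text:\n\n{text}",
        )
    except errors.APIError as exc:  # covers 429 rate limits and server errors
        raise RuntimeError(f"Gemini API call failed: {exc}") from exc
    return response.text
```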
Example: Building a Node.js Service
This process is language-agnostic. To build a Node.js Express server, the developer prompts Copilot:
```js
// create an async express post endpoint '/summarize' that reads 'text' from the body
// inside the endpoint, initialize the GoogleGenerativeAI client from '@google/generative-ai' and call generateContent with the text
```
Copilot will generate the asynchronous endpoint, correctly handle the JSON body, and implement the Node.js SDK, demonstrating its ability to manage async/await syntax and API-level logic.
This workflow reveals a fundamental shift in the developer’s role. It creates an “AI Inception loop”:

- Developer -> (Prompt 1) -> GitHub Copilot
- GitHub Copilot -> (Code) -> Developer
- Developer -> (Runs Code)
- Code -> (API Key + Prompt 2) -> Gemini API
- Gemini API -> (Response) -> Code
- Code -> (Output) -> End User
The developer’s skill is elevated from writing boilerplate to engineering prompts at two distinct levels: prompts for Copilot to write high-quality code, and prompts for the application to send to the Gemini API to get high-quality responses.
Workflow 2: Mastering the “Meta-Project” Layer (Gemini CLI)
The Parallel “Second Monitor” Workflow
This section details how to use the Gemini CLI at the same time as the workflow from Section 2. The developer has VS Code with Copilot active on one monitor, writing the main.py file. On a parallel “second monitor” (or a split terminal), they have an interactive Gemini CLI session running.
This parallel session leverages the massive 1 million token context window of the model provided with the free Gemini Code Assist license. The CLI acts as the “project-level” assistant, while Copilot remains the “file-level” assistant.
Use Case: On-Demand Codebase Analysis and Explanation
While Copilot is helping write a new function in main.py, the developer needs to understand how it will interact with a complex, existing module. Instead of breaking flow in the IDE, they turn to the terminal.
- Terminal Prompt: gemini > Explain the architecture of this codebase, focusing on the ‘@server.js’ module and how it handles routing.
This prompt, based on published examples, uses the CLI’s file-reading capabilities to perform high-context “code understanding”. It provides a high-level architectural overview, a task that is difficult for a file-focused assistant. This demonstrates the CLI’s strength in “deep research”.
Use Case: Generating Documentation and Automating Git
The developer has just used Copilot to finish the new feature. Now, they must document it and commit the work.
- Terminal Prompt: gemini > Generate a comprehensive README.md file for this project, including setup instructions for the new ‘@server.js’ endpoint.
As demonstrated in the “Hands-on with Gemini CLI” codelab, the CLI will generate the file. More importantly, it may then act as an agent:
- CLI Response: I have created ‘README.md’. Shall I run ‘git add README.md’ and ‘git commit -m “docs: add README”’?
This demonstrates the CLI’s agentic nature, leveraging its new interactive shell support (PTY) to run commands like git or vim within its own context, rather than just printing them.
Use Case: Scripting and “Headless Mode” Automation
The most advanced “parallel” use involves running the CLI non-interactively in scripts, using its “Headless Mode”.
Example 1: A Git pre-commit Hook
A developer can automate their commit messages. By creating a script in .git/hooks/pre-commit, they can pipe the staged changes to the Gemini CLI.
```sh
#!/bin/sh
# .git/hooks/pre-commit
# Get staged changes and pass them to the Gemini CLI
git diff --staged | gemini -p "Based on the following diff, generate a concise conventional commit message and output only the message." > .git/COMMIT_EDITMSG
```
When the developer runs git commit, this script runs automatically. The CLI analyzes the diff and generates the commit message, automating a core development task.
Example 2: Project Management Agent
A real-world workflow demonstrates the CLI’s full agentic power. A developer used the Gemini CLI to create and manage their dotfiles repository. They issued high-level, multi-step commands:
- gemini > create a .gitignore to only track GEMINI.md, settings.json, and commands/
- gemini > plan and execute pushing this local repo as a new private GitHub repo named ‘Dazbo-Gemini-Config’
The Gemini CLI was able to complete these tasks by composing its built-in tools (shell, file system) and its extension tools (like the GitHub MCP server).
This “local-to-remote” automation workflow is a powerful concept. A developer can prototype a complex automation script (like a code-quality analysis) locally in their terminal using the CLI’s headless mode and structured JSON output (`--output-format json`). Once perfected, that exact same script can be “deployed” to their CI/CD pipeline, which is fundamentally just a remote, automated script runner, by using a wrapper like the run-gemini-cli GitHub Action.
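As a sketch of this idea, a local prototype might drive the CLI’s headless mode from Python and consume its structured output (the prompt is illustrative; nothing is assumed about the JSON schema beyond it being valid JSON):

```python
import json
import subprocess

# Run the Gemini CLI non-interactively ("headless") and request JSON output.
# Assumes `gemini` is installed and authenticated in this environment.
result = subprocess.run(
    [
        "gemini",
        "-p", "Analyze the code quality in src/ and list the top three issues.",
        "--output-format", "json",
    ],
    capture_output=True,
    text=True,
    check=True,
)

# The same script can later run unchanged inside a CI job.
report = json.loads(result.stdout)
print(json.dumps(report, indent=2))
```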
The Critical Foundation: A Security Architecture for Parallel AI Tools
The Primary Risk: Copilot’s Documented Access to .env Secrets
This entire parallel architecture rests on a critical security foundation, and failure to address it exposes the project to catastrophic risk. The primary point of failure is the intersection of the “In-Code” tool (Copilot) and the “In-Application” credential (Gemini API Key).
Direct reports from the GitHub developer community confirm this risk: "Copilot can read my local project's .env files". One user noted, "I noticed that it also completes with values that are inside my development environment .env files… Can these kind of values eventually be visible to another developer in the future?". This is the core problem: the assistant (Copilot) may read, process, and potentially leak the secret credentials for the API (Gemini). This is, as one developer stated, "a big no-no".
Google & GitHub Mandates: Unbreakable Rules for API Key Handling
The official documentation from both Google and GitHub is unanimous and absolute on this point, leaving no room for interpretation.
- Google’s Mandate: The “Critical security rules” for the Gemini API are explicit: “Never commit API keys to source control.” and “Never expose API keys on the client-side.”.
- GitHub’s Mandate: GitHub’s security best practices are equally firm: “Don’t push unencrypted authentication credentials… even if the repository is private.” and “Never hardcode authentication credentials… into your code.”.
- Community Consensus: The Stack Overflow community reinforces this as the “only way to protect yourself”. The accepted best practice is to use environment variables in production and a .env file (that is never checked into git) for development.
The Secure Playbook: A Multi-Layered Credential Strategy
The following multi-layered playbook is the non-negotiable prerequisite for safely implementing the parallel workflow.
Step 1: .gitignore (The First Line of Defense)
The very first line in the project’s .gitignore file must be `.env`. This is the simplest and most crucial step. A `.env.example` file, which contains the names of the required variables but not their secret values, should be committed to the repository to guide other developers.
Step 2: Local Development (Mitigating the Copilot Risk)
The application code (written with Copilot’s help) must be designed to read the API key from an environment variable (e.g., `os.environ.get('GEMINI_API_KEY')`). However, this does not solve the problem of Copilot reading the .env file itself.
- Solution A (Recommended): Bypass the .env file for local development. Instead of using a dotenv library that reads a file from the project directory, set the API key as an environment variable in the shell’s profile (e.g., ~/.zshrc or ~/.bash_profile).
export GEMINI_API_KEY="your-key-here"
After restarting the terminal and IDE, the key will exist in the shell’s environment, where the application can read it. Critically, it does not exist in any file within the project directory, making it invisible to Copilot’s file-based context.
- Solution B (IDE Configuration): For teams that must use .env files, some Copilot plans (e.g., Business) and IDEs allow for explicit blacklisting of files or folders from the AI’s context. This setting must be rigorously enforced.
Step 3: CI/CD & Production (The Secure Standard)
The .env file never goes to production. The industry standard is to use encrypted secrets management. For this workflow, GitHub Actions Secrets are the standard solution.
- How-To: The GEMINI_API_KEY is stored in the repository’s Settings > Secrets and variables > Actions.
- In-Workflow: The key is securely injected into the CI/CD job’s environment at runtime.
```yaml
jobs:
  run-my-app:
    runs-on: ubuntu-latest  # assumed; the original snippet omitted the runner
    steps:
      - name: Run Python script
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
        run: python main.py
```
This security model is doubly important because the Gemini CLI itself has two auth modes. While the free Google Account login is fine for local interactive use, any automation (like the Git hook in Section 3 or a CI/CD job in Section 5) also requires a non-interactive API key. This means the developer must secure this credential for both their “In-Application” runtime and their “Meta-Project” automation.
Table 2: API Key Security: Mandates and Mitigations
| Risk Scenario | Anti-Pattern (The “Don’t”) | Secure Pattern (The “Do”) | Supporting Tool/Command |
|---|---|---|---|
| Storing Key for Local Dev | Hardcoding key in main.py. | Load from environment variable. | os.environ.get() |
| Sharing Code with Team | Committing .env file, even to a private repo. | Commit .env.example. Add .env to .gitignore. | .gitignore |
| Copilot Reading Local Key | Storing the key in .env and assuming Copilot ignores it. | (Best) Store key as export in your shell profile (~/.zshrc). | export GEMINI_API_KEY=… |
| Using Key in CI/CD | Pasting the key in plain text into the .yml workflow file. | Use GitHub Actions Encrypted Secrets. | ${{ secrets.GEMINI_API_KEY }} |
| Enterprise-Grade CI/CD | Using long-lived API keys (Secrets). | (Best) Use Workload Identity Federation (WIF) for credential-less auth. | Google Cloud WIF Config |
While GitHub Secrets is the standard, the true expert-level solution, as highlighted by Google’s own documentation for its GitHub Action, is to avoid long-lived keys entirely.
Google Cloud’s Workload Identity Federation (WIF) provides “secure, credential-less authentication” by exchanging a short-lived GitHub-issued token for a Google Cloud access token, eliminating the risk of a static API key leak.
Advanced Orchestration: Automating the Parallel Workflow
Beyond the Local Machine: Integrating Gemini CLI into GitHub Actions
The final step in this architecture is to take the “Meta-Project” automation, prototyped locally with the Gemini CLI, and deploy it as a remote, automated CI/CD process. This is accomplished using the google-github-actions/run-gemini-cli action.
This action is designed to install and run the Gemini CLI inside a GitHub workflow, acting as an “autonomous agent for critical routine coding tasks”. This workflow is authenticated using the secure methods from Section 4 (either GitHub Actions Secrets or, preferably, Workload Identity Federation).
Use Case: Automated Pull Request Review with run-gemini-cli
This use case represents the ultimate “parallel” synthesis, where all three components work in concert in a fully automated, “hands-off” workflow.
- The Trigger: A developer, using GitHub Copilot to assist them, writes a new feature and opens a Pull Request.
- The Event: The on: pull_request event in GitHub Actions triggers a new workflow.
- The Action: The workflow job uses the google-github-actions/run-gemini-cli action. This action is authenticated using a Gemini API Key (or WIF) stored in secrets.
- The Agent: The action executes the Gemini CLI with a high-level prompt, such as: gemini -p “Review this pull request. Check for bugs, adherence to our coding conventions, and suggest improvements.”.
- The Result: The Gemini CLI analyzes the PR’s code diff and posts a formal review comment directly on the Pull Request.
In this flow, the developer receives feedback from one AI (Gemini CLI) on code that was written with the help of another AI (Copilot).
The “Connective Tissue”: Using GEMINI.md for Context
A crucial question arises from the previous use case: How does the Gemini CLI, running in a stateless CI/CD job, know the project’s specific coding conventions?
The answer is the GEMINI.md file, the “connective tissue” of the “Meta-Project” layer. The run-gemini-cli action documentation and real-world examples confirm that a GEMINI.md file in the repository’s root can provide “project-specific instructions and context (like coding conventions or architectural patterns)”.
This file acts as a persistent, high-level system prompt for the Gemini CLI. By encoding the project’s “soul”—its preferred libraries, Python guidance, and architectural rules—into this file, the automated PR review from Section 5.2 becomes project-specific, context-aware, and immensely valuable.
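As an illustration, a GEMINI.md for the video-sharing project might look like the following (every rule here is invented for the example):

```markdown
# Project Context for the Gemini CLI

## Coding conventions
- Python 3.12, formatted with black; type hints on all public functions.
- All Gemini API calls go through src/services/gemini_client.py.

## Architectural rules
- Route handlers in src/api/ stay thin; business logic lives in src/services/.
- Secrets are read from the environment only, never from files in the repo.

## Review guidance
- Flag hardcoded credentials, missing error handling on external calls,
  and endpoints without tests.
```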
The Next Frontier: Agent-to-Agent Collaboration via MCP
The parallel workflows described so far are a snapshot of today’s capabilities. The future, however, points toward deeper integration. This is enabled by the Model Context Protocol (MCP), an open protocol described as a “USB-C port for AI applications”. MCP is designed to standardize how AI agents and tools communicate.
A conceptual workflow based on this protocol involves:
- One AI tool, such as the GitHub Copilot CLI, is configured to run as an “MCP server.”
- The Gemini CLI is then configured in its settings.json to recognize the Copilot CLI as an available tool (e.g., @copilot).
- A developer could then issue a complex, delegated command in a single prompt: gemini > @copilot please refactor my last commit to follow the style guide and then push the changes.
- The Gemini CLI (the host agent) would delegate the task to the Copilot CLI (the specialist tool), enabling a “pseudo Agent-to-Agent (A2A) interaction”. This moves beyond parallel use to a truly collaborative AI ecosystem.
Ecosystem Analysis: Clarifying the Gemini-in-Copilot Partnership
Distinguishing “Using Your Gemini API Key” from “Copilot Using a Gemini Model”
A recent partnership between Google and GitHub adds a significant, and potentially confusing, layer to this ecosystem. GitHub is integrating Gemini models (starting with Gemini 1.5 Pro) directly into GitHub Copilot. Developers will soon have a “model picker” within Copilot Chat to choose the model that best suits their needs.
It is critical to differentiate this new development from the primary workflow described in this report.
- Workflow (Section 2): Developer -> Copilot (GPT model) -> writes code -> that calls -> Developer’s Google API Key (Gemini model). Billing: the developer pays Microsoft for the Copilot subscription plus Google for the API consumption.
- New Partnership: Developer -> Copilot (Gemini 1.5 Pro model) -> writes code. Billing: the developer pays Microsoft for the Copilot subscription (which now includes access to the Gemini model for development assistance).
Why You Still Need Your Own API Key
This new partnership might lead a developer to ask, “If Copilot will have Gemini, why do I need my own API key?”
The answer lies in the fundamental distinction between the “In-Code” layer and the “In-Application” layer. The Gemini model integrated into Copilot is for development-time assistance—it helps the developer write, refactor, and understand code. It will not be used to power the production application.
To build features for end-users—to power the video summarizer, the image analyzer, or the customer-facing chatbot—the application will always need to make its own programmatic calls. For this, the developer will still need their own Gemini API key. This report’s “In-Application” architecture remains an essential and distinct component of the development stack.
Synthesis: Strategic Recommendations for a Unified AI Workflow
A Final “Playbook” for a New Project
The following checklist synthesizes the analyses from all previous sections into an actionable, phase-based plan for a new project.
Phase 1: Setup & Security (Section 4)
- Create a Google Cloud Project, enable the Gemini API, and generate an API key.
- Create a new GitHub Repository. Add the API key to Settings > Secrets and variables > Actions as GEMINI_API_KEY. (For enterprise projects, configure Workload Identity Federation).
- Clone the repository. Immediately create a .gitignore file and add .env as the first line.
- For local development, add `export GEMINI_API_KEY="your-key-here"` to your shell profile (e.g., ~/.zshrc) to mitigate the Copilot security risk.
- Install the Gemini CLI (e.g., npm install -g @google/gemini-cli) and authenticate with your personal Google account when prompted on first launch.
- Create a GEMINI.md file in the repo root. Define your project’s coding standards, libraries, and architectural rules to provide context for the CLI.
Phase 2: Implementation (Section 2 & 3)
- IDE: Open the project in VS Code. Use GitHub Copilot to generate boilerplate, write functions, and create tests. Use Copilot Chat (Ctrl+I) to refactor and debug. Ensure all code reads the API key from the environment.
- Terminal (Parallel): In a separate, parallel terminal window, use the interactive Gemini CLI session to ask high-level architectural questions (gemini > explain…) or generate project files (gemini > create README.md).
Phase 3: Automation (Section 5)
- Create a Git pre-commit hook that uses the Gemini CLI in “headless” mode to read the git diff and auto-generate a conventional commit message.
- Add the google-github-actions/run-gemini-cli workflow to your repository. Configure it to run on: pull_request, instructing it to review the PR based on the context provided in your GEMINI.md file.
Table 3: Parallel SDLC Integration Model
This table maps the primary AI tools to the phases of the Software Development Lifecycle (SDLC), providing a clear guide on when to use which tool.
| SDLC Phase | Primary AI Tool | Secondary AI Tool | Example Parallel Task |
|---|---|---|---|
| Prototyping | Gemini CLI | GitHub Copilot | Use the CLI to generate boilerplate for a new project (gemini > Write me a Discord bot…). Then use Copilot to flesh out the generated functions. |
| Implementation | GitHub Copilot | Gemini API | Use Copilot to write a Python function that calls the Gemini API’s generateContent method (as in Section 2). |
| Debugging | GitHub Copilot | Gemini CLI | Use Copilot Chat (@workspace /fix this bug) to fix an error. Use Gemini CLI’s interactive shell (gemini > vim…) to edit configs and rerun tests. |
| Documentation | Gemini CLI | GitHub Copilot | Use the CLI to generate a README.md for the entire project. Use Copilot inline to write docstrings for a specific complex function. |
| CI/CD (Review) | Gemini CLI (via Action) | Gemini API Key | An automated GitHub Action reviews a PR (written with Copilot’s help) for style violations based on GEMINI.md. |
Final Recommendation: Balancing Productivity, Complexity, and Security
This integrated, “parallel” system represents a state-of-the-art workflow that offers substantial productivity gains. It accelerates boilerplate generation (Copilot), high-level analysis (Gemini CLI), and project automation (CLI + Actions).
However, this power introduces new, non-trivial axes of complexity. The developer must now act as a “systems integrator” for their own development tools, managing multiple subscriptions (Microsoft, Google), distinct contexts (IDE, terminal, cloud), and separate authentication models.
The API key remains the central liability. The security architecture detailed in Section 4 is not an optional add-on; it is the non-negotiable prerequisite for this entire workflow.
The “parallel” workflow is the new standard for elite development. Its successful implementation hinges on strict role delineation:
- Use GitHub Copilot for writing code.
- Use the Gemini API for powering code.
- Use the Gemini CLI for managing and reviewing code.