API Proxy Server Documentation

Overview

This Cloudflare Worker script is a proxy server for various AI models and APIs. It acts as a gateway between clients and different AI providers, offering caching, rate limiting, and security features.

Configuration

Constants
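The constants themselves are not enumerated in this section. As a sketch, these are the kinds of values such a worker typically pins at the top of the script; the names and numbers below are hypothetical, chosen to match the caching, rate-limiting, and size-limit features described later.

```javascript
// Hypothetical constants — illustrative only, not the actual values.
const MAX_BODY_SIZE = 1024 * 1024;   // request size limit: 1 MiB
const CACHE_TTL_SECONDS = 300;       // how long GET/HEAD responses stay cached
const RATE_LIMIT_PER_MINUTE = 60;    // per-client request budget
```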

Main Functions

1. API Endpoint Handling

The server handles several route families: CORS preflight (OPTIONS), model-list endpoints, and proxied provider paths.

2. Supported AI Providers

Requests can be routed to any of the following providers:

const models = [
  "nvidia", "openrouter", "deepinfra", "groq", "ollama",
  "azure", "grok", "x.ai", "gemini", "typegpt", "pollinations",
  "api.airforce", "gpt4free.pro", "nectar", "audio", "perplexity",
  "huggingface", "puter"
];

3. Security Features

Key Functions

handleRequest(request, env, ctx)

The main entry point; it processes every incoming request in the following order:

  1. OPTIONS requests: Handles CORS preflight
  2. Model list endpoints: Returns available models
  3. Caching: Caches GET/HEAD responses
  4. Rate limiting: Protects against abuse
  5. Content validation: Enforces the request size limit
  6. Routing: Forwards requests to appropriate providers
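The dispatch order above can be sketched as follows. This is a minimal illustration, not the actual implementation: the model list is abbreviated, the rate-limiting and caching steps are only noted in comments, and the 1 MiB size limit is an assumption.

```javascript
const MODELS = ["nvidia", "openrouter", "groq"]; // abbreviated sample

async function handleRequest(request, env, ctx) {
  const url = new URL(request.url);

  // 1. CORS preflight: answer OPTIONS directly, never forward it.
  if (request.method === "OPTIONS") {
    return new Response(null, {
      status: 204,
      headers: {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        "Access-Control-Allow-Headers": "*",
      },
    });
  }

  // 2. Model list endpoint: served locally, no upstream call.
  if (url.pathname === "/v1/models") {
    return Response.json({ object: "list", data: MODELS.map((id) => ({ id })) });
  }

  // 3–4. Cache lookup and rate limiting would run here (omitted in this sketch).

  // 5. Content validation: reject oversized bodies early (assumed 1 MiB limit).
  if (request.method === "POST") {
    const length = Number(request.headers.get("content-length") ?? 0);
    if (length > 1024 * 1024) {
      return new Response("Payload too large", { status: 413 });
    }
  }

  // 6. Routing: anything else would be forwarded to the matching provider.
  return new Response("Not found", { status: 404 });
}
```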

forwardApi(request, newUrl, liteRequest, ctx, cache_control)

Forwards API requests with caching and CORS support.
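A sketch of what this forwarding step likely looks like, assuming the implementation rewrites the URL, fetches upstream, then decorates the response. The `withCors` helper name is illustrative, and `liteRequest` is assumed to be a stripped-down copy of the original request.

```javascript
// Copy a response while adding CORS and cache headers. A new Response is
// needed because an upstream response's headers are immutable.
function withCors(response, cacheControl) {
  const headers = new Headers(response.headers);
  headers.set("Access-Control-Allow-Origin", "*");
  if (cacheControl) headers.set("Cache-Control", cacheControl);
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}

async function forwardApi(request, newUrl, liteRequest, ctx, cache_control) {
  // Re-target the sanitized request at the provider's URL and fetch it.
  const upstream = await fetch(new Request(newUrl, liteRequest));
  const response = withCors(upstream, cache_control);
  // In the real worker the response is presumably also written to the cache
  // via ctx.waitUntil(...) so repeated GET/HEAD requests are served locally.
  return response;
}
```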

forwardWorker(env, request, liteRequest, pathname, ctx)

Special handling for Cloudflare AI worker endpoints.
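As a sketch, assuming the worker exposes Workers AI through an `AI` binding and maps a path segment onto `env.AI.run()`; the `/ai/` prefix and the path-to-model mapping are assumptions, not taken from the actual code.

```javascript
async function forwardWorker(env, request, liteRequest, pathname, ctx) {
  // Assumed convention: /ai/<model> selects the Workers AI model to run.
  const model = pathname.replace(/^\/ai\//, "");
  const inputs = await request.json();
  // env.AI.run is the standard Workers AI binding call.
  const result = await env.AI.run(model, inputs);
  return Response.json(result, {
    headers: { "Access-Control-Allow-Origin": "*" },
  });
}
```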

shield(url, options)

Performs safety checks on outbound requests before they are forwarded.
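The specific checks are not enumerated here; as an illustrative sketch, checks of this kind usually validate the target scheme and host before forwarding. The rules below are hypothetical examples, not the actual ones.

```javascript
// Hypothetical safety checks — illustrative, not the actual implementation.
function shield(url, options = {}) {
  const parsed = new URL(url);
  // Only plain http(s) targets are allowed.
  if (!["http:", "https:"].includes(parsed.protocol)) return false;
  // Refuse obvious loopback/internal hosts to limit SSRF.
  const host = parsed.hostname;
  if (host === "localhost" || host.startsWith("127.") || host.endsWith(".internal")) {
    return false;
  }
  // Optional request-size guard.
  if (options.maxBodySize && options.bodySize > options.maxBodySize) return false;
  return true;
}
```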

retrieveCache(request, liteRequest, pathname, ctx, host, cache_control)

Looks up and serves cached responses for repeated requests.
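A sketch of the lookup, assuming the Workers Cache API (`caches.default`). The key-normalization scheme here is an assumption: equivalent calls are mapped to one synthetic GET request so they hit the same cache entry.

```javascript
// Normalize a request into a stable cache key: host + path + query only,
// so per-client headers and auth don't fragment the cache.
function buildCacheKey(request, pathname, host) {
  const url = new URL(request.url);
  return new Request(`https://${host}${pathname}${url.search}`, { method: "GET" });
}

async function retrieveCache(request, liteRequest, pathname, ctx, host, cache_control) {
  // Only idempotent requests are cacheable.
  if (request.method !== "GET" && request.method !== "HEAD") return null;
  const cache = caches.default; // available in the Workers runtime only
  const key = buildCacheKey(request, pathname, host);
  const hit = await cache.match(key);
  if (hit) return hit;
  // On a miss the caller fetches upstream and stores a copy without
  // blocking the response, e.g.:
  //   ctx.waitUntil(cache.put(key, response.clone()));
  return null;
}
```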

Caching Strategy

Error Handling

Special Features

Deployment

This code is optimized for the Cloudflare Workers runtime.

This proxy server provides a robust, scalable API gateway that enhances security, performance, and flexibility for integrating various AI services.