# Getting Started
Welcome to fast-crawl! This guide will help you install, configure, and run your own blazing-fast metasearch engine and web crawler.
## Installation

### 1. Prerequisites
- Bun (recommended for best performance)
- Node.js (for some tooling)
- Docker (optional, for containerized deployment)
### 2. Clone the Repository

```bash
git clone https://github.com/MarkusBansky/fast-crawl.git
cd fast-crawl
```
### 3. Install Dependencies

```bash
bun install
```
### 4. Start the Server

```bash
bun run start
```

- The search API will be available at: `http://localhost:3000/v1/search?query=your+search+query`
- tRPC endpoint: `http://localhost:3000/trpc`
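The search endpoint above can also be called programmatically. Here is a minimal TypeScript sketch: the `/v1/search` path and `query` parameter come from the URL shown above, but the helper names and the assumption that the response body is JSON are illustrative, not part of the documented API.

```typescript
// Build the /v1/search URL for a given query string.
function buildSearchUrl(base: string, query: string): string {
  const url = new URL("/v1/search", base);
  url.searchParams.set("query", query); // spaces are encoded as "+"
  return url.toString();
}

// Call the locally running server (assumes step 4 above has been completed).
// The JSON response shape is an assumption; inspect the actual payload.
async function search(query: string): Promise<unknown> {
  const res = await fetch(buildSearchUrl("http://localhost:3000", query));
  if (!res.ok) throw new Error(`Search failed with status ${res.status}`);
  return res.json();
}
```

For example, `buildSearchUrl("http://localhost:3000", "your search query")` produces the same URL shown above.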
### 5. Docker Usage (Optional)

```bash
docker build -t fast-crawl:local .
docker run -d -p 3000:3000 fast-crawl:local
```
## Configuration

Set environment variables as needed:

- `DISABLE_EXTERNAL_APIS` (mock mode)
- `REDIS_HOST` / `REDIS_PORT` (for Valkey/Redis caching)
- `OLLAMA_HOST` / `OLLAMA_EMBED_MODEL` (for embeddings reranking)
- `JWT_SECRET` / `API_KEY` (for authentication)
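As a sketch, the variables above might be read into a typed config object like this. The variable names come from the list above; the default values shown here are illustrative assumptions, not the project's actual defaults.

```typescript
// Read the documented environment variables with illustrative fallbacks.
// NOTE: the defaults below are assumptions for this example only.
const config = {
  mockMode: process.env.DISABLE_EXTERNAL_APIS === "true",
  redisHost: process.env.REDIS_HOST ?? "localhost",
  redisPort: Number(process.env.REDIS_PORT ?? 6379),
  ollamaHost: process.env.OLLAMA_HOST ?? "http://localhost:11434",
  ollamaEmbedModel: process.env.OLLAMA_EMBED_MODEL, // no sensible default to assume
  jwtSecret: process.env.JWT_SECRET,
  apiKey: process.env.API_KEY,
};
```

Keeping all environment reads in one place like this makes it easy to see which features (caching, reranking, auth) are enabled in a given deployment.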
## Development

### Test Coverage Standards
Fast-Crawl maintains high test coverage standards for code quality:
- Minimum Coverage: 85% across all metrics (statements, branches, functions, lines)
- Current Coverage: 88.65% with comprehensive test suite
- Total Tests: 273 tests across 35 test files
Run tests and check coverage:

```bash
# Run all tests
bun run test

# Run tests with coverage report
bun run test:coverage
```
### Development Practices

**Bun-First Development (Required):**

- Always use `bun` commands instead of npm/yarn
- Install packages: `bun add <package>` (not `npm install`)
- Run scripts: `bun run <script>` (not `npm run`)
- Execute binaries: `bunx <command>` (not `npx`)
**Code Quality:**

- TypeScript strict mode enabled
- Comprehensive error handling and edge-case testing
- Modular architecture with clear separation of concerns
- Performance optimization leveraging Bun's speed and memory efficiency
## Next Steps

For more details, see the [GitHub repository](https://github.com/MarkusBansky/fast-crawl).