JOB DESCRIPTION
AI/ML & Full-Stack Python Developer
Embeddings, Knowledge-Graph & End-to-End Deployment
(Python • FastAPI • Semantic Kernel • React/Next.js)
About the Project
We’re building an AI-Native Knowledge Management System (KMS) that transforms unstructured docs, support tickets, and chat logs into a living, self-maintaining knowledge base.
At its core is Microsoft Semantic Kernel orchestrating multiple LLM-powered agents (auto-tagging, conflict detection, Q&A, SOP compliance). The system is modular and built around a swappable AI Core, enabling tech stack flexibility.
We’re looking for a hybrid Full-Stack Python Engineer who can both build powerful AI pipelines and ship them into production via a modern Python + JS stack. This is a founding-level role, not a corporate job.
Tech Stack
- Backend: Python • FastAPI • PostgreSQL + pgvector • Redis
- AI: Microsoft Semantic Kernel orchestration; adapters for OpenAI GPT, Gemini, DeepSeek, Local Llama, SentenceTransformers
- Frontend: Next.js + Tailwind
- DevOps: Docker • Kubernetes • GitHub Actions
What You’ll Actually Do
You’ll be the technical glue between our AI brains and the product backend. This isn’t just about models or endpoints — it’s about building the right structure to make our AI-native system work like magic.
Backend Engineering (40–50%)
- Design and evolve the block-based backend architecture: the structure that stores and connects millions of knowledge units.
- Develop key backend systems for user/team management, roles/permissions, and content access.
- Update and maintain the PostgreSQL/pgvector schema, and unblock AI team members when data access or structure changes are needed.
- Build FastAPI endpoints to connect the AI Core with the frontend and support LLM agents.
- Create background workers for indexing, scoring, data syncs, etc. (async tasks, queues).
- Collaborate closely with frontend devs and AI engineers to ensure a smooth data flow across the stack.
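To give a concrete flavour of the background-worker side of the role, here is a minimal sketch of the async task-queue pattern using only the standard library (the task name and payload are hypothetical; the real system would use Redis-backed queues and real indexing I/O):

```python
import asyncio

async def index_document(doc_id: str) -> str:
    """Hypothetical indexing task: chunk, embed, and store one document."""
    await asyncio.sleep(0)  # stand-in for real I/O (DB writes, embedding calls)
    return f"indexed:{doc_id}"

async def worker(queue: asyncio.Queue, results: list) -> None:
    # Pull document IDs off the queue until the queue is drained.
    while True:
        doc_id = await queue.get()
        try:
            results.append(await index_document(doc_id))
        finally:
            queue.task_done()

async def run_indexing(doc_ids: list, concurrency: int = 2) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    for doc_id in doc_ids:
        queue.put_nowait(doc_id)
    results: list = []
    workers = [asyncio.create_task(worker(queue, results))
               for _ in range(concurrency)]
    await queue.join()   # block until every queued task has been processed
    for w in workers:
        w.cancel()       # workers loop forever; stop them explicitly
    return results

results = asyncio.run(run_indexing(["a", "b", "c"]))
```

The same shape generalises to scoring and data-sync jobs: producers enqueue work, a bounded pool of workers drains it concurrently.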
AI/ML Engineering (50–60%)
- Build and maintain AI services: chunking, embedding, semantic search, duplicate detection.
- Optimize hybrid search flows (BM25 + vector) and retrieval filters.
- Implement FastAPI endpoints that serve real-time AI output to the frontend and other agents.
- Prototype and deploy conflict detection logic (cosine sim, MinHash, ML models).
- Help manage the pluggable model registry and benchmark cost vs. performance.
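As a rough sketch of the cosine-similarity half of duplicate detection (the vectors, IDs, and threshold below are illustrative; in production the embeddings would come from SBERT/OpenAI and live in pgvector):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_duplicates(embeddings: dict, threshold: float = 0.95) -> list:
    """Flag pairs of knowledge blocks whose embeddings are near-identical."""
    ids = sorted(embeddings)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if cosine_similarity(embeddings[a], embeddings[b]) >= threshold:
                pairs.append((a, b))
    return pairs

# Toy 3-dimensional "embeddings"; real ones are hundreds of dimensions.
vectors = {
    "block-1": [0.9, 0.1, 0.0],
    "block-2": [0.89, 0.11, 0.0],  # near-duplicate of block-1
    "block-3": [0.0, 0.0, 1.0],    # unrelated
}
dupes = find_duplicates(vectors)   # -> [("block-1", "block-2")]
```

The brute-force pairwise loop is only for illustration; at scale this is exactly where MinHash/LSH or an HNSW index replaces the O(n²) comparison.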
Why this hybrid role matters: Our AI engineers move fast but are often blocked when backend support is missing: schema changes, missing APIs, async workflows. That’s where you come in. Because you also speak the “AI language,” you won’t just unblock them; you’ll co-create smarter systems and guide architecture from a product-aware, AI-native perspective.
Must-Have Skills
- Python (Backend): 2+ years building and deploying real-world services (FastAPI, asyncio, Pydantic). You can structure clean APIs, manage async tasks, and debug live services.
- Database Design: Strong experience with PostgreSQL, pgvector, and Redis. You know how to structure and evolve schemas — especially for modular block-based systems.
- Backend Architecture: You understand how to build scalable, maintainable backend systems that support AI flows. You’ve worked with modular content structures or knowledge blocks.
- AI/ML Integration: Hands-on with embeddings (OpenAI, SBERT), similarity search, and LLM-based pipelines. You don’t need to invent new models — but you know how to use them effectively.
- Vector Search: Comfortable working with pgvector, HNSW indexes, hybrid search (BM25 + vectors), and retrieval tuning.
- Full-Stack Collaboration: You’ve worked with frontend teams (React/Next.js) and understand how to build APIs that serve dynamic UIs.
- DevOps Basics: Docker, GitHub Actions, debugging deployments, profiling CPU/GPU usage — you’re not DevOps, but you’re DevOps-aware.
- Startup Mindset: You can work independently, make decisions, communicate clearly, and unblock others without waiting for specs.
Nice-to-Haves
- Frontend experience (React + Tailwind, SSR with Next.js)
- Prompt engineering & RAG architectures
- Yjs / CRDTs for real-time collaboration
- Knowledge-graphs or domain ontologies
- Rust or Go for performance bottlenecks
- Prior experience with compliance or enterprise-scale knowledge management systems
What We Offer
- Partnership Potential: You’ll have the opportunity to receive equity/share options and become a co-owner of the project.
- Ownership from Day 1: You’re not a cog — you’ll be leading core architecture.
- AI-Native from the Ground Up: Design and deploy cutting-edge AI services in production. Not a chatbot plugin, but a truly AI-driven UX.
- Greenfield Codebase: No legacy baggage, weekly iterations, clear goals, real usage. See your models power real user-facing features.
- Remote-first: Work from anywhere, fully async, EU-friendly hours.
Compensation
- Trial period: 35,000 UAH/month
- After trial: Base compensation will be adjusted based on performance and responsibilities, typically between 35,000 and 70,000 UAH/month
- Long-term potential to switch to a co-ownership model with equity or stock options
- We value builders who want to grow with the product — not short gigs.
How to Apply
- Submit your CV and a few words about a product or feature you built that you’re proud of (bonus if it is a KMS and uses ML or AI).
- GitHub / blog / demo links (if you have them).
We’re not hiring for a role — we’re looking for a partner. If you love building, want real impact, and are excited about shipping AI products — let’s talk.
Contact information →