DecodedPapers
RSS Feed

I'm Mayer Antoine. I translate AI research into working engineering practice. DecodedPapers documents how I take architectures and techniques from recent AI/ML papers, implement them on real datasets, and adapt them to solve practical problems — using standard tools and modest compute, under the same constraints most practitioners actually face.

Most papers focus on benchmark results. I ask a different question: what can a practitioner actually extract and apply from this? Every post is a working implementation with documented decisions, adaptations, and tradeoffs — something you can run, customize, and carry into your own work.
Recent Posts
- A Practitioner's Guide to Multi-Document Summarization with RAG: Ask–Retrieve–Relate–Summarize
  Implement the X-Sum framework for multi-document summarization using RAG, hybrid retrieval (ChromaDB + ColBERT), debate prompting, and SPECTER embeddings — with working Python code.
- Build Your Own AI Literature Review Tool: A Step-by-Step Guide Using RAG and LLM Debate Ranking
  Build a cost-effective AI literature review tool using RAG, BM25 hybrid retrieval, and debate-style LLM ranking — based on the LitLLM paper, with CLI and web app.
- Evaluating Agentic RAG Systems - A Practical Guide with PubMedQA
  Practical guide to evaluating agentic RAG on PubMedQA using RAGAS — covering context precision, faithfulness, answer correctness, and exact match with ChromaDB vector store.
- Evidence-First AI - Implementing Agentic RAG with OpenAI Agents SDK
  Updated: Build a PaperQA-inspired agentic RAG system using OpenAI Agents SDK — multi-agent orchestration, semantic search with ChromaDB, ReAct pattern, and LLM-as-judge reranking.