[{"content":"Welcome to my portfolio\n","date":"1 May 2026","externalUrl":null,"permalink":"/Portfolio/","section":"Abbas Portfolio","summary":"","title":"Abbas Portfolio","type":"page"},{"content":"","date":"1 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/ai/","section":"Tags","summary":"","title":"AI","type":"tags"},{"content":" Automating RAG Workflows: Keeping Your AI Knowledge Always Updated # Imagine you built a chatbot for your portfolio site and it works perfectly on day one.\nIt knows your projects, your background, and the articles you wrote last week. Then you publish a new blog post, update a project, or change a section on your About page. The chatbot still answers with the old information because nobody refreshed its knowledge.\nThat is the problem with static knowledge in a RAG system. The model is only as good as the data it sees, and if the data is outdated, the answers will be outdated too.\nWhat Is a RAG Workflow? # A RAG workflow is the pipeline that keeps a retrieval-augmented generation system fed with fresh content.\nInstead of manually copying content into a chatbot knowledge base every time something changes, the workflow takes care of it for you. It reads source files, cleans them, prepares them for retrieval, and updates the knowledge file or index on a schedule or on every push.\nThink of it like a content sync job. The website changes, the knowledge layer follows.\nWhy Static Knowledge Becomes a Problem # At first, a static knowledge file feels simple. You export your content once, point the chatbot at it, and you are done.\nThe problem is that websites do not stay still. Developers fix typos, add case studies, rewrite summaries, and publish new posts. 
If the chatbot is not updated at the same pace, it starts drifting away from the real site.\nThat creates a few obvious issues.\nThe chatbot may answer with old project descriptions.\nIt may miss a newly published article.\nIt may quote text that was removed weeks ago.\nAnd from a user point of view, that is worse than not having a chatbot at all, because the answer sounds confident while being wrong.\nHow Automation Solves It # Automation fixes the boring part of the process.\nInstead of relying on a manual export, the workflow watches for changes, rebuilds the knowledge file, and ships it together with the site. That means the content source and the chatbot source stay aligned.\nThe idea is simple:\nRead the markdown files from the content folder.\nClean the files so they contain only useful text.\nWrite the result to a public knowledge file inside static/.\nRun the Hugo build.\nDeploy the site.\nThat is a very normal developer workflow. It is just automation applied to content instead of code.\nHow It Works Step by Step # 1. The Repository Changes # A developer edits a blog post, updates a project page, or changes the About page.\n2. GitHub Actions Triggers # When the change lands on main, GitHub Actions starts the workflow.\n3. The Knowledge File Is Generated # Before Hugo builds the site, the workflow reads the markdown files and extracts only the parts that matter. Useful front matter like title, summary, date, tags, and external links can be kept. Decorative HTML, styles, and raw markup should be removed.\nThe result is a clean file that a chatbot can actually use.\n4. Hugo Builds the Site # Because the file lives in static/, Hugo copies it into the final public/ output automatically.\n5. GitHub Pages Deploys the Result # The deployed site now includes both the normal pages and the knowledge file. 
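The generation step (step 3) can be sketched as a small script. This is a minimal sketch under stated assumptions: `---`-delimited front matter and the helper names `strip_front_matter` and `build_knowledge` are illustrative, not the actual generator used on this site.

```python
# Minimal knowledge-file generator sketch (paths and helper names are illustrative).
# Reads Hugo markdown files, drops front matter, writes one combined text file.
from pathlib import Path

def strip_front_matter(text: str) -> str:
    # Hugo front matter is commonly delimited by '---' lines at the top of the file.
    if text.startswith("---"):
        parts = text.split("---", 2)
        if len(parts) == 3:
            return parts[2]
    return text

def build_knowledge(content_dir: str, out_file: str) -> None:
    sections = []
    for md in sorted(Path(content_dir).rglob("*.md")):
        body = strip_front_matter(md.read_text(encoding="utf-8"))
        sections.append(body.strip())
    Path(out_file).parent.mkdir(parents=True, exist_ok=True)
    Path(out_file).write_text("\n\n".join(sections), encoding="utf-8")
```

A real generator would also strip raw HTML and keep selected front matter fields, but the shape stays the same: read, clean, write to static/.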
The chatbot can fetch the updated text from the published URL.\nArchitecture Overview # A practical setup has a few moving parts.\nSource Content # The source of truth is the markdown inside your Hugo content/ folder. That is where the real website content lives.\nGenerator Script # A small script reads those files and produces a text version for retrieval. This is where you clean out HTML, style blocks, and anything that would confuse the chatbot.\nStatic Output # The generated knowledge file lives in static/knowledge/portfolio-knowledge.txt. Hugo publishes it like any other static asset.\nChatbot Embed # The chatbot front-end points to the public knowledge URL. When the file updates, the chatbot sees the new content without needing a manual sync step.\nHere is the basic flow:\ncontent/*.md -\u0026gt; knowledge generator -\u0026gt; static/knowledge/portfolio-knowledge.txt -\u0026gt; Hugo build -\u0026gt; GitHub Pages -\u0026gt; chatbot reads updated content\nExample: GitHub Actions Updating a Chatbot Knowledge File # Let’s say you publish a new project post about a chatbot integration.\nWithout automation, you would need to remember to export the content, clean it, and upload it somewhere the chatbot can read.\nWith automation, the workflow does it for you.\n- name: Generate portfolio knowledge file\n  run: bash .github/scripts/generate-knowledge.sh\n- name: Build the site\n  run: hugo build --gc --minify\nThat is the useful part. The build is not just making HTML. It is also refreshing the AI knowledge layer.\nIf the workflow is set up well, the chatbot can answer with the latest project list, newest article titles, and current descriptions right after deployment.\nWhat I Learned # The biggest thing I learned is that RAG is not only about retrieval. It is also about maintenance.\nIf the knowledge source is messy, the chatbot becomes messy. If the knowledge source is stale, the chatbot becomes stale. 
So the quality of the workflow matters just as much as the quality of the model prompt.\nI also learned that the best automation is usually the boring kind. No complex orchestration is needed here. A small script, a clean static file, and a build step are often enough.\nThat is good news for developers, because it means you can build something useful without overengineering it.\nChallenges and Trade-Offs # The first challenge is cleaning the input well enough. Markdown files often contain front matter, raw HTML, code blocks, embedded components, and content meant for browsers rather than chatbots.\nIf you pass all of that directly into a knowledge file, retrieval quality drops fast.\nAnother trade-off is freshness versus complexity. A full sync pipeline can get complicated if you try to handle every edge case. For a small portfolio, a simple workflow may be the better choice.\nThere is also a practical question about what to keep. Not every piece of metadata belongs in the chatbot. Useful fields like title, summary, date, tags, and externalUrl can help. Visual markup and layout code usually should not.\nConclusion # Automating a RAG workflow turns a chatbot from a one-time setup into a living part of the site.\nThe main idea is straightforward: keep the knowledge source close to the content source, update it automatically, and publish it with the rest of the site.\nFor a developer portfolio, that is a strong pattern because it keeps the chatbot honest. It does not have to guess what your site says. It can read the same content your visitors see.\nThat is the real value of automation here. 
It saves time, reduces drift, and makes the chatbot feel like part of the product instead of a separate toy.\n","date":"1 May 2026","externalUrl":null,"permalink":"/Portfolio/blog/automating-rag-workflows/","section":"Blog","summary":"","title":"Automating RAG Workflows: Keeping Your AI Knowledge Always Updated","type":"blog"},{"content":"","date":"1 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"Welcome to my blog.\n","date":"1 May 2026","externalUrl":null,"permalink":"/Portfolio/blog/","section":"Blog","summary":"","title":"Blog","type":"blog"},{"content":"","date":"1 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/development/","section":"Tags","summary":"","title":"Development","type":"tags"},{"content":"","date":"1 May 2026","externalUrl":null,"permalink":"/Portfolio/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":" Using a Code Agent as a Developer: From Manual Work to Automation # Imagine you are setting up a small feature on a portfolio site.\nYou need to update a workflow, add a knowledge generator, test the build, and make sure the site still deploys correctly. You can do all of that manually, but it takes time, context switching, and a lot of checking.\nA code agent changes that workflow. Instead of asking the developer to do every small step, the agent can inspect the codebase, make edits, run checks, and keep going until the task is done or it needs clarification.\nThat does not mean the developer becomes unnecessary. It means the developer shifts from typing every action by hand to directing the work at a higher level.\nWhat Is a Code Agent? # A code agent is an AI system that works inside a codebase with tools.\nIt can read files, search for symbols, suggest edits, run commands, and sometimes test or validate changes. 
The key difference from a normal chatbot is that it can act, not just talk.\nYou can think of it as a junior developer with very fast typing, perfect recall of the repository, and no judgment on its own. It still needs guidance, review, and boundaries.\nHow It Changes the Development Workflow # The old way of working is simple.\nYou open the repo, find the file, edit it, run the build, inspect the error, fix the issue, and repeat.\nThe code-agent way is different.\nYou describe the goal, and the agent handles the first pass. It searches the relevant files, makes the edit, validates the result, and reports back.\nThis is useful because a lot of developer time is not spent on hard thinking. It is spent on searching, wiring, repeating, and checking the same kinds of changes.\nManual Work vs Agent Work # Manual work gives you full control, but it also means you are responsible for every small step.\nAgent work gives you speed and momentum, but only if the task is scoped well.\nIf the task is clear, the agent can be very effective:\nFind the right file.\nMake the change.\nCheck the result.\nFix small mistakes.\nReport the outcome.\nIf the task is vague, the agent can drift. Then you end up spending time correcting assumptions instead of getting value from the automation.\nThe best use case is often a task that is repetitive, local, and easy to verify.\nArchitecture Overview # A code agent usually sits between the user and the repository.\nUser Goal # The user describes the outcome they want, not every edit line by line.\nAgent Planner # The agent breaks the work into steps. For example: inspect the current workflow, update the generator, validate the build, and avoid changing unrelated files.\nTools # The agent uses tools such as file search, file reads, apply patch, terminal commands, and tests.\nValidation Loop # After each edit, the agent should run a check. 
That might be a syntax check, a build command, or a narrow test.\nThe loop looks like this:\nunderstand goal -\u0026gt; inspect code -\u0026gt; edit -\u0026gt; validate -\u0026gt; adjust -\u0026gt; finish\nThat loop is what makes the agent feel useful instead of noisy.\nExample: Letting an Agent Set Up CI/CD or a Chatbot Embed # A good example is a small website automation task.\nSay you want to add a chatbot embed to a Hugo portfolio, or update a GitHub Actions workflow so it generates a knowledge file before deployment.\nManually, you would:\nFind the right layout override.\nAdd the script in the right place.\nMake sure it does not appear twice.\nUpdate the workflow.\nTest the site locally.\nWith a code agent, you can describe the goal more directly:\n\u0026ldquo;Add the chatbot globally, keep it before \u0026lt;/body\u0026gt;, and make sure the workflow still deploys the site correctly.\u0026rdquo;\nThe agent can inspect the layout structure, identify the override point, edit the file, and validate that the final HTML includes the script once on every page.\nHere is the kind of pseudo-code that reflects the workflow mindset:\ndef improve_repo(task):\n    files = search_for_relevant_files(task)\n    changes = propose_edits(files, task)\n    apply_changes(changes)\n    run_validation()\n    return report_results()\nThat looks simple, but it captures the value well. The agent removes a lot of mechanical work from the developer’s plate.\nWhat I Learned # The biggest lesson is that a code agent is most useful when the developer already knows what good looks like.\nIf you can explain the result clearly, the agent can often help you get there faster.\nI also learned that code agents are best for acceleration, not blind trust. They are great at reading a repo, making local changes, and handling routine tasks. 
They are not a replacement for understanding the system.\nIn practice, the developer still needs to review architecture choices, check for side effects, and decide whether the change really fits the project.\nChallenges and Trade-Offs # The first trade-off is precision.\nA code agent can be fast, but if the instructions are vague, it may choose the wrong file, change too much, or solve the wrong problem.\nThe second trade-off is verification.\nIf you let the agent make changes without checking them, you can end up with subtle regressions. The best setup is one where the agent must validate its own work before you accept it.\nThe third trade-off is scope.\nAgents are strongest when the task is bounded. Updating a workflow, wiring a layout override, or refactoring a small utility is a good fit. Designing an entire product architecture from scratch is a different kind of problem.\nThere is also a human factor. Some developers like full manual control. Others like delegation. In reality, the best workflow is usually a mix of both.\nConclusion # Using a code agent as a developer is less about replacing coding and more about changing the shape of the work.\nInstead of doing every repetitive step yourself, you guide the agent, review the output, and focus on the parts that actually need judgment.\nFor tasks like CI/CD setup, chatbot integration, or repeated repository changes, that can save a lot of time. For vague or high-risk work, manual control still matters.\nThat balance is the main lesson. A code agent is useful when it helps you move faster without losing understanding of the system.\nFor me, that makes it a practical tool rather than a gimmick. 
It is another way to work like a developer, just with more automation in the loop.\n","date":"1 May 2026","externalUrl":null,"permalink":"/Portfolio/blog/using-a-code-agent-as-a-developer/","section":"Blog","summary":"","title":"Using a Code Agent as a Developer: From Manual Work to Automation","type":"blog"},{"content":"","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/ai-agents/","section":"Tags","summary":"","title":"AI Agents","type":"tags"},{"content":" AI Assistants vs AI Agents: What Developers Should Know # Imagine two different ways of using AI.\nIn the first one, you open ChatGPT and ask: \u0026ldquo;Can you help me write a better email to a recruiter?\u0026rdquo; It gives you a polished draft. You edit it and send it yourself.\nIn the second one, you tell an AI system: \u0026ldquo;Find three companies hiring junior developers in Copenhagen, compare the roles, and draft tailored emails.\u0026rdquo; The system searches, reasons, uses tools, and comes back with progress.\nBoth examples use AI, but they are not the same kind of system. This is where people often mix up AI assistants and AI agents. For developers, the distinction matters because it changes how we design the system, how much control we give it, and what risks we need to handle.\nWhat Is an AI Assistant? # An AI assistant responds to user input. It helps, explains, drafts, suggests, or answers, but it usually waits for the user to tell it what to do next.\nCommon examples include customer support chatbots, coding copilots, writing assistants, and search assistants.\nThe key idea is that an assistant is reactive. It does not normally decide on its own that a task should continue, call multiple tools, or pursue a goal without more input.\nThat does not make assistants less valuable. Often, an assistant is exactly what you want: fast help, clear answers, and the human still in control.\nWhat Is an AI Agent? # An AI agent can work toward a goal with some level of autonomy. 
Instead of only responding once, an agent can plan steps, use tools, observe results, update its plan, and continue until the task is complete or it needs human input.\nAI agents often include tools, memory, planning, and a controller loop that decides what to do next.\nA simple analogy is the difference between an employee waiting for instructions and an employee who takes initiative. An assistant says, \u0026ldquo;Tell me what you need.\u0026rdquo; An agent says, \u0026ldquo;I understand the goal. I will make a plan and keep going.\u0026rdquo;\nKey Differences: Assistant vs Agent # The easiest way to compare them is behavior.\nAutonomy # An assistant waits for the user. An agent can continue after the initial instruction.\nGoal Execution # An assistant helps with a task. An agent tries to complete a goal.\nFor example, an assistant might draft an itinerary. An agent might find flights, check hotels, compare prices, and prepare booking options.\nTool Usage # Assistants can use tools, but agents depend on them more heavily. An agent might call APIs, query a database, read files, browse websites, or trigger workflows.\nMulti-Step Reasoning # Assistants can reason, but they often answer in a single turn. Agents usually work through multiple steps:\nUnderstand the goal.\nDecide what information is needed.\nUse a tool.\nReview the result.\nChoose the next action.\nThat loop is what makes agents powerful, but also more complex.\nHow AI Agents Work Step by Step # A typical agent flow looks like this:\n1. Goal Input # The user gives the agent a goal: \u0026ldquo;Research three AI tools for note-taking and summarize which one is best for students.\u0026rdquo; The goal is broader than a normal question. It requires searching, comparing, and organizing information.\n2. Planning # The agent breaks the goal into steps: search for tools, collect pricing and features, compare strengths and weaknesses, and write a recommendation.\n3. 
Tool Usage # The agent uses tools such as web search, APIs, databases, email services, calendars, or code execution. The LLM decides when and how to use them.\n4. Memory and Iteration # After each tool call, the agent observes the result. It may store useful details in memory, update its plan, and continue. This matters because real tasks rarely work perfectly on the first try.\n5. Final Output # When the agent has enough information, it returns the final result: a summary, report, completed task, or set of recommended actions.\nArchitecture Overview # Most agent systems are built from a few core components.\nLLM # The LLM is the reasoning and language layer. It interprets the goal, decides what to do next, and writes the final response.\nTools # Tools are actions the agent can take outside the model: API calls, database queries, file operations, web browsing, or custom functions.\nMemory # Memory lets the agent keep track of context. This can be short-term memory inside the current task or long-term memory stored in a database.\nPlanner or Controller Loop # The planner decides the next step. 
The controller loop runs the pattern:\nthink -\u0026gt; act -\u0026gt; observe -\u0026gt; update -\u0026gt; repeat\nHere is a small pseudo-code example:\ndef run_agent(goal):\n    memory = []\n    plan = create_plan(goal)\n    while not task_is_complete(plan, memory):\n        next_step = choose_next_step(plan, memory)\n        result = use_tool(next_step)\n        memory.append(result)\n        plan = update_plan(plan, result)\n    return write_final_answer(memory)\nThis is simplified, but it shows the main idea: the agent keeps working until it reaches a useful result.\nReal-World Use Cases # AI assistants are a good fit when the user should stay closely involved: customer support, coding help, writing feedback, documentation explanations, and brainstorming.\nAI agents make more sense when the task has a clear goal and multiple steps: booking trips, automating workflows, researching topics, creating reports, or handling repetitive admin work.\nThe difference is not just technical. It is also about trust. More autonomy means we need better guardrails.\nSimple Example Project: A Research Agent # A practical beginner project is an agent that researches and summarizes a topic. The user asks:\n\u0026ldquo;Research the difference between React and Vue for a junior frontend developer.\u0026rdquo;\nBehind the scenes, the agent creates a plan, searches for sources, extracts key points, stores useful findings, checks the original goal, and writes a recommendation.\nThis is more than a chatbot response. The system is gathering information and improving the answer through steps.\nWhat I Learned Building My First AI Agent # The biggest lesson I learned is that the LLM is only one part of the system. The hard parts are often around the model: giving the agent useful tools, keeping the goal clear, preventing endless loops, logging each step, and knowing when to ask the user.\nBuilding an agent taught me that good AI engineering is still software engineering. 
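The controller pattern described above can also be turned into a tiny runnable sketch. Everything here is illustrative: the "plan" is just a fixed list of tool names, the stub tools return canned results, and no real LLM is involved.

```python
# Toy controller loop: think -> act -> observe -> update, with stubbed tools.
# All names and canned results are illustrative, not a real agent framework.

def run_toy_agent(goal, tools, max_steps=5):
    memory = []
    plan = list(tools)  # "plan" = tool names in order (dicts keep insertion order)
    for step in plan[:max_steps]:
        result = tools[step](goal)   # act
        memory.append(result)        # observe
        if result.get("done"):       # update: stop early once a tool reports done
            break
    # "final answer" = a summary built from everything observed
    return {"goal": goal, "observations": memory}

# Stub tools standing in for web search and summarization.
tools = {
    "search": lambda g: {"tool": "search", "found": ["doc1", "doc2"], "done": False},
    "summarize": lambda g: {"tool": "summarize", "text": "short summary", "done": True},
}
```

Even in this toy form, the hard parts named above show up: the loop needs state, a stopping rule, and a cap on steps to avoid running forever.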
The model matters, but so do state, error handling, permissions, and user experience.\nChallenges and Risks # AI agents are exciting, but they come with real trade-offs.\nHallucinations # If the model misunderstands a task or invents information, the agent may take the wrong action. This is especially risky when tools can send emails, update records, or make purchases.\nLack of Control # Autonomy is useful, but too much autonomy can make behavior hard to predict. Developers need clear boundaries, approvals, and logs.\nCost # Agents can make many API calls. A chatbot might call the model once. An agent might call the model, search the web, query a database, and repeat that cycle several times.\nSafety Concerns # Agents need permission design. Some actions should be read-only. Some should require user approval. Some should not be allowed at all.\nConclusion # Use an AI assistant when the user needs help, explanation, drafting, or suggestions while staying in control. Use an AI agent when the user has a clear goal that requires multiple steps, tool usage, memory, and some level of independent execution.\nThis distinction matters because AI development is moving beyond single chat responses. We are building systems that interact with real tools, data, and workflows. That makes the software more useful, but also more responsible.\nThis is why I find AI agents exciting as a developer. They are not smarter chatbots. 
They are software that can reason, act, and collaborate on real tasks.\n","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/blog/ai-assistants-vs-ai-agents/","section":"Blog","summary":"","title":"AI Assistants vs AI Agents: What Developers Should Know","type":"blog"},{"content":"","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/llm/","section":"Tags","summary":"","title":"LLM","type":"tags"},{"content":"","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/machine-learning/","section":"Tags","summary":"","title":"Machine Learning","type":"tags"},{"content":"","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/tags/rag/","section":"Tags","summary":"","title":"RAG","type":"tags"},{"content":" Retrieval-Augmented Generation: Making LLMs Useful With Your Own Data # Large language models are impressive, but they do not automatically know your data.\nAsk a general LLM about a company\u0026rsquo;s internal return policy, a new product release, or a PDF contract, and it may answer confidently even when it is missing the facts. That is where hallucinations happen. The model can invent details, rely on outdated knowledge, or give a generic answer that sounds right without being useful.\nImagine a customer support bot for an online store. A customer asks:\n\u0026ldquo;Can I return a discounted item after 20 days?\u0026rdquo;\nIf the bot only uses what the model already \u0026ldquo;knows,\u0026rdquo; it might answer, \u0026ldquo;Most stores allow returns within 30 days.\u0026rdquo; But the actual policy could say discounted items must be returned within 14 days. The answer sounds friendly, but it is wrong.\nRetrieval-Augmented Generation, usually called RAG, is designed for this problem.\nWhat Is RAG? # RAG is a pattern where the language model does not answer from memory alone. 
The system first retrieves relevant information from an external data source, then gives that information to the model as context.\nRetrieval finds the most relevant documents or text snippets.\nAugmentation adds those snippets to the prompt.\nGeneration lets the LLM write a clear answer based on that context.\nA useful analogy is an exam. A normal LLM is like taking a closed book exam. A RAG system is more like an open book exam: the model still needs reasoning skills, but it gets to look at the right pages before answering.\nHow RAG Works Step by Step # RAG sounds advanced at first, but the basic flow is easy to understand.\n1. The User Asks a Question # The process starts with a normal question:\n\u0026ldquo;What is our refund policy for discounted items?\u0026rdquo;\nThe user just asks in natural language.\n2. The System Retrieves Relevant Context # Before asking the LLM to answer, the system searches your data: PDFs, help center articles, product documentation, internal notes, or database records.\nDocuments are split into smaller pieces called chunks. Each chunk is converted into an embedding: a list of numbers that represents the meaning of the text. Those embeddings are stored in a vector database. When the user asks a question, the question is also embedded, and the database finds chunks with similar meaning.\n3. The Prompt Is Augmented # Once the relevant chunks are found, they are added to the prompt sent to the LLM.\nUse the following policy to answer the user\u0026#39;s question. If the answer is not in the context, say that you do not know.\nContext: \u0026#34;Discounted items can be returned within 14 days of purchase. Full-price items can be returned within 30 days.\u0026#34;\nQuestion: \u0026#34;Can I return a discounted item after 20 days?\u0026#34;\n4. The LLM Generates the Answer # Finally, the LLM writes a response:\n\u0026ldquo;No. According to the policy, discounted items can be returned within 14 days of purchase. 
After 20 days, the item would no longer be eligible for return.\u0026rdquo;\nThe answer is grounded in retrieved context.\nArchitecture Overview # A simple RAG system has four main parts:\nEmbeddings # Embeddings turn text into numerical vectors. Texts with similar meanings should end up close to each other, even when the words are different.\nVector Database # A vector database stores embeddings and makes similarity search fast. Common examples include Pinecone, Weaviate, Qdrant, Chroma, and Milvus. Its job is to find the text most relevant to a question.\nRetriever # The retriever sends the query to the vector database and selects the best matching chunks. Stronger systems may also filter by metadata or rerank results.\nLLM # The LLM receives the question plus the retrieved context, then produces a useful answer in natural language.\nThe flow looks like this:\nUser question -\u0026gt; Embedding model -\u0026gt; Vector database search -\u0026gt; Relevant document chunks -\u0026gt; Augmented prompt -\u0026gt; LLM -\u0026gt; Final answer\nWhy Use RAG? # RAG solves real application problems without requiring a full model training process.\nThe main benefits are:\nMore accurate answers because the model uses specific source material.\nUp-to-date data because documents can be refreshed without retraining.\nLess hallucination because the model has concrete context.\nNo need for fine-tuning in many knowledge-heavy projects.\nBetter transparency because you can show which sources were used.\nFor developers, RAG feels like normal application architecture: documents, APIs, databases, background jobs, and a user interface.\nSimple Example Project: Chat With Your PDFs # A classic beginner RAG project is \u0026ldquo;chat with your PDFs.\u0026rdquo; The user uploads one or more PDF files, then asks questions:\n\u0026ldquo;What are the main responsibilities listed in this job contract?\u0026rdquo;\nBehind the scenes, the app:\nExtracts text from the PDF.\nSplits the text into chunks. 
Creates embeddings for each chunk.\nStores those embeddings in a vector database.\nConverts the user\u0026rsquo;s question into an embedding.\nRetrieves the most relevant chunks.\nSends the chunks and question to the LLM.\nReturns a readable answer, ideally with source references.\nHere is a small pseudo-code example:\ndef answer_question(question):\n    question_embedding = embed(question)\n    chunks = vector_db.search(question_embedding, top_k=5)\n    prompt = build_prompt(\n        context=chunks,\n        question=question\n    )\n    return llm.generate(prompt)\nThis is not production-ready code, but it shows the shape of the system: prepare the right context before calling the LLM.\nChallenges and Trade-Offs # RAG is useful, but it is not magic.\nLatency # RAG adds work before the LLM responds: embedding the query, searching the vector database, building a prompt, and calling the model. Each step takes time.\nRetrieval Quality # If the retriever finds the wrong chunks, the LLM may answer from the wrong context. Improving retrieval often means testing chunk sizes, adding metadata filters, using hybrid search, or reranking.\nChunking Problems # Documents must be split before they are embedded. If chunks are too small, they may lose context. If chunks are too large, they may include too much noise.\nIf a refund deadline is in one paragraph and an exception is in the next, bad chunking may hide the exception.\nConclusion # RAG and fine-tuning are often mentioned together, but they solve different problems. Use RAG when the model needs access to specific knowledge: internal documentation, changing policies, product catalogs, support articles, or technical documents.\nUse fine-tuning when you want to change behavior or style: a specific tone, a repeated output format, or a narrow task that must be handled consistently.\nIn many projects, RAG is the first thing I would try for knowledge-heavy use cases. 
It is easier to update, easier to inspect, and usually more practical than training a model every time your data changes.\nFor junior developers, RAG is a great learning path because it combines backend development, data processing, search, APIs, and prompt design. For recruiters, it shows that a developer can move beyond demos and build something closer to a real product.\nThis is why I find RAG exciting as a developer. It turns an LLM from a clever text generator into a system that can work with the information users actually care about.\n","date":"20 April 2026","externalUrl":null,"permalink":"/Portfolio/blog/retrieval-augmented-generation/","section":"Blog","summary":"","title":"Retrieval-Augmented Generation: Making LLMs Useful With Your Own Data","type":"blog"},{"content":" Computer Science student in Denmark\nHi, I'm Abbas. I'm building my way into software development. My background is not a straight line, and that is one of the things I value most about it. I have worked as a firefighter, completed military service, and built experience in sales and insurance advising. Today, I am turning that experience into a focused career in tech. View Projects Read Blog What I Bring Discipline Firefighting and military service taught me structure, responsibility, and calm decision-making.\nCommunication Sales and advising taught me how to listen, explain clearly, and understand real user needs.\nConsistency Balancing studies and family life has made long-term progress more important than quick wins.\nMy Developer Focus I am currently studying Computer Science and building a strong technical foundation with Java, object-oriented programming, backend concepts, and practical projects. I enjoy learning how systems fit together: clean code, databases, APIs, and user-facing features. I am especially interested in building software that solves concrete problems and is easy for people to use. 
What I Am Working On Improving my Java and object-oriented programming skills through hands-on projects. Writing about AI, software development, and what I learn while building. Turning my portfolio into a clear picture of how I think, learn, and solve problems. Why This Matters To Me I am not just changing careers. I am building a future with intention. My goal is to become a reliable developer who brings technical skill, maturity, and real-world perspective to a team. Tech Stack Backend Java C# Python Spring Boot Backend Frameworks Javalin Thymeleaf Hibernate Frontend JavaScript React CSS Databases SQL PostgreSQL MySQL Database Design DevOps Docker Docker Hub GitHub Actions Watchtower Caddy CI/CD Tools Bash PowerShell AI and Data Business Intelligence PowerBI Jupyter Notebook Data Analysis Data Visualization Machine Learning ","externalUrl":null,"permalink":"/Portfolio/about/","section":"About","summary":"","title":"About","type":"about"},{"content":"","externalUrl":null,"permalink":"/Portfolio/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/Portfolio/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"Here you can find my projects, which I developed during my studies in Computer Science.\n","externalUrl":null,"permalink":"/Portfolio/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":"","externalUrl":null,"permalink":"/Portfolio/series/","section":"Series","summary":"","title":"Series","type":"series"}]