A headline feature of LLMs is “understanding” text. Pretty much all knowledge work can be abstracted to a simple instruction: answer my question using the relevant context. The LLM hype claims this is a solved problem; I’ll show you that it’s very much not.
In this part, we’ll be looking at the Model Context Protocol (MCP). We’ll cover its features (tools, resources, prompts, roots, sampling), how the protocol works, and how to communicate with an MCP server, both locally and remotely.
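To make that concrete, here is a minimal sketch of an MCP server using the official Python SDK (the `mcp` package and its FastMCP helper); the server name, tool, and resource below are invented for illustration:

```python
# Minimal MCP server sketch built on the official Python SDK's
# FastMCP helper; the names here are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers (exposed to the client as an MCP tool)."""
    return a + b

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """A templated resource the client can read by URI."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # stdio covers the local case; remote servers typically use an
    # HTTP-based transport instead.
    mcp.run(transport="stdio")
```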
Vibe coding has taken the world by storm, and I’ve realized that I am behind. Now I'm correcting that mistake.
In this tutorial we’ll take a look at how to add LLM (Mistral AI) capabilities to neovim (since I don’t want to use Cursor).
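The tutorial wires things into neovim itself, but underneath it all sits a plain HTTP call to Mistral’s chat completions endpoint. A rough Python sketch of that call (the `ask_mistral` helper is my own invention; only the endpoint and payload shape follow Mistral’s public API):

```python
# Sketch of the raw request a neovim integration would make;
# the endpoint and JSON shape come from Mistral's public API,
# while the helper function itself is illustrative.
import os
import requests

def ask_mistral(prompt: str, model: str = "mistral-small-latest") -> str:
    resp = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={"model": model,
              "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_mistral("Summarize neovim's quickfix list in one sentence."))
```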
A talk about security best practices for Copilot solutions, taking inspiration from Microsoft’s GitHub Copilot and the OWASP Top 10 for LLMs.
Are there any options for preventing GitHub Copilot from processing our secrets?