<h1>The API Ollama was missing to compete with ChatGPT is finally here!</h1>
<p><em>Published 2025-09-26</em></p>
<p>What's really annoying about the AIs you can run locally, with Ollama for instance, is that if you ask them about anything too recent, they serve up stale figures from 2023 with all the confidence of a used-car salesman. Well, that's over, because <a href="https://ollama.com/blog/web-search">Ollama just released a web search API</a> that finally lets your local models access fresh information from the web.</p>
<p>Woohoo o/ !</p>
<p>Dubbed <strong>Ollama Web Search</strong>, this REST API lets your models search the web in real time, so they no longer have to make do with training data frozen in time. <a href="https://docs.ollama.com/web-search">According to the official docs</a>, the API provides "<em>the latest information from the web to reduce hallucinations and improve accuracy</em>".</p>
<p>In short, your local AI becomes as up to date as ChatGPT, without sending your personal data to OpenAI.</p>
<p>Models compatible with this new feature include qwen3, Llama, gpt-oss (OpenAI's open-weight model), deepseek-v3.1, and plenty of others. <a href="https://github.com/ollama/ollama">And according to early community tests</a>, qwen3 and gpt-oss are actually quite good at exploiting it: the model realizes it is missing a piece of information, runs a search, analyzes the results, and gives you a documented answer!</p>
<p>It's seriously impressive! You can now beef up your local AI scripts / bots / tools so they can monitor things published online, compare them, generate summaries from websites, fact-check, or fill in missing information, etc.</p>
<p>So how do you use it? Well, it's Friday night and I can't be bothered to shoot a video tutorial, so even though I'll probably cover all this in detail soon for <a href="https://patreon.com/korben">my beloved Patreon supporters</a>, here are a few pointers anyway.</p>
<p>First, you need to create an <a href="https://ollama.com/signin">Ollama API key</a>.</p>
<p>The docs explain that you get a generous free trial to start with, but if you need more, you'll have to take out a small <a href="https://ollama.com/cloud">Ollama Cloud</a> subscription…</p>
<p>Once you have your key, export it into your environment like this:</p>
<pre><code class="language-shell">export OLLAMA_API_KEY="your_key_here"</code></pre>
<p>The easiest way to test it afterwards is with curl:</p>
<pre><code class="language-shell">curl https://ollama.com/api/web_search \
  --header "Authorization: Bearer $OLLAMA_API_KEY" \
  -d '{"query": "latest CVE vulnerabilities January 2025"}'</code></pre>
<p>But let's be honest, we'd rather use Python because it's just cooler 😉. So here's a basic script that compares an answer with and without web search:</p>
<pre><code class="language-python">from ollama import chat, web_search, web_fetch

model = "qwen3:4b"

# 1. Without web search
response_classic = chat(  # chat, not ollama.chat
    model=model,
    messages=[{
        "role": "user",
        "content": "What are the features of React 19?"
    }]
)
print("Without web search:", response_classic.message.content[:500])

# 2. With web search
search_results = web_search("React 19 features latest news")
print("Results:", search_results)

# 3. With tools
available_tools = {'web_search': web_search, 'web_fetch': web_fetch}
messages = [{
    "role": "user",
    "content": "Use web search to tell me the latest React 19 features"
}]

response_with_tools = chat(
    model=model,
    messages=messages,
    tools=[web_search, web_fetch],
    think=True
)

# Access the tool_calls
if response_with_tools.message.tool_calls:
    for tool_call in response_with_tools.message.tool_calls:
        function_to_call = available_tools.get(tool_call.function.name)
        if function_to_call:
            args = tool_call.function.arguments
            result = function_to_call(**args)
            print(f"Tool used: {tool_call.function.name}")
            print(f"Result: {str(result)[:500]}...")

print("Final answer:", response_with_tools.message.content)</code></pre>
<p>Performance then varies by model: qwen3:4b is perfect for real-time use at around 85 tokens/second.</p>
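<p>One caveat on the script above: it prints the tool results but never sends them back to the model, so the "final answer" was generated before the model saw what the tools returned. To get a grounded answer you need a small agent loop that appends each tool result as a <code>"tool"</code> message and calls <code>chat</code> again. Here's a minimal sketch of such a loop; <code>run_tool_loop</code> is a hypothetical helper of mine, not part of the ollama package, and the <code>tool_name</code> field follows the shape used in Ollama's own examples, which may vary with the client version:</p>

```python
# Hypothetical helper (not part of the ollama package): a minimal agent loop
# that keeps calling the model until it stops requesting tools.
# chat_fn and tools are injected so the loop itself is client-agnostic.
def run_tool_loop(chat_fn, model, messages, tools, max_rounds=5):
    response = None
    for _ in range(max_rounds):
        response = chat_fn(model=model, messages=messages, tools=list(tools.values()))
        calls = getattr(response.message, "tool_calls", None)
        if not calls:
            break  # no more tool requests: this is the final answer
        messages.append(response.message)  # keep the assistant turn in history
        for call in calls:
            fn = tools.get(call.function.name)
            result = fn(**call.function.arguments) if fn else "unknown tool"
            # Feed the result back; truncating keeps the context window manageable.
            messages.append({"role": "tool",
                            "content": str(result)[:8000],
                            "tool_name": call.function.name})
    return response
```

<p>With the real client you would call it as <code>run_tool_loop(chat, "qwen3:4b", messages, {'web_search': web_search, 'web_fetch': web_fetch})</code>.</p>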
<p>GPT-OSS:120b is slower but produces high-quality results, ideal for production. For local development I'd recommend qwen3:8b: it's the right trade-off between speed and smarts.</p>
<p>The cool part is that you can now build specialized agents. Say, a DevOps agent that watches the CVEs of your dependencies, a Marketing agent that analyzes trends in your industry, or a Support agent that keeps a knowledge base up to date.</p>
<p>Here's an example:</p>
<pre><code class="language-python">from ollama import chat, web_search

class SecurityAgent:
    def __init__(self):
        self.model = "qwen3:4b"

    def check_vulnerabilities(self, technologies):
        rapport = "🛡️ SECURITY REPORT\n\n"

        for tech in technologies:
            # Direct search for recent CVEs
            results = web_search(f"{tech} CVE vulnerabilities 2025 critical")

            # Ask the model to analyze them
            response = chat(
                model=self.model,
                messages=[{
                    "role": "user",
                    "content": f"Summarize the critical vulnerabilities of {tech}: {results}"
                }]
            )

            rapport += f"### {tech}\n{response.message.content}\n\n"

        return rapport

# Usage
agent = SecurityAgent()
rapport = agent.check_vulnerabilities(["Node.js", "PostgreSQL", "Docker"])
print(rapport)</code></pre>
<p>Now, to optimize all this and not burn through your API quota, here are a few classic tricks… First, cache the results. Second, be specific in your queries: "React hooks", for example, will pull in a lot of useless stuff, whereas "React 19 new hooks useActionState" will be far more effective. You can really cut down the number of requests with smart prompt engineering.</p>
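<p>That first trick, caching, fits in a few lines. This is a hypothetical in-process memoizer of mine, not anything from the ollama package; the wrapper name and the 15-minute TTL are illustrative choices:</p>

```python
import time

# Hypothetical cache wrapper around a search function (illustrative, not Ollama API).
_cache = {}
CACHE_TTL = 15 * 60  # seconds; arbitrary freshness window

def cached_web_search(query, search_fn):
    now = time.time()
    hit = _cache.get(query)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]  # still fresh: reuse stored results, no API call
    results = search_fn(query)
    _cache[query] = (now, results)
    return results
```

<p>Call it as <code>cached_web_search("React 19 new hooks useActionState", web_search)</code>; identical queries within the TTL no longer count against your quota.</p>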
<p>For example, instead of letting the model search on its own, guide it: "<em>Check only the official React docs</em>" rather than "<em>Find info about React</em>".</p>
<p>And since Ollama supports MCP Server, Cline, Codex and Goose, it's great because you can also plug your AI assistant straight into your IDE, Slack, or Discord. Yep, you'll finally be able to code a Discord bot that automatically fact-checks your colleagues' dubious claims. The dream!</p>
<p>To go further, you can also combine web search with fetching specific pages. The <code>web_fetch</code> API retrieves the content of a precise URL, which is handy for analyzing a doc or an article in depth:</p>
<pre><code class="language-python">from ollama import web_search, web_fetch, chat

# 1. Search for relevant articles
search_results = web_search("React 19 vs Vue 3 comparison 2025")
top_url = search_results.results[0]['url']  # or .url depending on the result type
print(f"📰 Article found: {search_results.results[0]['title']}")

# 2. Fetch the full page content
page_content = web_fetch(top_url)
print(f"📄 {len(page_content.content)} characters retrieved")

# 3. In-depth analysis of the content
response = chat(
    model="qwen3:4b",  # or "gpt-oss" if available
    messages=[{
        "role": "user",
        "content": f"""
        Analyze this technical comparison:
        {page_content.content[:4000]}

        Give me:
        1. The key points of each framework
        2. The winner according to the article
        3. The recommended use cases
        """
    }]
)

print(f"\n🔍 Analysis:\n{response.message.content}")</code></pre>
<p>Of course, the search will sometimes return irrelevant results, especially if your query is vague, and the model on its end may misread the results if it's too small. But compared to an AI telling you that Windows 11 doesn't exist yet, we've come a long way, don't you think?</p>
<p>I hope Ollama will eventually add support for custom sources too, because it would be really cool to index, say, your own docs or your own emails and search through them… In the meantime, this new API finally offsets the problem of models with out-of-date knowledge, and that alone is huge!</p>
<p>Over to you now!</p>
<p><a href="https://ollama.com/blog/web-search">Source</a></p>