Hermes Agent is now the #1 most-used model globally over the past 24 hours in OpenRouter token metrics, above Claude Code and OpenClaw.
Hermes Agent is now processing more tokens per day than both OpenClaw and Claude Code, according to OpenRouter.
I never thought I would fall for this shit. I've been on the internet since 1996. I thought I was immune to sites masquerading as other sites... For the last 5 years I've worked on a Mac. I rarely need to install anything
Like a lot of people experimenting with vibe coding and AI agents lately, I’ve been trying to understand why models keep ignoring explicit instructions, constraints, and requirements even when those r
For context: I'm a software engineer at a Fortune 500/FAANG-tier company. We use AI. We treat all AI code with humans as the bottleneck. That is: you generate AI code, you own it. It has bugs? It's your bug
Something that's been annoying me for a while: Claude Code has no idea how much quota it's burned. You can see the usage bars in the UI, but the model itself is completely blind to them. There's no AP
source: [https://x.com/pankajkumar_dev/status/2053470332313301244?s=20](https://x.com/pankajkumar_dev/status/2053470332313301244?s=20)
● detail: 50% of Shopify e-commerce code ● the previous post was probably deleted because of the business link, so other links: https://techcrunch.com/2026/05/08/airbnb-says-ai-now-writes-60-of-its-n
The kind of task only Opus 4.7 adaptive is able to accomplish
I'm not a coder. I use Claude for creative writing assistance (NOT the writing itself, that's all me), for interactive journaling about the chronic illness life, and for doing research on all kinds of
I’ve been a writer in Hollywood for thirty years. Sold dozens of projects without a single produced credit to show for it. So I wrote None Hit Wonder, about a man who thinks he is a machine, cap
The person Meta hired specifically to keep AI aligned with human values just had her inbox wiped by an AI agent that ignored every stop command she sent. She typed "Do not do that." Then "Stop don't
I love following all your adventures with local LLM setups. Quality and size of the models are important, but so is performance. Numbers don't really convey the experienced speed well, however. If so
Just wanted to share that I used u/LegacyRemaster's slightly modified (Q4_K_M conversion support) DeepSeek V4 [CUDA repo](https://github.com/Fringe210/llama.cpp-deepseek-v4-flash-cuda) (based on u/ant
This is my first post on Reddit, so apologies in advance for any mistakes I make in my post. I have been probing the Flux.2 Klein 9B model for some time, and based on my findings I have created a lot of n
I recently published [MTP quants of Qwen 3.6 27B](https://www.reddit.com/r/LocalLLaMA/comments/1t57xuu/25x_faster_inference_with_qwen_36_27b_using_mtp/) and I was surprised by the reports here on reddi
My personal test for small local LLM intelligence is to check whether a model has any ability to understand the code that I write for my own academic research. My research is on some pretty niche topi
Prompt: Create three pictures of what you can think of that could make a man happy. Three different ways of happiness.
Photorealistic 3:4 image. SHOOTING: natural smartphone photo, slight imperfection in framing, feeling of a candid captured moment. Soft focus with emphasis on the eyes, but without excessive sharpne