Monday, December 29, 2025

New top story on Hacker News: Show HN: Per-instance TSP Solver with No Pre-training (1.66% gap on d1291)

Show HN: Per-instance TSP Solver with No Pre-training (1.66% gap on d1291)
5 by jivaprime | 0 comments on Hacker News.
OP here. Most deep learning approaches for TSP rely on pre-training with large-scale datasets. I wanted to see if a solver could learn "on the fly" for a specific instance, without any priors from other problems. I built a solver using PPO that learns from scratch per instance. It achieved a 1.66% gap on TSPLIB d1291 in about 5.6 hours on a single A100.

The Core Idea: My hypothesis was that while optimal solutions are mostly composed of 'minimum edges' (nearest-neighbor edges), the actual difficulty comes from a small number of 'exception edges' outside that local scope. Instead of pre-training, I designed an inductive bias based on the topological/geometric structure of these exception edges. The agent receives guides on which edges are likely promising based on micro/macro structures, and PPO fills in the gaps through trial and error. It is interesting to see RL reach this level without a dataset.

I have open-sourced the code and a Colab notebook for anyone who wants to verify the results or tinker with the 'exception edge' hypothesis.

Code & Colab: https://ift.tt/BNY5jbS

Happy to answer any questions about the geometric priors or the PPO implementation!
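The 'minimum edge' vs. 'exception edge' split is easy to probe on any instance: take a candidate tour and count how many of its edges stay inside each city's nearest-neighbor set. Below is a minimal sketch of that check, not code from the linked repo; the 2-D coordinates, the edge_breakdown name, and the k=10 cutoff are my own illustrative choices.

    import numpy as np

    def edge_breakdown(coords, tour, k=10):
        """Split a tour's edges into 'minimum edges' (one endpoint is among the
        other's k nearest neighbors) and 'exception edges' (everything else)."""
        coords = np.asarray(coords, dtype=float)
        n = len(coords)
        dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        knn = np.argsort(dist, axis=1)[:, 1:k + 1]  # k nearest neighbors, self excluded
        minimum, exception = [], []
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            if b in knn[a] or a in knn[b]:
                minimum.append((a, b))
            else:
                exception.append((a, b))
        return minimum, exception

    # Toy usage: random cities and an unoptimized identity tour.
    rng = np.random.default_rng(0)
    pts = rng.random((200, 2))
    mins, excs = edge_breakdown(pts, list(range(200)), k=10)
    print(len(mins), "minimum edges,", len(excs), "exception edges")

On a good tour the exception list should be short, which is what makes those few edges worth a dedicated prior in the first place.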

New top story on Hacker News: The production bug that made me care about undefined behavior

The production bug that made me care about undefined behavior
3 by birdculture | 0 comments on Hacker News.


Thursday, December 25, 2025

New top story on Hacker News: UBlockOrigin and UBlacklist AI Blocklist

UBlockOrigin and UBlacklist AI Blocklist
12 by _____k | 0 comments on Hacker News.


New top story on Hacker News: URL Pattern API

URL Pattern API
8 by thunderbong | 2 comments on Hacker News.


New top story on Hacker News: Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat

Show HN: Lamp Carousel – DIY kinetic sculpture powered by lamp heat
17 by Evidlo | 0 comments on Hacker News.
I wanted to share this fun craft activity for the holidays that I've been doing with my family over the last few years. I came up with these while cutting up some cans trying to make an aluminum version of paper spinners. There are a variety of shapes that work, but generally bigger+lighter spinners are better. Also incandescent bulbs are the best, but LEDs work too. They remind me of candle carousels I would see at my grandparents' house during Christmas. Let me know what you think!

Saturday, December 20, 2025

New top story on Hacker News: Show HN: HN Wrapped 2025 - an LLM reviews your year on HN

Show HN: HN Wrapped 2025 - an LLM reviews your year on HN
10 by hubraumhugo | 3 comments on Hacker News.
I was looking for a fun project to play around with the latest Gemini models and ended up building this :)

Enter your username and get:
- Generated roasts and stats based on your HN activity in 2025
- Your personalized HN front page from 2035 (inspired by a recent Show HN [0])
- An xkcd-style comic of your HN persona

It uses the latest gemini-3-flash and gemini-3-pro-image (nano banana pro) models, which deliver pretty impressive and funny results. A few examples:
- dang: https://ift.tt/v547sHo
- myself: https://ift.tt/7E0XYJb

Give it a try and share yours :) Happy holidays!

[0] https://ift.tt/4LUi8DR
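For anyone curious what the plumbing might look like, here is a minimal sketch of the two calls involved: pulling a user's public activity from the Hacker News Firebase API and asking a Gemini model for the "wrapped" roast. This is not the project's code; the prompt is made up, and the model id is copied from the post, so swap it for whatever your API key can access.

    import requests
    from google import genai  # pip install google-genai; expects GEMINI_API_KEY to be set

    HN = "https://hacker-news.firebaseio.com/v0"

    def fetch_recent_items(username, limit=50):
        """Fetch titles/texts of a user's most recent public HN submissions."""
        user = requests.get(f"{HN}/user/{username}.json").json()
        items = []
        for item_id in (user.get("submitted") or [])[:limit]:
            item = requests.get(f"{HN}/item/{item_id}.json").json()
            if item and not item.get("deleted"):
                items.append(item.get("title") or item.get("text") or "")
        return items

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    activity = "\n".join(fetch_recent_items("dang"))
    resp = client.models.generate_content(
        model="gemini-3-flash",  # model name taken from the post; adjust as needed
        contents=f"Write a short, playful 'HN Wrapped 2025' roast of this activity:\n{activity}",
    )
    print(resp.text)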

Friday, December 19, 2025

New top story on Hacker News: Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)

Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)
2 by linggen | 1 comment on Hacker News.
Hi HN,

Working with multiple projects, I got tired of re-explaining our complex multi-node system to LLMs. Documentation helped, but plain text is hard to search without indexing and doesn't work across projects. I built Linggen to solve this.

My Workflow: I use the Linggen VS Code extension to "init my day." It calls the Linggen MCP to load memory instantly. Linggen indexes all my docs like it's remembering them. It is awesome. One click loads the full architectural context, removing the "cold start" problem.

The Tech:
- Local-First: Rust + LanceDB. Code and embeddings stay on your machine. No accounts required.
- Team Memory: Index knowledge so teammates' LLMs get context automatically.
- Visual Map: See file dependencies and refactor "blast radius."
- MCP-Native: Supports Cursor, Zed, and Claude Desktop.

Linggen saves me hours. I'd love to hear how you manage complex system context!

Repo: https://ift.tt/QHqnPs5
Website: https://linggen.dev
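Linggen itself is Rust + LanceDB, but the local-first loop it describes (chunk docs, keep embeddings on disk, search them at question time) can be sketched in a few lines with LanceDB's Python bindings. This is purely illustrative: the toy embed() is a deterministic stand-in, not how Linggen actually embeds, and the chunks are made up.

    import hashlib
    import lancedb  # pip install lancedb; stores everything in a local directory

    def embed(text, dim=64):
        """Toy deterministic embedding; replace with a real local embedding model."""
        h = hashlib.sha256(text.encode()).digest()
        return [b / 255.0 for b in (h * (dim // len(h) + 1))[:dim]]

    db = lancedb.connect("./memory")  # local-first: no remote service, no account
    chunks = [
        "auth-service talks to user-db over gRPC",
        "billing-worker consumes events from the payments queue",
    ]
    table = db.create_table(
        "docs",
        data=[{"text": c, "vector": embed(c)} for c in chunks],
        mode="overwrite",
    )

    # At question time, embed the query and pull the closest chunks as LLM context.
    hits = table.search(embed("how does auth reach the database?")).limit(2).to_list()
    print([h["text"] for h in hits])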

Thursday, December 18, 2025

New top story on Hacker News: Show HN: Paper2Any – Open tool to generate editable PPTs from research papers

Show HN: Paper2Any – Open tool to generate editable PPTs from research papers
5 by Mey0320 | 0 comments on Hacker News.
Hi HN,

We are the OpenDCAI group from Peking University. We built Paper2Any, an open-source tool designed to automate the "Paper to Slides" workflow, based on our DataFlow-Agent framework.

The Problem: Writing papers is hard, but creating professional architecture diagrams and slides (PPTs) is often more tedious. Most AI tools just generate static images (PNGs) that are impossible to tweak for final publication.

The Solution: Paper2Any takes a PDF, text, or sketch as input, understands the research logic, and generates fully editable PPTX (PowerPoint) files and SVGs. We prioritize flexibility and fidelity, allowing you to specify page ranges, switch visual styles, and preserve original assets.

How it works:
1. Multimodal Reading: Extracts text and visual elements from the paper. You can now specify page ranges (e.g., the Method section only) to focus the context and reduce token usage.
2. Content Understanding: Identifies core contributions and structural logic.
3. PPT Generation: Instead of generating one flat image, it generates independent elements (blocks, arrows, text) with selectable visual styles and organizes them into a slide layout (see the sketch below).

Links:
- Demo: http://dcai-paper2any.cpolar.top/
- Code (DataFlow-Agent): https://ift.tt/nZwmeRz

We'd love to hear your feedback on the generation quality and the agent workflow!
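The "independent elements instead of one flat image" point maps directly onto how PPTX files are built: every block, arrow, and text box is its own shape. A minimal sketch with the python-pptx library, just to illustrate the output format; Paper2Any's actual generator lives in DataFlow-Agent, and the element names below are invented.

    from pptx import Presentation  # pip install python-pptx
    from pptx.util import Inches
    from pptx.enum.shapes import MSO_CONNECTOR

    prs = Presentation()
    slide = prs.slides.add_slide(prs.slide_layouts[6])  # blank layout

    # Each diagram element is an independent shape, so it stays editable in PowerPoint.
    encoder = slide.shapes.add_textbox(Inches(1), Inches(1), Inches(3), Inches(1))
    encoder.text_frame.text = "Encoder"

    decoder = slide.shapes.add_textbox(Inches(5), Inches(1), Inches(3), Inches(1))
    decoder.text_frame.text = "Decoder"

    # A connector between the two blocks, also selectable and movable after export.
    slide.shapes.add_connector(
        MSO_CONNECTOR.STRAIGHT, Inches(4), Inches(1.5), Inches(5), Inches(1.5)
    )

    prs.save("paper_slides.pptx")  # opens with every element individually editable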

Thursday, December 11, 2025

New top story on Hacker News: Show HN: SIM – Apache-2.0 n8n alternative

Show HN: SIM – Apache-2.0 n8n alternative
16 by waleedlatif1 | 0 comments on Hacker News.
Hey HN, Waleed here. We're building Sim ( https://sim.ai/ ), an open-source visual editor to build agentic workflows. Repo here: https://ift.tt/BzihyYR . Docs here: https://docs.sim.ai . You can run Sim locally using Docker, with no execution limits or other restrictions.

We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.

We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:
- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...
- Tool calling with granular control: forced, auto
- Agent memory: conversation memory with sliding-window support (by last n messages or tokens)
- Trace spans: detailed logging and observability for nested workflows and tool calling
- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents
- Workflow deployment versioning with rollbacks
- MCP support, Human-in-the-loop block
- Copilot to build workflows using natural language (we just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)

Under the hood, the workflow is a DAG with concurrent execution by default: nodes run as soon as their dependencies (upstream blocks) are satisfied (see the sketch below). Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.

Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.

We're currently working on our own MCP server and the ability to deploy workflows as MCP servers.

Would love to hear your thoughts and where we should take it next :)

[1] https://ift.tt/nkIgdwm
[2] https://ift.tt/jBd3CKQ
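The "run a node as soon as its upstream blocks are satisfied" model is essentially a concurrent traversal of the DAG. Here's a minimal sketch of that idea in Python; it is not Sim's actual engine, and the graph, block names, and payloads are made up.

    import asyncio

    # Hypothetical workflow: each block lists its upstream dependencies.
    GRAPH = {
        "fetch_issue":   [],
        "summarize":     ["fetch_issue"],
        "label":         ["fetch_issue"],
        "post_to_slack": ["summarize", "label"],
    }

    async def run_block(name, inputs):
        """Stand-in for executing a block (API call, agent step, etc.)."""
        await asyncio.sleep(0.1)
        return f"{name}({', '.join(inputs) or 'start'})"

    async def run_workflow(graph):
        tasks: dict[str, asyncio.Task] = {}

        async def run_node(name):
            # Wait for every upstream block; siblings with the same parent run concurrently.
            upstream = await asyncio.gather(*(tasks[dep] for dep in graph[name]))
            return await run_block(name, upstream)

        # Graph keys are listed in dependency order, so each node's upstream tasks exist.
        for name in graph:
            tasks[name] = asyncio.create_task(run_node(name))
        return {name: await task for name, task in tasks.items()}

    print(asyncio.run(run_workflow(GRAPH)))

Here "summarize" and "label" start the moment "fetch_issue" finishes, and "post_to_slack" joins on both, which is the fan-out/join behavior described above.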

Tuesday, December 2, 2025