mentalgear 20 hours ago

I applaud the effort for the local (lo-fi) space! Yet, reading over the example linked in the docs (which does not seem cherry-picked, kudos for that!), my impression is that the resulting document is rather messy [1].

I think what's missing is one (or more) intermediate steps, possibly a graph database (e.g. [2]) in which the LLM can place all its information, see relevant interconnections, query to question itself, and then generate the final report.
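
As a very rough sketch of what that intermediate step could look like (assuming Kuzu's Python API; the Fact/SUPPORTS schema and the example fact below are purely illustrative):

    import kuzu

    # Staging area for extracted facts before the report is written
    db = kuzu.Database("./research_facts")
    conn = kuzu.Connection(db)
    conn.execute("CREATE NODE TABLE Fact(id INT64, text STRING, source STRING, PRIMARY KEY (id))")
    conn.execute("CREATE REL TABLE SUPPORTS(FROM Fact TO Fact)")

    # The LLM dumps each extracted fact (with its source) into the graph ...
    conn.execute("CREATE (:Fact {id: 1, text: 'Model X was released in 2024', source: 'https://example.org'})")

    # ... and can later query the interconnections before drafting the report
    result = conn.execute("MATCH (a:Fact)-[:SUPPORTS]->(b:Fact) RETURN a.text, b.text")
    while result.has_next():
        print(result.get_next())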

(Maybe the final report could be an interactive HTML file that the user can ask questions of, or edit themselves.)

There's also a similar open deep-research tool called onyx [3], which I think has better UI/UX, albeit not local. Maybe the author could consider porting that to local instead of rolling and maintaining another deep-research tool themselves?

I'm saying this not because I think it's a bad project, but because there are a ton of open deep-research projects which I'm afraid will just fizzle out; it would be better if people joined forces, each working on the aspects they care most about (e.g. the local aspect, RAG strategies, etc.).

[1] https://github.com/LearningCircuit/local-deep-research/blob/...

[2] "In-Browser Graph RAG with Kuzu-WASM and WebLLM" https://news.ycombinator.com/item?id=43321523

[3] https://github.com/onyx-dot-app/onyx

  • TeMPOraL 10 hours ago

    > I think what's missing is one (or more) intermediate steps, possibly a graph database (e.g. [2]) in which the LLM can place all its information, see relevant interconnections, query to question itself, and then generate the final report.

    Quickly, productize this (and call it DeepRAG, or DERP) before it explodes in late 2025 - you may just beat the market to it!

    See: https://news.ycombinator.com/item?id=43267539

jeffreyw128 14 hours ago

This is cool!

If you want to add embeddings over the internet as a source, you should try out exa.ai. It includes: Wikipedia, tens of thousands of news feeds, GitHub, 70M+ papers including all of arXiv, etc.
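
Roughly, with the exa_py Python client (a sketch only - check the docs for the exact parameters):

    # pip install exa_py
    from exa_py import Exa

    exa = Exa(api_key="YOUR_EXA_API_KEY")

    # Neural search that also returns page text, usable as a RAG source
    results = exa.search_and_contents(
        "evaluation methods for retrieval-augmented generation",
        num_results=5,
        text=True,
    )
    for r in results.results:
        print(r.title, r.url)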

disclaimer: I am one of the founders (:

  • learningcircuit 9 hours ago

    I will add it. It's very easy to integrate new search engines.

  • nhggfu 10 hours ago

    looks siiiick. congrats + good luck

CGamesPlay 6 hours ago

I tried this out, but I hit so many errors that I could never generate a report. There is no way to resume a failed generation, so it seems like if any API call fails, even 10 minutes into the generation, you have to start over from scratch.

learningcircuit a day ago
  • sinenomine a day ago

    You could be the first if you were to develop an eval (preferably automated, with an LLM as judge) and compare local deep research with Perplexity's, OpenAI's, and DeepSeek's implementations on high-information questions.

    • learningcircuit 20 hours ago

      How do they evaluate the quality of the report? It's one of the most important things for me.

      • mentalgear 17 hours ago

        Given a benchmark corpus, the evaluation criteria could be:

        - Facts extracted: the number of relevant facts extracted from the corpus

        - Interpretations: based on the facts, the % of correct interpretations made

        - Correct predictions: based on the above, the % of correct extrapolations/interpolations/predictions made

        The ground truth could be a JSON file per example. (If the solution you want to benchmark uses a graph DB, you could compare these aspects with an LLM as judge.)
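
        A rough sketch of how that could be scored, assuming a per-example ground-truth JSON and any LLM callable as the judge (all names below are illustrative):

            # Ground truth per example, e.g. example_001.json:
            # {"facts": [...], "interpretations": [...], "predictions": [...]}
            import json

            def judge(claim, report, llm):
                # Ask the judge LLM whether the report states or supports the claim
                prompt = (f"Report:\n{report}\n\nClaim: {claim}\n"
                          "Is this claim stated or supported by the report? Answer yes or no.")
                return llm(prompt).strip().lower().startswith("yes")

            def score(report, truth_path, llm):
                truth = json.load(open(truth_path))
                return {key: sum(judge(c, report, llm) for c in truth[key]) / len(truth[key])
                        for key in ("facts", "interpretations", "predictions")}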

        ---

        The actual writing is more about formal/business/academic style, which I find less relevant for a benchmark.

        However, I would find it crucial to run a "reverse RAG" over the generated report to ensure each claim has a source. [0]

        [0] https://venturebeat.com/ai/mayo-clinic-secret-weapon-against...
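
        A "reverse RAG" pass over the generated report could look roughly like this (retriever and llm are assumed callables, not any particular library):

            def reverse_rag(report_claims, retriever, llm):
                # For each claim in the report, look for supporting passages
                # in the source corpus and flag anything unsupported.
                unsupported = []
                for claim in report_claims:
                    passages = "\n".join(retriever(claim, k=3))
                    verdict = llm(f"Passages:\n{passages}\n\nClaim: {claim}\n"
                                  "Do the passages support this claim? Answer yes or no.")
                    if not verdict.strip().lower().startswith("yes"):
                        unsupported.append(claim)
                return unsupported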

bravura 17 hours ago

For web search, also consider the Kagi and Tavily APIs.

throwaway24681 17 hours ago

Looks very cool. How does this compare to the RAG features provided by open-webui?

There is web search and a way to embed documents, but so far it seems like the results are subpar as details are lost in embeddings. Is this much better?

  • learningcircuit 15 hours ago

    Give me a question and I can give you the output, so you can compare.

    • throwaway24681 14 hours ago

      I tried it myself. It looks like this can do a lot more than open-webui's web search in terms of detail, which sounds useful - thanks for making it open source.

      It seems to have a weird behavior of specifying a date when I didn't ask for it - is this expected? Also, I feel like searching "questions" is not optimal for most search engines; it should instead search in terms of keywords.

      Also, I wish there were a more informative log at a slightly higher level - I don't need to see every request being made, but I do want to see a summary of what's happening at each step, like the prompt used, the result, and the new search being done.

      On another note, for local models, reasoning models have significant advantages over non-reasoning models. Can they be used for this?

      • learningcircuit 12 hours ago

        Very good ideas - I will try to include them.

        Thinking models... you can use them. In fact, I started the project with them, but I'm not sure they help much for this task. They definitely make it slower.

wahnfrieden 20 hours ago

Is anyone using (local) LLMs to directly search for (by scanning over) relevant materials from a corpus rather than relying on vector search?

  • suprjami 19 hours ago

    Generally this fails.

    Most LLMs lose the ability to track facts over about 20k words of content; the best can manage maybe 40k words.

    Look for "needle" benchmark tests, as in needle-in-haystack.

    Not to mention the memory requirements of such a huge context like 128k or 1M tokens. Only people with enterprise servers at home could run that locally.

    • learningcircuit 18 hours ago

      Very good answer. It is very hard with small LLMs.

    • wahnfrieden 7 hours ago

      What about scanning over chunks of data to collect matches iteratively? That's what I meant, rather than loading up the full context limit.
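
      Something like this, as a minimal sketch (llm here is any local model callable; the chunk size is just a guess to stay well under the context limit):

          def scan_corpus(docs, query, llm, chunk_words=1000):
              # Map the LLM over fixed-size chunks and keep only the chunks
              # it judges relevant: iterative filtering, no embeddings.
              matches = []
              for doc in docs:
                  words = doc.split()
                  for i in range(0, len(words), chunk_words):
                      chunk = " ".join(words[i:i + chunk_words])
                      verdict = llm(f"Question: {query}\n\nText:\n{chunk}\n\n"
                                    "Does this text help answer the question? Answer yes or no.")
                      if verdict.strip().lower().startswith("yes"):
                          matches.append(chunk)
              return matches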

alchemist1e9 18 hours ago

Nice work!

I've been thinking recently that a local collection of curated, focused, structured information, pre-processed for RAG, might be a good complement to this dynamic searching approach.

I see this uses LangChain; it might be worth checking into txtai.

https://neuml.github.io/txtai/examples/
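
For reference, the txtai basics are roughly this (the exact API may differ between versions):

    # pip install txtai
    from txtai import Embeddings

    # content=True stores the original text alongside the vectors
    embeddings = Embeddings(content=True)
    embeddings.index(["curated note on topic A", "structured summary of topic B"])

    print(embeddings.search("what do we know about topic A?", limit=3))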

ein0p 12 hours ago

Is there some kind of tool which would provide an AI search experience _and mix in the contents of my bookmarks_ (that is, fetch/cache/index/RAG the contents of the pages those bookmarks point to) when creating the report? Bookmarking is a useless dumpster fire right now. This could make it useful again.

Currently, the failure mode I see quite often in e.g. OpenAI's deep research is that it sources its answer from an obviously low-authority source and provides a reference to it as if it were a scientific journal. The answer gets screwed up by that as well, because such sources rarely contain anything of value, and even if the other sources are high quality, low-quality sources mess everything up.

Emphasizing the content I've already curated (via bookmarks) could significantly boost the SNR.
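
Even a crude version seems doable with off-the-shelf pieces: fetch the bookmarked pages, index them locally, and weight them above web results. A sketch (reusing txtai from the comment above; any local index would do):

    # pip install requests beautifulsoup4 txtai
    import requests
    from bs4 import BeautifulSoup
    from txtai import Embeddings

    def index_bookmarks(urls):
        # Fetch each bookmarked page and index its visible text locally
        index = Embeddings(content=True)
        docs = []
        for url in urls:
            html = requests.get(url, timeout=10).text
            text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
            docs.append((url, text, None))
        index.index(docs)
        return index

    bookmarks = index_bookmarks(["https://example.org/some-bookmark"])
    print(bookmarks.search("topic I bookmarked something about", limit=5))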

  • learningcircuit 9 hours ago

    If you have a PDF collection, you could include it in the local search and give it very high relevance?

    • ein0p 8 hours ago

      I don't care what form it takes; all I care about is that curating my knowledge base is as easy as managing a set of bookmarks.

antonkar 16 hours ago

I think the guy who'll make the 3D game-like GUI for LLMs is the next Jobs/Gates/Musk and Nobel Prize winner (I think it'll solve alignment by having millions of eyes on the internals of LLMs). Computers became popular only after the OS with a GUI appeared; current chatbots are a bit like a command line in comparison. I just started an Ask HN to let people and me share their AI safety ideas, both crazy and not: https://news.ycombinator.com/item?id=43332593

  • tecleandor 13 hours ago

    You just posted the same comment three times in three different posts in 10 minutes. I'd say it would be nice to take it a bit slower...

    • antonkar 12 hours ago

      Yep, it's a bit different, and I won't do it again. The problem is important and I wanted to hear other people's ideas; you can google "share AI safety ideas". I posted the same question in a bunch of places and it created some discussions.