sitkack 3 days ago

embedding search via https://searchthearxiv.com/ takes either a free-text query or an abs or pdf link to an arxiv paper.

https://news.ycombinator.com/item?id=42519487

I just did a spot check; I think searchthearxiv's search results are superior.

  • 0101111101 3 days ago

    Looks cool! You can input either a search query or a paper URL on arxiv xplorer. You can even combine paper URLs to search for combinations of ideas by putting + or - before the URL, like `+ 2501.12948 + 1712.01815`
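
    Under the hood it's basically just adding/subtracting the papers' embedding vectors before the similarity search. A minimal sketch of the idea in numpy (not the actual arxiv xplorer code; the embeddings here are made up):

        import numpy as np

        # toy pre-computed paper embeddings keyed by arxiv id (made up here)
        rng = np.random.default_rng(0)
        papers = {pid: rng.standard_normal(256) for pid in ["2501.12948", "1712.01815", "1706.03762"]}
        papers = {k: v / np.linalg.norm(v) for k, v in papers.items()}
        ids, corpus = list(papers), np.stack(list(papers.values()))

        def combine(terms):
            # terms like [("+", "2501.12948"), ("-", "1712.01815")]
            q = sum((1.0 if sign == "+" else -1.0) * papers[pid] for sign, pid in terms)
            return q / np.linalg.norm(q)

        query = combine([("+", "2501.12948"), ("+", "1712.01815")])
        scores = corpus @ query                       # cosine similarity (unit vectors)
        print([ids[i] for i in np.argsort(-scores)])  # papers ranked by similarity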

    • sitkack 3 days ago

      That is neat, I like that.

      It would be cool if the "More Like This" had a + button that would append the arxiv id to the search query.

      • 0101111101 3 days ago

        That's a nice idea! Might take a look this weekend!

  • masterjack 3 days ago

    There’s also search and browsing on https://sugaku.net; it’s more focused on math but does also have all of arxiv on it.

nblgbg 3 days ago

Just curious: are there any techniques other than computing embeddings, taking cosine similarity, and sorting the results by that? RRF could be used, but again it's very simple as well.
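
For reference, RRF is just a rank-based merge of several result lists, something like this (k=60 is the usual constant):

    from collections import defaultdict

    def rrf(ranked_lists, k=60):
        """Reciprocal rank fusion: merge several ranked lists of doc ids."""
        scores = defaultdict(float)
        for ranking in ranked_lists:
            for rank, doc_id in enumerate(ranking, start=1):
                scores[doc_id] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # e.g. fuse a keyword (BM25) ranking with an embedding ranking
    print(rrf([["a", "b", "c"], ["c", "a", "d"]]))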

  • forrestp 3 days ago

    My understanding is that your levers are roughly better / more diverse embeddings, or computing more embeddings (embed chunks / groups / etc.) and aggregating more cosine similarities / scores. More flops = better search, with steep diminishing returns.

    ColBERT is a good googleable example of using more embeddings.

    Search often ends up being a funnel of techniques: cheap and high recall for phase 1, then ratchet up the flops and precision in subsequent passes over the previous result set.
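
    As a toy version of that funnel (made-up sizes; the max-over-chunks scoring is only a loose stand-in for ColBERT's full late-interaction scoring):

        import numpy as np

        def normalize(x):
            return x / np.linalg.norm(x, axis=-1, keepdims=True)

        rng = np.random.default_rng(0)
        doc_vecs = normalize(rng.standard_normal((10_000, 256)))       # one cheap vector per doc
        chunk_vecs = normalize(rng.standard_normal((10_000, 8, 256)))  # several chunk vectors per doc
        q = normalize(rng.standard_normal(256))

        # phase 1: cheap and high recall -- one dot product per doc, keep a big candidate set
        candidates = np.argsort(-(doc_vecs @ q))[:200]

        # phase 2: more flops, more precision -- score only the candidates with chunk embeddings
        rerank = (chunk_vecs[candidates] @ q).max(axis=1)
        print(candidates[np.argsort(-rerank)][:10])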

    • 0101111101 3 days ago

      Exactly! A neat property of matryoshka embeddings is that you can compute a low-dimension embedding similarity really fast and then refine afterwards.
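
      Roughly like this, assuming embeddings trained with matryoshka representation learning so a truncated prefix is still a usable embedding (sizes made up):

          import numpy as np

          def normalize(x):
              return x / np.linalg.norm(x, axis=-1, keepdims=True)

          rng = np.random.default_rng(0)
          emb = rng.standard_normal((100_000, 1024))   # full-size matryoshka embeddings
          q = rng.standard_normal(1024)

          # coarse pass: first 64 dims only, renormalized -- cheap to scan over everything
          coarse = normalize(emb[:, :64]) @ normalize(q[:64])
          candidates = np.argsort(-coarse)[:500]

          # refine: full-dimension cosine similarity on the shortlist only
          fine = normalize(emb[candidates]) @ normalize(q)
          print(candidates[np.argsort(-fine)][:10])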

elliotec 4 days ago

This is really cool, and very relevant to something I'm working on. Would you be willing to do a quick explanation of the build?

  • 0101111101 3 days ago

    Sure! I first computed OpenAI embeddings for all the paper titles, abstracts and authors. When a user submits a search query, I embed the query, find the closest matching papers and return those results. Nothing too fancy involved!

    I'm also maintaining a dataset of all the embeddings on kaggle if you want to use them yourself: https://www.kaggle.com/datasets/tomtum/openai-arxiv-embeddin...
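
    In sketch form the pipeline is roughly this (not my exact code; the model name and top-k are just examples):

        import numpy as np
        from openai import OpenAI

        client = OpenAI()
        MODEL = "text-embedding-3-small"  # example model name

        def embed(texts):
            resp = client.embeddings.create(model=MODEL, input=texts)
            return np.array([d.embedding for d in resp.data])

        # offline: one embedding per paper, built from title, abstract and authors
        papers = [{"id": "2501.12948", "text": "<title> <abstract> <authors>"}]
        matrix = embed([p["text"] for p in papers])
        matrix /= np.linalg.norm(matrix, axis=1, keepdims=True)

        # online: embed the query and rank papers by cosine similarity
        def search(query, k=10):
            q = embed([query])[0]
            q /= np.linalg.norm(q)
            return [papers[i]["id"] for i in np.argsort(-(matrix @ q))[:k]]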

    • heisenburgzero 3 days ago

      So did you just combine title + abstract + authors into a single chunk and embed that, or did you embed them individually?

      • synctext 3 days ago

        Impressive! Will you parse the papers in the future? Without citations this is not that usable for professors or scientists in general. The relevance ranking largely depends on showing these older, prominent papers. (from our lab experience building decentralised search using transformers)

      • 0101111101 3 days ago

        One chunk embedded together
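
        i.e. something like this per paper (the exact separators and field order here are just a guess):

            def paper_text(title, abstract, authors):
                # one concatenated string -> one embedding per paper
                return f"{title}\n\n{abstract}\n\nAuthors: {', '.join(authors)}"

            print(paper_text("Some title", "Some abstract...", ["A. Author", "B. Author"]))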

    • cluckindan 3 days ago

      That method can break when author names and subject matter collide.

      • 0101111101 3 days ago

        True, but similarly if your embeddings are any good they'll capture interesting associations between authors, topics and your search query. If you find any interesting author overlap results I'd be very interested!

bbor 3 days ago

Oh god, there's a medrxiv?? TIL...

Don't forget chemrXiv!