Even a small upscaling model trained on text should do better than a big generic one.
I can't say I've ever wanted to transcribe code from an image. That seems super niche.
Perhaps the specific idea is to harvest coding textbooks as training data for LLMs?
Pieces is (correction: used to be, prior to the AI slopification) an app for storing code snippets. So I think you can imagine the general idea of, e.g., "cool API usage example from a YouTube video, let me screenshot it!"
I'm guessing to automatically scrape videos for future training rounds.
> I can't say I've ever wanted to transcribe code from an image. That seems super niche.
This is a nightmare for endpoint protection. Imagine rogue employees snapping pics of your proprietary codebase and then using this to reassemble it.
Eh, imagine poor documentation where people take screenshots of steps and don't write them out.
I can also imagine plenty of YouTube tutorials that type the code live... seems fairly useful
Neat article, but I feel like I have no idea why they're doing this! Is transcribing code from images really such a big use case?
Maybe they want to compile the Apollo Guidance Computer source code...
https://www.softwareheritage.org/wp-content/uploads/2019/07/...
If it's not a joke, I think it was already digitized: https://github.com/chrislgarry/Apollo-11
The product appears to be similar to Microsoft's embattled Recall feature. In order to remember your digital life, it takes frequent screenshots.
From an accessibility standpoint, yes. It would let you pattern-match where you are in an IDE without using an accessibility API.
> To best support software engineers when they want to transcribe code from images, we fine-tuned our pre-processing pipeline to screenshots of code in IDEs, terminals, and online resources like YouTube videos and blog posts.
Even with these examples, that seems like a very narrow use case.
It worries me that making this easier will normalize wacky data pipelines (pulling display output off systems and "scraping" it to get data of dubious quality, versus just building a proper interface). The kind of crowd that likes "low code" tools like MSFT's "Power Automate" is going to love making Rube Goldberg nightmares out of tools like this.
It fills me with a deep sadness that we created deterministic machines and then, through laziness, exploit every opportunity to "contaminate" them with sloppy practices that make them produce output with the same fuzzy inaccuracy as human brains.
Old-man-yells-at-neural-networks take: We're entering a "The Machine Stops" era where nobody is going to know how to formulate basic algorithms.
"We need to add some numbers. Let's point a camera at the input, OCR it, then feed it to an LLM that 'knows math'. Then we don't have to figure out an algorithm to add numbers."
I wish compute "cost" more so people would be forced to actually make efficient use of hardware. Sadly, I think it'll take mass societal and infrastructure collapse for that to happen. Until it does, though, let the excess compute flow freely!
Asimov, "The Feeling of Power".
I guess it would be excellent for evading security monitors to take unauthorized copies of your employer's codebase.
Has anyone tried feeding the admittedly noisy OCR'd text, at the document level, to an LLM to make sense of it? Presumably some of the less capable models should be quite affordable and accurate at scale as well.
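Concretely, I mean something like this (a sketch assuming the openai Python client; the model name and prompt are placeholders):

```python
# Post-correct noisy OCR output with a cheap LLM.
# Sketch only: assumes an OpenAI-compatible endpoint; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def clean_ocr(noisy_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any small, affordable model
        temperature=0,        # we want correction, not creativity
        messages=[
            {"role": "system",
             "content": "Fix OCR errors in the user's text. Do not add, remove, "
                        "or reorder content; preserve line breaks."},
            {"role": "user", "content": noisy_text},
        ],
    )
    return response.choices[0].message.content
```

The obvious caveat, raised elsewhere in this thread, is that the model can "fix" the text into something plausible that was never on the page.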
OCR is the biggest XY problem.
Stop accepting PDFs and force things to use APIs ...
Making OCR more accurate for regular text (e.g. data extraction from documents) would be useful; not sure how useful code transcription is
Anything that mentions tesseract is about 10 years out of date at this point.
Quite simply, you're completely wrong. Modern tesseract versions include a modern LSTM-based engine. It can very affordably be deployed on CPU, yet its performance is competitive with much more expensive large GPU-based models. Especially if you handle a high volume of scans, chances are that tesseract will have the best bang for the buck.
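For what it's worth, the CPU deployment really is as simple as it sounds; a minimal sketch with pytesseract (assumes the tesseract binary is installed):

```python
# CPU-only OCR with Tesseract's LSTM engine via pytesseract.
# Requires the tesseract binary on PATH (e.g. apt-get install tesseract-ocr).
from PIL import Image
import pytesseract

image = Image.open("scan.png")
# --oem 1 selects the LSTM engine; --psm 6 assumes a single uniform block of text.
text = pytesseract.image_to_string(image, config="--oem 1 --psm 6")
print(text)
```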
My company probably spent close to 6 figures overall creating Tesseract 5 custom models for various languages. Surya beats them all and is open source (and is quite a bit faster).
Surya weights for the models are licensed CC BY-NC-SA 4.0. They have an exception for small companies. If your company is not small, you either need to pay them or use them illegally.
Their training code and data are closed source. The models are barely open weight, and only the inference code is open source.
I remember that you could not train it yourself on a font like you could in older versions; is that still the case?
5.5.0 was released in November last year. Still a very active project as far as I can tell, and it runs on CPU. Even compared to the best open-source GPU option it is still pretty good. VLMs work very differently and don't work as well for everything. Why is it out of date?
I don't know that that is true: https://researchify.io/blog/comparing-pytesseract-paddleocr-...
Using Surya gets you significantly better results and makes almost all the work detailed in the article largely unnecessary.
Surya weights for the models are licensed CC BY-NC-SA 4.0, so not free for commercial usage. Also, as far as I know, the training data is 100% unavailable. Given that they use well-trained but standard models, it isn't really open source and is barely, maybe, open weight. I kinda hate how their repo says GPL, because that is only true for the inference code. The training code is closed source.
Well, at least I can apt-get install tesseract.
That doesn't hold for any of the GPU-based solutions, last time I checked.
I just built a pipeline with tesseract last year. What's better that is open source and runnable locally?
VLM hallucination is a blocker for my use case.
If you are stuck with open source, then your options are limited.
Otherwise I'd say just use your operating system's OCR API. Both Windows and macOS have excellent APIs for this.
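On macOS that means the Vision framework; a rough sketch through the pyobjc bindings (treat this as illustrative; it assumes pyobjc-framework-Vision is installed):

```python
# macOS-only: OCR via Apple's Vision framework through pyobjc.
# Sketch assuming pyobjc-framework-Vision is installed.
import Vision
from Foundation import NSURL

def ocr_image(path: str) -> list[str]:
    url = NSURL.fileURLWithPath_(path)
    handler = Vision.VNImageRequestHandler.alloc().initWithURL_options_(url, None)
    request = Vision.VNRecognizeTextRequest.alloc().init()
    request.setRecognitionLevel_(Vision.VNRequestTextRecognitionLevelAccurate)
    success, error = handler.performRequests_error_([request], None)
    if not success:
        raise RuntimeError(error)
    # Each observation's top candidate is one recognized line of text.
    return [obs.topCandidates_(1)[0].string() for obs in request.results()]
```

On Windows the rough equivalent is the Windows.Media.Ocr API.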
How is a hallucination worse than a Tesseract error?
Because the VLM doesn't know it hallucinated. When you get a Tesseract error you can flag the OCR job for manual review.
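And that flagging is cheap, because Tesseract reports per-word confidences; a rough sketch with pytesseract (the threshold of 60 is arbitrary, tune it on your own data):

```python
# Route low-confidence Tesseract output to manual review.
# The threshold of 60 is an arbitrary illustration.
from PIL import Image
import pytesseract

data = pytesseract.image_to_data(
    Image.open("scan.png"), output_type=pytesseract.Output.DICT
)

suspect = [
    (word, conf)
    for word, conf in zip(data["text"], data["conf"])
    if word.strip() and float(conf) < 60  # conf is -1 for non-word boxes
]

if suspect:
    print(f"{len(suspect)} low-confidence words -> manual review")
```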
The latter is more likely to get debugged.
It could hallucinate obscene language, something which is less likely with classic OCR.
Hallucinations are hard to detect unless you are a subject-matter expert. I don't have direct experience with Tesseract error detection.
Tesseract OCR was created by Digital (DEC) in _1985_ (yes, 40, not four, years ago). Now go back and read the article and ROFL with me.
What is this argument? Much software we use today was created in the 80s.
Unix was created in _1971_ and here we are still running processes and shells like it’s the 70s. Why not just have an LLM dream up the output?
The original tesseract OCR had no neural nets. It bears little resemblance to the modern version.
It's still 40.
Why not use Ollama-OCR?
I’ve tested a bunch of vision models on particularly difficult documents (handwritten in a German script that’s no longer used), and I have yet to be impressed. They’re good at BSing to the point that you almost think they nailed it, until you realize that it’s mostly/all made-up text that doesn’t appear in the document.
> It's still 40.
Is it, though? If the important parts of the code are new, does it matter that other parts are older or derived from older code? (Of course, I think this whole line of thought is pointless; what matters is not age, but how well it works, and tesseract generally does seem to work.)
Because I benchmarked both on my dataset and found that Tesseract was better for my use-case?