9 comments

  • wonger_ 1 minute ago
    Great breakdown and visuals. Most ASCII filters do not account for glyph shape, and it's a shame.

    It reminds me of how chafa uses an 8x8 bitmap for each glyph: https://github.com/hpjansson/chafa/blob/master/chafa/interna...

    There are a lot of nitty-gritty concerns I haven't dug into: how to make it fast, how to handle colorspaces, or, like the author mentions, how to exaggerate contrast for certain scenes. But I think 99% of the time, it will be hard to beat chafa. Such a good library.

  • sph 1 hour ago
    Every example I thought "yeah, this is cool, but I can see there's space for improvement" — and lo! did the author satisfy my curiosity and improve his technique further.

    Bravo, beautiful article! The rest of this blog is at this same level of depth, worth a sub: https://alexharri.com/blog

  • chrisra 19 minutes ago
    > To increase the contrast of our sampling vector, we might raise each component of the vector to the power of some exponent.

    How do you arrive at that? It's presented like it's a natural conclusion, but if I were trying to adjust contrast... I don't see the connection.
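    For what it's worth, one way to see the connection: assuming the vector components are normalized to [0, 1] (an assumption, since the article's exact representation isn't quoted here), raising them to an exponent > 1 keeps the endpoints 0 and 1 fixed while pushing mid-range values down, which widens the relative gap between dim and bright components:

```python
# Illustrative sketch of the quoted exponent trick on made-up sample values.
samples = [0.2, 0.5, 0.8]
exponent = 2.0

# 0 and 1 map to themselves; everything in between is pushed toward 0,
# so the ratio between bright and dim components grows.
contrasted = [s ** exponent for s in samples]
```

    With exponent 2, the bright-to-dim ratio goes from 0.8/0.2 = 4 to 0.64/0.04 = 16, which is one reasonable sense in which "contrast" increases.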

  • nickdothutton 53 minutes ago
    What a great post. There is an element of ascii rendering in a pet project of mine and I’m definitely going to try and integrate this work. From great constraints comes great creativity.
  • nathaah3 1 hour ago
    that was so brilliant! i loved it! thanks for putting it out :)
  • adam_patarino 54 minutes ago
    Tell me someone has turned this into a library we can use
  • chrisra 10 minutes ago
    Next up: proportional fonts and font weights?
  • Jyaif 1 hour ago
    It's important to note that the approach described focuses on giving fast results, not the best results.

    Simply trying every character and considering their entire bitmap, and keeping the character that reduces the distance to the target gives better results, at the cost of more CPU.
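    A minimal sketch of that brute-force matching (the 8x8 cell size and glyph bitmaps are illustrative assumptions, not the article's actual data):

```python
import numpy as np

def best_glyph(cell, glyph_bitmaps):
    """cell: (8, 8) grayscale block; glyph_bitmaps: dict of char -> (8, 8) array.

    Try every character and keep the one whose entire bitmap is closest
    to the target block (sum of squared differences)."""
    best_char, best_dist = None, float("inf")
    for char, bitmap in glyph_bitmaps.items():
        dist = np.sum((cell.astype(float) - bitmap.astype(float)) ** 2)
        if dist < best_dist:
            best_char, best_dist = char, dist
    return best_char
```

    This is O(cells x glyphs x pixels), which is exactly the CPU cost being traded away by the faster sampling approach.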

    This is a well-known problem because early computers with monitors could only display characters.

    At some point we were able to define custom character bitmaps, but not enough custom characters to cover the entire screen, so the problem became more complex. Which new characters do you create to reproduce an image optimally?

    And separately we could choose the foreground/background color of individual characters, which opened up more possibilities.

    • spuz 15 minutes ago
      Thinking more about the "best results". Could this not be done by transforming the ASCII glyphs into bitmaps, and then using some kind of matrix multiplication or dot product calculation to find the ASCII character with the highest similarity to the underlying pixel grid? This would presumably lend itself to SIMD or GPU acceleration. I'm not that familiar with this type of image processing, so I'm sure someone with more experience can clarify.
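      Something like this, I'd guess (flattened 8x8 glyphs are an assumption): expanding ||c - g||^2 = ||c||^2 - 2c.g + ||g||^2 shows that minimizing distance is the same as maximizing c.g - ||g||^2 / 2, so a single matrix multiply can score every cell against every glyph at once:

```python
import numpy as np

def match_all_cells(cells, glyph_matrix, chars):
    """cells: (n, 64) flattened 8x8 image blocks.
    glyph_matrix: (k, 64) flattened glyph bitmaps; chars: the k characters.

    One matmul computes all cell-glyph dot products; subtracting each
    glyph's squared norm / 2 makes argmax equivalent to nearest-glyph."""
    scores = cells @ glyph_matrix.T - 0.5 * np.sum(glyph_matrix ** 2, axis=1)
    return [chars[i] for i in np.argmax(scores, axis=1)]
```

      Since it's all dense linear algebra, BLAS/SIMD/GPU acceleration comes essentially for free.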
    • brap 27 minutes ago
      You said “best results”, but I imagine that the theoretical “best” may not necessarily be the most aesthetically pleasing in practice.

      For example, limiting output to a small set of characters gives it a more uniform look which may be nicer. Then also there’s the “retro” effect of using certain characters over others.

    • Sharlin 57 minutes ago
      And a (the?) solution is to use an algorithm like k-means clustering to find the tileset of size k that represents a given image most faithfully. Of course, that's only for a single frame at a time.
    • finghin 43 minutes ago
      In practice, isn’t a large HashMap best for lookup, based on compile-time or static constants describing the character space?
      • spuz 34 minutes ago
        In the appendix, he talks about reducing the lookup space by quantising the sampled points to just 8 possible values. That allowed him to make a lookup table about 2MB in size, which was apparently incredibly fast.
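        For a sense of scale, a hypothetical version of that table (the seven-sample-point count is my assumption, but 8^7 = 2,097,152 entries lines up with the ~2MB figure at one byte per entry):

```python
def quantize(samples):
    """Map each sample in [0.0, 1.0] to one of 8 levels and pack the
    levels into a single integer key, 3 bits per sample."""
    key = 0
    for s in samples:
        level = min(7, int(s * 8))  # 8 quantization levels: 0..7
        key = (key << 3) | level
    return key

# Precomputed once, offline (best_char_for is a stand-in for whatever
# matching logic produced the table):
#   table = bytes(ord(best_char_for(key)) for key in range(8 ** 7))
# At render time, choosing a character is then a single array index:
#   chr(table[quantize(samples)])
```

        With the whole decision baked into one indexed read, it's easy to see why it was incredibly fast.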
        • finghin 6 minutes ago
          I've been working on something similar (haven't gotten to this stage yet) and was planning on an approach much like the circle-sampling method, but staggering the circles is a really clever idea I'd never considered. I was planning on sampling character pixels' alignment along orthogonal and diagonal axes; you could probably combine these approaches. But yeah, such an approach seemed particularly powerful for the reason that you could encode it all in a table.