quectophoton a day ago

Having the continuation bytes always start with the bits `10` also makes it possible to seek to any random byte and trivially know whether you're at the beginning of a character or at a continuation byte, like you mentioned, so you can easily find the beginning of the next or previous character.
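
For example, a minimal sketch of that synchronization in C (the helper names are mine, purely illustrative):

    #include <stddef.h>
    #include <stdint.h>

    /* Back up from an arbitrary byte offset to the lead byte of the code point
       containing it: continuation bytes all match 10xxxxxx (byte & 0xC0 == 0x80),
       so at most 3 of them are skipped. */
    static size_t utf8_sync_backward(const uint8_t *s, size_t pos) {
        while (pos > 0 && (s[pos] & 0xC0) == 0x80)
            pos--;
        return pos;
    }

    /* Or skip forward to the next lead byte instead. */
    static size_t utf8_sync_forward(const uint8_t *s, size_t len, size_t pos) {
        while (pos < len && (s[pos] & 0xC0) == 0x80)
            pos++;
        return pos;
    }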

If the characters were instead encoded like EBML's variable size integers[1] (but inverting 1 and 0 to keep ASCII compatibility for the single-byte case), and you did a random seek, it wouldn't be as easy (or maybe not even possible) to know whether you landed on the beginning of a character or in one of the `xxxx xxxx` bytes.

[1]: https://www.rfc-editor.org/rfc/rfc8794#section-4.4

  • Animats a day ago

    Right. That's one of the great features of UTF-8. You can move forwards and backwards through a UTF-8 string without having to start from the beginning.

    Python has had troubles in this area. Because Python strings are indexable by character, CPython used wide characters. At one point you could pick 2-byte or 4-byte characters when building CPython. Then that switch was made automatic at run time. But it's still wide characters, not UTF-8. One emoji and your string size quadruples.

    I would have been tempted to use UTF-8 internally. Indices into a string would be an opaque index type which behaved like an integer to the extent that you could add or subtract small integers, and that would move you through the string. If you actually converted the opaque type to a real integer, or tried to subscript the string directly, an index to the string would be generated. That's an unusual case. All the standard operations, including regular expressions, can work on a UTF-8 representation with opaque index objects.

    • nostrademons a day ago

      PyCompactUnicodeObject was introduced with Python 3.3, and uses UTF-8 internally. It's used whenever both size and max code point are known, which is most cases where it comes from a literal or bytes.decode() call. Cut memory usage in typical Django applications by 2/3 when it was implemented.

      https://peps.python.org/pep-0393/

      I would probably use UTF-8 and just give up on O(1) string indexing if I were implementing a new string type. It's very rare to require arbitrary large-number indexing into strings. Most use-cases involve chopping off a small prefix (eg. "hex_digits[2:]") or suffix (eg. "filename[-3:]"), and you can easily just linear search these with minimal CPU penalty. Or they're part of library methods where you want to have your own custom traversals, eg. .find(substr) can just do Boyer-Moore over bytes, .split(delim) probably wants to do a first pass that identifies delimiter positions and then use that to allocate all the results at once.
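
      A rough sketch of that kind of linear scan in C (the helper name is mine, just for illustration):

          #include <stddef.h>
          #include <stdint.h>

          /* Byte offset after skipping n code points from the start of a UTF-8
             buffer, so a prefix chop like hex_digits[2:] is essentially
             buf + utf8_skip(buf, len, 2). Only lead bytes (anything that doesn't
             match 10xxxxxx) are counted. */
          static size_t utf8_skip(const uint8_t *s, size_t len, size_t n) {
              size_t i = 0;
              while (i < len && n > 0) {
                  i++;                                     /* consume the lead byte */
                  while (i < len && (s[i] & 0xC0) == 0x80)
                      i++;                                 /* and its continuation bytes */
                  n--;
              }
              return i;
          }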

      • barrkel 20 hours ago

        You usually want O(1) indexing when you're implementing views over a large string. For example, when a string contains a possibly multi-megabyte text file, you want to avoid copying out of it and work with slices where possible. Anything from editors to parsing.

        I agree though that usually you only need iteration, but string APIs need to change to return some kind of token that encapsulates both logical and physical index. And you probably want to be able to compute with those - subtract to get length and so on.

        • ori_b 19 hours ago

          You don't particularly want indexing for that, but cursors. A byte offset (wrapped in an opaque type) is sufficient for that need.

          • bjoli 10 hours ago

            You could add a LUT for decently fast indexing as well. I believe Java does that.

        • naniwaduni 18 hours ago

          You really just very rarely want codepoint indexing. A byte index is totally fine for view slices.

        • nostrademons 20 hours ago

          Sure, but for something like that whatever constructs the view can use an opaque index type like Animats suggested, which under the hood is probably a byte index. The slice itself is kinda the opaque index, and then it can just have privileged access to some kind of unsafe_byteIndex accessor.

          There are a variety of reasons why unsafe byte indexing is needed anyway (zero-copy?), it just shouldn’t be the default tool that application programmers reach for.

        • MrBuddyCasino 12 hours ago

          If you have multi-MB strings in an editor, that’s the problem right there. People use ropes instead of strings for a reason.

      • masklinn 14 hours ago

        > PyCompactUnicodeObject was introduced with Python 3.3, and uses UTF-8 internally.

        UTF-8 is used for C-level interactions; if it were just that being used, there would be no need to know the highest code point.

        For Python semantics it uses one of ASCII, iso-8859-1, ucs2, or ucs4.

    • btown a day ago

      This is Python; finding new ways to subscript into things directly is a graduate student’s favorite pastime!

      In all seriousness I think that encoding-independent constant-time substring extraction has been meaningful in letting researchers outside the U.S. prototype, especially in NLP, without worrying about their abstractions around “a 5 character subslice” being more complicated than that. Memory is a tradeoff, but a reasonably predictable one.

      • meindnoch 7 hours ago

        >without worrying about their abstractions around “a 5 character subslice” being more complicated than that

        Combining characters still exist.

    • kccqzy 21 hours ago

      Indexing into a Unicode string is a highly unusual operation that is rarely needed. A string is Unicode because it was provided by the user or is a localized user-facing string. You don't generally need indices.

      Programmer strings (aka byte strings) do need indexing operations. But such strings usually do not need Unicode.

      • mjevans 19 hours ago

        They can happen to _be_ Unicode. Composition operations (for fully terminated Unicode strings) should work, but require eventual normalization.

        That's the other benefit of being able to resume UTF-8 strings midway: even combining broken strings still results in all the good characters being present.

        Substring operations are more dicey; those should be operating with known strings. In pathological cases they might operate against portions of Unicode bits... but that's as silly as using raw pointers and directly mangling the bytes without any protection or design plans.

    • johncolanduoni 21 hours ago

      Your solution is basically what Swift does. Plus they do the same with extended grapheme clusters (what a human would consider distinct characters mostly), and that’s the default character type instead of Unicode code point. Easily the best Unicode string support of any programming language.

    • cryptonector 15 hours ago

      Variable width encodings like UTF-8 and UTF-16 cannot be indexed in O(1), only in O(N). But this is not really a problem! Instead of indexing strings we need to slice them, and generally we read them forwards, so if slices (and slices of slices) are cheap, then you can parse textual data without a problem. Basically just keep the indices small and there's no problem.

      • bjoli 10 hours ago

            Or just use immutable strings and look-up tables, say every 32 characters, combined with cursors. This is going to make indexing fast enough for randomly jumping into a string and then using cursors.
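
            A rough sketch of that idea in C (the 32-code-point stride and all names are just assumptions for illustration):

                #include <stddef.h>
                #include <stdint.h>
                #include <stdlib.h>

                #define STRIDE 32  /* record a byte offset every 32 code points */

                /* For an immutable UTF-8 buffer, lut[k] holds the byte offset of
                   code point number k * STRIDE (continuation bytes are skipped). */
                static size_t *build_lut(const uint8_t *s, size_t len, size_t *entries) {
                    size_t *lut = malloc((len / STRIDE + 2) * sizeof *lut);
                    size_t n = 0, cp = 0;
                    if (!lut) return NULL;
                    for (size_t i = 0; i < len; i++) {
                        if ((s[i] & 0xC0) == 0x80) continue;
                        if (cp % STRIDE == 0) lut[n++] = i;
                        cp++;
                    }
                    *entries = n;
                    return lut;
                }

                /* Indexing code point `idx` then jumps to the nearest recorded offset
                   and walks at most STRIDE - 1 code points forward. */
                static size_t cp_offset(const uint8_t *s, size_t len,
                                        const size_t *lut, size_t idx) {
                    size_t i = lut[idx / STRIDE];
                    for (size_t cp = idx - idx % STRIDE; cp < idx; cp++) {
                        i++;
                        while (i < len && (s[i] & 0xC0) == 0x80) i++;
                    }
                    return i;
                }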

    • zahlman 20 hours ago

      > If you actually converted the opaque type to a real integer, or tried to subscript the string directly, an index to the string would be generated.

      What conversion rule do you want to use, though? You either reject some values outright, bump those up or down, or else start with a character index that requires an O(N) translation to a byte index.

    • otabdeveloper4 12 hours ago

      "Unicode" aka "wide characters" is the dumbest engineering debacle of the century.

      > ascii and codepage encodings are legacy, let's standardize on another forwards-incompatible standard that will be obsolete in five years

      > oh, and we also need to upgrade all our infrastructure for this obsolete-by-design standard because we're now keeping it forever

      • Dylan16807 an hour ago

        What about Unicode isn't forward compatible?

        UCS-2 was an encoding mistake, but even it was pretty forward compatible.

  • sparkie 20 hours ago

    VLQ/LEB128 are a bit better than EBML's variable size integers. You test the MSB in the byte - `0` means it's the end of a sequence and the next byte is a new sequence. If the MSB is `1`, to find the start of the sequence you walk back until you find the first zero MSB at the end of the previous sequence (or the start of the stream). There are efficient SIMD-optimized implementations of this.

    The difference between VLQ and LEB128 is endianness, basically whether the zero MSB is the start or end of a sequence.

        0xxxxxxx                   - ASCII
        1xxxxxxx 0xxxxxxx          - U+0080 .. U+3FFF
        1xxxxxxx 1xxxxxxx 0xxxxxxx - U+4000 .. U+10FFFD
    
                          0xxxxxxx - ASCII
                 0xxxxxxx 1xxxxxxx - U+0080 .. U+3FFF
        0xxxxxxx 1xxxxxxx 1xxxxxxx - U+4000 .. U+10FFFD
    
    It's not self-synchronizing like UTF-8, but it's more compact - any unicode codepoint can fit into 3 bytes (which can encode up to 0x1FFFFF), and ASCII characters remain 1 byte. Can grow to arbitrary sizes. It has a fixed overhead of 1/8, whereas UTF-8 only has overhead of 1/8 for ASCII and 1/3 thereafter. Could be useful compressing the size of code that uses non-ASCII, since most of the mathematical symbols/arrows are < U+3FFF. Also languages like Japanese, since Katakana and Hiragana are also < U+3FFF, and could be encoded in 2 bytes rather than 3.
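
    A minimal sketch of that partial synchronization in C, assuming the layout in the first diagram above (an MSB of 0 marks the final byte of a sequence):

        #include <stddef.h>
        #include <stdint.h>

        /* From an arbitrary offset, the current sequence starts right after the
           previous byte whose MSB is 0 (the end of the previous sequence), or at
           the start of the stream. */
        static size_t vlq_sync_backward(const uint8_t *s, size_t pos) {
            while (pos > 0 && (s[pos - 1] & 0x80) != 0)
                pos--;
            return pos;
        }
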
    • kstenerud 18 hours ago

      Unfortunately, VLQ/LEB128 is slow to process due to all the rolling decision points (one decision point per byte, with no ability to branch predict reliably). It's why I used a right-to-left unary code in my stuff: https://github.com/kstenerud/bonjson/blob/main/bonjson.md#le...

        | Header     | Total Bytes | Payload Bits |
        | ---------- | ----------- | ------------ |
        | `.......1` |      1      |       7      |
        | `......10` |      2      |      14      |
        | `.....100` |      3      |      21      |
        | `....1000` |      4      |      28      |
        | `...10000` |      5      |      35      |
        | `..100000` |      6      |      42      |
        | `.1000000` |      7      |      49      |
        | `10000000` |      8      |      56      |
        | `00000000` |      9      |      64      |
      
      The full value is stored little endian, so you simply read the first byte (low byte) in the stream to get the full length, and it has the exact same compactness as VLQ/LEB128 (7 bits per byte).

      Even better: modern chips have instructions that decode this field in one shot (callable via builtin):

      https://github.com/kstenerud/ksbonjson/blob/main/library/src...

          static inline size_t decodeLengthFieldTotalByteCount(uint8_t header) {
              return (size_t)__builtin_ctz(header) + 1;
          }
      
      After running this builtin, you simply re-read the memory location for the specified number of bytes, then cast to a little-endian integer, then shift right by the same number of bits to get the final payload - with a special case for `00000000`, although numbers that big are rare. In fact, if you limit yourself to max 56 bit numbers, the algorithm becomes entirely branchless (even if your chip doesn't have the builtin).
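
      For concreteness, a hedged sketch of that decode in C (not the actual library code, and the name is mine; it assumes a little-endian host and sticks to the nonzero-header cases, i.e. payloads up to 56 bits):

          #include <stddef.h>
          #include <stdint.h>
          #include <string.h>

          static uint64_t decodeLengthPayload(const uint8_t *buf, size_t *totalBytes) {
              size_t total = (size_t)__builtin_ctz(buf[0]) + 1;  /* 1..8 bytes */
              uint64_t le = 0;
              memcpy(&le, buf, total);   /* little-endian host assumed for brevity */
              *totalBytes = total;
              return le >> total;        /* drop the unary length header bits */
          }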

      https://github.com/kstenerud/ksbonjson/blob/main/library/src...

      It's one of the things I did to make BONJSON 35x faster to decode/encode compared to JSON.

      https://github.com/kstenerud/bonjson

      If you wanted to maintain ASCII compatibility, you could use a 0-based unary code going left-to-right, but you lose a number of the speed benefits of a little endian friendly encoding (as well as the self-synchronization of UTF-8 - which admittedly isn't so important in the modern world of everything being out-of-band enveloped and error-corrected). But it would still be a LOT faster than VLQ/LEB128.

      • sparkie 17 hours ago

        We can do better than one branch per byte - we can have it per 8-bytes at least.

        We'd use `vpmovb2m`[1] on a ZMM register (64-bytes at a time), which fills a 64-bit mask register with the MSB of each byte in the vector.

        Then process the mask register 1 byte at a time, using it as an index into a 256-entry jump table. Each entry would be specialized to process the next 8 bytes without branching, and finish with a conditional branch to the next entry in the jump table or to the next 64 bytes. Any trailing ones in each byte would simply be added to a carry, which would be consumed up to the most significant zero in the next eight bytes.

        [1]:https://www.intel.com/content/www/us/en/docs/intrinsics-guid...

        • kstenerud 17 hours ago

          Sure, but with the above algorithm you could do it in zero branches, and in parallel if you like.

          • sparkie 16 hours ago

            Decoding into integers may be faster, but it's kind of missing the point of why I suggested VLQs as opposed to EBML's variable length integers - the latter are not a good fit for string handling. In particular, if we wanted to search for a character or substring we'd have to start from the beginning of the stream and traverse linearly, because there's no synchronization - the payload bytes are indistinguishable from header bytes, making a parallel search impractical.

            While you might be able to have some heuristic to determine whether a character is a valid match, it may give false positives and it's unlikely to be as efficient as "test if the previous byte's MSB is zero". We can implement parallel search with VLQs because we can trivially synchronize the stream to next nearest character in either direction - it's partially-synchronizing.

            Obviously not as good as UTF-8 or UTF-16 which are self-synchronizing, but it can be implemented efficiently and cut encoding size.

  • deepsun a day ago

    That's assuming the text is not corrupted or maliciously modified. There were (are) _numerous_ vulnerabilities due to parsing/escaping of invalid UTF-8 sequences.

    Quick googling (not all of them are on-topic tho):

    https://www.rapid7.com/blog/post/2025/02/13/cve-2025-1094-po...

    https://www.cve.org/CVERecord/SearchResults?query=utf-8

    • s1mplicissimus a day ago

      I was just wondering a similar thing: If 10 implies start of character, doesn't that require 10 to never occur inside the other bits of a character?

      • gavinsyancey a day ago

        Generally you can assume byte-aligned access. So every byte of UTF-8 either starts with 0 or 11 to indicate an initial byte, or 10 to indicate a continuation byte.

      • pklausler 21 hours ago

        10 never implies the start of a character; those begin with 0 or 11.

      • dbaupp a day ago

        UTF-8 encodes each character into a whole number of bytes (8, 16, 24, or 32 bits), and the 10 continuation marker is only at the start of the extra continuation bytes; when that pattern occurs elsewhere within a byte, it is just data.

        You are correct that it never occurs at the start of a byte that isn't a continuation byte: the first byte in each encoded code point starts with either 0 (ASCII code points) or 11 (non-ASCII).

  • PaulHoule a day ago

    It's not uncommon when you want variable length encodings to write the number of extension bytes used in unary encoding

    https://en.wikipedia.org/wiki/Unary_numeral_system

    and then use whatever bits are left over after encoding the length (which could be in 8-bit blocks, so you write 1111/1111 10xx/xxxx to code 8 extension bytes) to encode the number. This is covered in this CS classic

    https://archive.org/details/managinggigabyte0000witt

    together with other methods that let you compress a text plus a full-text index for the text into less room than the text alone, and not even have to use a stopword list. As you say, UTF-8 does something similar in spirit but ASCII compatible and capable of fast synchronization if data is corrupted or truncated.

  • jridgewell 20 hours ago

    This isn't quite right. In invalid UTF8, a continuation byte can also emit a replacement char if it's the start of the byte sequence. Eg, `0b01100001 0b10000000 0b01100001` outputs 3 chars: a�a. Whether you're at the beginning of an output char depends on the last 1-3 bytes.

    • rockwotj 20 hours ago

      > outputs 3 chars

      You mean codepoints or maybe grapheme clusters?

      Anyways yeah it’s a little more complicated but the principle of being able to truncate a string without splitting a codepoint in O(1) is still useful

      • jridgewell 20 hours ago

        Yah, I was using char interchangeably with code point. I also used byte instead of code unit.

        > truncate a string without splitting a codepoint in O(1) is still useful

        Agreed!

  • cryptonector 15 hours ago

    This is referred to as UTF-8 being "self-synchronizing". You can jump to the middle and find a codepoint boundary. You can read it backwards. You can read it forwards.

  • procaryote a day ago

    also, the redundancy means that you get a pretty good heuristic for "is this utf-8". Random data or other encodings are pretty unlikely to also be valid utf-8, at least for non-tiny strings

  • spankalee a day ago

    Wouldn't you only need to read backwards at most 3 bytes to see if you were currently at a continuation byte? With a max multi-byte size of 4 bytes, if you don't see a multi-byte start character by then you would know it's a single-byte char.

    I wonder if a reason is similar though: error recovery when working with libraries that aren't UTF-8 aware. If you naively slice an array of UTF-8 bytes, a UTF-8 aware library can ignore malformed leading and trailing bytes and get some reasonable string out of it.

    • Sharlin 18 hours ago

      It’s not always possible to read backwards.

      • Dylan16807 15 hours ago

        Okay so you seek by 3 less bytes.

        Or you accept that if you're randomly losing chunks, you might lose an extra 3 bytes.

        The real problem is that seeking a few bytes won't work with EBML. If continuation bytes store 8 payload bits, you can get into a situation where every single byte could be interpreted as a multi-byte start character and there are 2 or 3 possible messages that never converge.

        • Sharlin 6 hours ago

          The point is that you don’t have a "seek" operation available. You are given a bytestream and aren’t told if you’re at the start, in a valid position between code points, or in the middle of a code point. UTF-8’s self-synchronizing property means that by reading a single byte you immediately know if you’re in the middle of a code point, and that by reading and discarding at most two additional bytes you’re synchronized and can start/return decoding. That wouldn’t be possible if continuation bytes used all the bits for payload.

          • Dylan16807 2 hours ago

            Yes, the point is being able to synchronize.

            But it doesn't matter if it takes 1 byte or 3 bytes to synchronize. And being unable to read backwards is not a problem.

            (EBML doesn't synchronize in three bytes but other encodings do.)

  • jancsika a day ago

    > Having the continuation bytes always start with the bits `10` also make it possible to seek to any random byte, and trivially know if you're at the beginning of a character or at a continuation byte like you mentioned, so you can easily find the beginning of the next or previous character.

    Given four byte maximum, it's a similarly trivial algo for the other case you mention.

    The main difference I see is that UTF8 increases the chance of catching and flagging an error in the stream. E.g., any non-ASCII byte that is missing from the stream is highly likely to cause an invalid sequence. Whereas with the other case you mention the continuation bytes would cause silent errors (since an ASCII character would be indistinguishable from continuation bytes).

    Encoding gurus-- am I right?

  • 1oooqooq a day ago

    so you replace one costly sweeping with a costly sweeping. i wouldn't call that an advantage in any way over jumping n bytes.

    what you describe is the bare minimum so you even know what you are searching for while you scan pretty much everything every time.

    • hk__2 a day ago

      What do you mean? What would you suggest instead? Fixed-length encoding? It would take a looot of space given all the character variations you can have.

      • gertop a day ago

        UTF-16 is both simpler to parse and more compact than utf-8 when writing non-english characters.

        UTF-8 didn't win on technical merits, it won because it was mostly backwards compatible with all American software that previously used ASCII only.

        When you leave the anglosphere you'll find that some languages still default to other encodings due to how large utf-8 ends up for them (Chinese and Japanese, to name two).

        • jcranmer 21 hours ago

          > UTF-16 is both simpler to parse and more compact than utf-8 when writing non-english characters.

          UTF-8 and UTF-16 take the same number of bytes to encode a non-BMP character or a character in the range U+0080-U+07FF (which includes most of the Latin supplements, Greek, Cyrillic, Arabic, Hebrew, Aramaic, Syriac, and Thaana). For ASCII characters--which includes most whitespace and punctuation--UTF-8 takes half as much space as UTF-16, while for characters in the range U+0800-U+FFFF, UTF-8 takes 50% more space than UTF-16. Thus, for most European languages, and even Arabic (which ain't European), UTF-8 is going to be more compact than UTF-16.

          The Asian languages (CJK-based languages, Indic languages, and South-East Asian, largely) are the ones that are more compact in UTF-16 than UTF-8, but if you embed those languages in a context likely to have significant ASCII content--such as an HTML file--well, it turns out that UTF-8 still wins out!

          > When you leave the anglosphere you'll find that some languages still default to other encodings due to how large utf-8 ends up for them (Chinese and Japanese, to name two).

          You'll notice that the encodings that are used are not UTF-16 either. Also, my understanding is that China generally defaults to UTF-8 nowadays despite a government mandate to use GB18030 instead, so it's largely Japan that is the last redoubt of the anti-Unicode club.

          • GoblinSlayer 3 hours ago

            And when you download many megabytes of jabbascript to render 4kb of text, how does it matter what encoding you use?

          • amake 14 hours ago

            Even Japan is mostly Unicode these days.

        • simonask 11 hours ago

          All of Europe outside of the UK and English-speaking Ireland needs characters outside of ASCII, but most letters are ASCII. For example, the string "blåbærgrød" in Danish (blueberry porridge) has about the densest occurrence of non-ASCII characters, but that's still only 30%. It takes 13 bytes in UTF-8, but 20 bytes in UTF-16.

          Spanish has generally at most one accented vowel (á, ó, ü, é, ...) per word, and generally at most one ñ per word. German rarely has more than two umlauts per word, and almost never more than one ß.

          UTF-16 is a wild pessimization for European languages, and UTF-8 is only slightly wasteful in Asian languages.

        • ISV_Damocles a day ago

          UTF-16 is also just as complicated as UTF-8, requiring multi-unit (surrogate pair) characters to cover the entirety of Unicode, so it doesn't avoid the issue you're complaining about for the newest languages added. It also has the added complexity of requiring a BOM to be sure you have the pairs of bytes in the right order, so you are more vulnerable to truncated data being unrecoverable versus UTF-8.

          UTF-32 would be a fair comparison, but it is 4 bytes per character and I don't know what, if anything, uses it.

          • Mikhail_Edoshin 16 hours ago

            No, UTF-16 is much simpler in that aspect. And its design is no less brilliant. (I've written a state machine encoder and decoder for both these encodings.) If an application works a lot with text I'd say UTF-16 looks more attractive for the main internal representation.

            • rmunn 12 hours ago

              UTF-16 is simpler most of the time, and that's precisely the problem. Anyone working with UTF-8 knows they will have to deal with multibyte codepoints. People working with UTF-16 often forget about surrogate characters, because they're a lot rarer in most major languages, and then end up with bugs when their users put emoji into a text field.

          • adgjlsfhk1 a day ago

            python does (although it will use 8 or 16 bits per character if all characters in the string fit)

        • airza 8 hours ago

          It's all fun and games until you hit an astral plane character in utf-16 and one of the library designers didn't realize not all characters are 2 bytes.

          • rmunn 3 hours ago

            Which is why I've seen lots of people recommend testing your software with emojis, particularly recently-added emojis (many of the earlier emojis were in the basic multilingual plane, but a lot of newer emojis are outside the BMP, i.e. the "astral" planes). It's particularly fun to use the (U+1F4A9) emoji for such testing, because of what it implies about the libraries that can't handle it correctly.

            EDIT: Heh. The U+1F4A9 emoji that I included in my comment was stripped out. For those who don't recognize that codepoint by hand (can't "see" the Matrix just from its code yet?), that emoji's official name is U+1F4A9 PILE OF POO.

            • GoblinSlayer 3 hours ago

              For more fun you can use flag characters.

        • kbolino a day ago

          Thanks to UTF-16, which came out after UTF-8, there are 2048 wasted 3-byte sequences in UTF-8.

          And unlike the short-sighted authors of the first version of Unicode, who thought the whole world's writing systems could fit in just 65,536 distinct values, the authors of UTF-8 made it possible to encode up to 2 billion distinct values in the original design.

          • xigoi 9 hours ago

            Thanks to UTF-8, there are 13 wasted 1-byte sequences in UTF-8 :P

            • kbolino 7 hours ago

              Assuming your count is accurate, then 9 (edit: corrected from 11) of those 13 are also UTF-16's fault. The only bytes that were impossible in UTF-8's original design were 0b11111110 and 0b11111111. Remember that UTF-8 could handle up to 6-byte sequences originally.

              Now all of this hating on UTF-16 should not be misconstrued as some sort of encoding religious war. UTF-16 has a valid purpose. The real problem was Unicode's first version getting released at a critical time and thus its 16-bit delusion ending up baked into a bunch of important software. UTF-16 is a pragmatic compromise to adapt that software so it can continue to work with a larger code space than it originally could handle. Short of rewriting history, it will stay with us forever. However, that doesn't mean it needs to be transmitted over the wire or saved on disk any more often than necessary.

              Use UTF-8 for most purposes especially new formats, use UTF-16 only when existing software requires it, and use UTF-32 (or some other sequence of full code points) only internally/ephemerally to convert between the other two and perform high-level string functions like grapheme cluster segmentation.

              • xigoi 6 hours ago

                Pretty sure 0b11000000 and 0b11000001 are also UTF-8’s fault. Good point with the others, I guess. And I agree about UTF-8 being the best, just found it funny.

                • kbolino 6 hours ago

                  Yep, you're right. Those two bytes are forbidden to prevent overlong encodings. A number of multibyte sequences are forbidden for the same reason too.

                  A true flaw of UTF-8 in the long run. They should have biased the values of multibyte sequences to remove redundant encodings.

        • cyphar a day ago

          UTF-16 is absolutely not easier to work with. The vast majority of bugs I remember having to fix that were directly related to encoding were related to surrogate pairs. I suspect most programs do not handle them correctly because they come up so rarely but the bugs you see are always awful. UTF-8 doesn't have this problem and I think that's enough reason to avoid UTF-16 (though "good enough" compatibility with programs that only understand 8-bit-clean ASCII is an even better practical reason). Byte ordering is also a pernicious problem (with failure modes like "all of my documents are garbled") that UTF-8 also completely avoids.

          It is 33% more compact for most (but not all) CJK characters, but that's not the case for all non-English characters. However, one important thing to remember is that most computer-based documents contain large amounts of ASCII text purely because the formats themselves use English text and ASCII punctuation. I suspect that most UTF-8 files with CJK contents are much smaller than UTF-16 files, but I'd be interested in an actual analysis from different file formats.

          The size argument (along with a lot of understandable contention around UniHan) is one of the reasons why UTF-8 adoption was slower in Japan and Shift-JIS is not completely dead (though mainly for esoteric historical reasons like the 漢検 test rather than active or intentional usage) but this is quite old history at this point. UTF-8 now makes up 99% of web pages.

          • cyphar 20 hours ago

            I went through a Japanese ePUB novel I happened to have on hand (the Japanese translation of 1984) and 65% of the bytes are ASCII bytes. So in this case UTF-16 would end up resulting in something like 53% more bytes (going by napkin math).

            You could argue that because it will be compressed (and UTF-16 wastes a whole NUL byte for all ASCII) that the total file-size for the compressed version would be better (precisely because there are so many wasted bytes) but there are plenty of examples where files aren't compressed and most systems don't have compressed memory so you will pay the cost somewhere.

            But in the interest of transparency, a very crude test of the same ePUB yields a 10% smaller file with UTF-16. I think a 10% size penalty (in a very favourable scenario for UTF-16) in exchange for all of the benefits of UTF-8 is more than an acceptable tradeoff, and the incredibly wide proliferation of UTF-8 implies most people seem to agree.

        • adgjlsfhk1 a day ago

          With BOM issues, UTF-16 is way more complicated. For Chinese and Japanese, UTF8 is a maximum of 50% bigger, but can actually end up smaller if used within standard file formats like JSON/HTML since all the formatting characters and spaces are single bytes.

        • syncsynchalt a day ago

          UTF-16 has endian concerns and surrogates.

          Both UTF-8 and UTF-16 have negatives but I don't think UTF-16 comes out ahead.

          • Mikhail_Edoshin 15 hours ago

            Here is what a UTF-8 decoder needs to handle:

            1. Invalid bytes. Some bytes cannot appear in a UTF-8 string at all. There are two ranges of these.

            2. Conditionally invalid continuation bytes. In some states you read a continuation byte and extract the data, but in some other cases the valid range of the first continuation byte is further restricted.

            3. Surrogates. They cannot appear in a valid UTF-8 string, so if they do, this is an error and you need to mark it as such. Or maybe process them as in CESU-8, but this means making sure they are correctly paired. Or maybe process them as in WTF-8: read them and let them go.

            4. Form issues: an incomplete sequence or a continuation byte without a starting byte.

            It is much more complicated than UTF-16. UTF-16 only has surrogates that are pretty straightforward.
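
            For concreteness, here is a sketch in C of a strict check covering cases 1-4 above (names are mine; the ranges follow the well-formed byte sequence table in the Unicode standard):

                #include <stdbool.h>
                #include <stddef.h>
                #include <stdint.h>

                static bool utf8_validate(const uint8_t *s, size_t len) {
                    size_t i = 0;
                    while (i < len) {
                        uint8_t b = s[i];
                        size_t n;                      /* continuation bytes expected  */
                        uint8_t lo = 0x80, hi = 0xBF;  /* allowed first continuation   */

                        if (b <= 0x7F) { i++; continue; }
                        else if (b >= 0xC2 && b <= 0xDF) n = 1;
                        else if (b == 0xE0) { n = 2; lo = 0xA0; }  /* reject overlongs  */
                        else if (b >= 0xE1 && b <= 0xEC) n = 2;
                        else if (b == 0xED) { n = 2; hi = 0x9F; }  /* reject surrogates */
                        else if (b >= 0xEE && b <= 0xEF) n = 2;
                        else if (b == 0xF0) { n = 3; lo = 0x90; }  /* reject overlongs  */
                        else if (b >= 0xF1 && b <= 0xF3) n = 3;
                        else if (b == 0xF4) { n = 3; hi = 0x8F; }  /* cap at U+10FFFF   */
                        else return false;  /* 0x80..0xC1 and 0xF5..0xFF never appear  */

                        if (len - i < n + 1) return false;               /* truncated  */
                        if (s[i + 1] < lo || s[i + 1] > hi) return false;
                        for (size_t k = 2; k <= n; k++)
                            if ((s[i + k] & 0xC0) != 0x80) return false;
                        i += n + 1;
                    }
                    return true;
                }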

            • syncsynchalt 36 minutes ago

              I've written some Unicode transcoders; UTF-8 decoding devolves to a quartet of switch statements and each of the issues you've mentioned ends up being a case statement where the solution is to replace the offending sequence with U+FFFD.

              UTF-16 is simple as well but you still need code to absorb BOMs, perform endian detection heuristically if there's no BOM, and check surrogate ordering (and emit a U+FFFD when an illegal pair is found).

              I don't think there's an argument for either being complex, the UTFs are meant to be as simple and algorithmic as possible. -8 has to deal with invalid sequences, -16 has to deal with byte ordering, other than that it's bit shifting akin to base64. Normalization is much worse by comparison.

              My preference for UTF-8 isn't one of code complexity, I just like that all my 70's-era text processing tools continue working without too many surprises. The features like self-synchronization are nice too compared to what we _could_ have gotten as UTF-8.

        • kccqzy 21 hours ago

          Two decades ago the typical simplified Chinese website did in fact use GB2312 and not UTF-8; traditional Chinese websites used Big5; Japanese sites used Shift JIS. These days that's not true at all. Your comment is twenty years out of date.

        • Iwan-Zotow 18 hours ago

          There are no sane Chinese or Japanese people who use old encodings. None.

  • thesz a day ago

    > so you can easily find the beginning of the next or previous character.

    It is not true [1]. While it is not a UTF-8 problem per se, it is a problem of how UTF-8 is being used.

    [1] https://paulbutler.org/2025/smuggling-arbitrary-data-through...

twoodfin a day ago

UTF-8 is indeed a genius design. But of course it’s crucially dependent on the decision for ASCII to use only 7 bits, which even in 1963 was kind of an odd choice.

Was this just historical luck? Is there a world where the designers of ASCII grabbed one more bit of code space for some nice-to-haves, or did they have code pages or other extensibility in mind from the start? I bet someone around here knows.

  • mort96 a day ago

    I don't know if this is the reason or if the causality goes the other way, but: it's worth noting that we didn't always have 8 general purpose bits. 7 bits + 1 parity bit or flag bit or something else was really common (enough so that e-mail to this day still uses quoted-printable [1] to encode octets with 7-bit bytes). A communication channel being able to transmit all 8 bits in a byte unchanged is called being 8-bit clean [2], and wasn't always a given.

    In a way, UTF-8 is just one of many good uses for that spare 8th bit in an ASCII byte...

    [1] https://en.wikipedia.org/wiki/Quoted-printable

    [2] https://en.wikipedia.org/wiki/8-bit_clean

    • ajross a day ago

      "Five characters in a 36 bit word" was a fairly common trick on pre-byte architectures too.

      • bear8642 a day ago

        5 characters?

        I thought it was normally six 6-bit characters?

        • mort96 a day ago

          The relevant Wikipedia page (https://en.wikipedia.org/wiki/36-bit_computing) indicates that 6x6 was the most common, but that 5x7 was sometimes used as well.

          ... However I'm not sure how much I trust it. It says that 5x7 was "the usual PDP-6/10 convention" and was called "five-seven ASCII", but I can't find the phrase "five-seven ASCII" anywhere on Google except for posts quoting that Wikipedia page. It cites two references, neither of which contain the phrase "five-seven ascii".

          Though one of the references (RFC 114, for FTP) corroborates that PDP-10 could use 5x7:

              [...] For example, if a
              PDP-10 receives data types A, A1, AE, or A7, it can store the
              ASCII characters five to a word (DEC-packed ASCII).  If the
              datatype is A8 or A9, it would store the characters four to a
              word.  Sixbit characters would be stored six to a word.
          
          To me, it seems like 5x7 was one of multiple conventions you could store character data in a PDP-10 (and probably other 36-bit machines), and Wikipedia hallucinated that the name for this convention is "five-seven ASCII". (For niche topics like this, I sometimes see authors just stating their own personal terminology for things as a fact; be sure to check sources!).
          • Agraillo 9 hours ago

            I like challenges like this. First, the edit that introduced the "five-seven ascii" is [1] (2010) by Pete142 with the explanation "add a name for the PDP-6/10 character-packing convention". The user Pete142 cites his web page www.pwilson.net, which no longer serves his content. Sure, it can be accessed with archive.org, and from the resume the earliest year mentioned is 1986 (MS-DOS/ASM/C drivers Technical Leader: ...). I suspect that he himself might have used the term when working, and probably this jargon word/phrase didn't survive into a reliable book or paper.

            [1] https://en.wikipedia.org/w/index.php?title=36-bit_computing&...

            • ajross 5 hours ago

              You do better with a search for "PDP-10 packed ascii". In point of fact the PDP-10 had explicit instructions for managing strings of 7-bit ascii characters like this.

          • bobmcnamara 15 hours ago

            I've run into 5-7 encoding in some ancient serial protocol. Layers of cruft.

        • ajross 16 hours ago

          That was true at the system level on ITS: file and command names were all 6-bit. But six bits doesn't leave space for important code points (like "lower case") needed for text processing. More practical stuff on the PDP-6/10 and pre-360 IBM played other tricks.

  • jasonwatkinspdx a day ago

    Not an expert but I happened to read about some of the history of this a while back.

    ASCII has its roots in teletype codes, which were a development from telegraph codes like Morse.

    Morse code is variable length, so this made automatic telegraph machines or teletypes awkward to implement. The solution was the 5 bit Baudot code. Using a fixed length code simplified the devices. Operators could type Baudot code using one hand on a 5 key keyboard. Part of the code's design was to minimize operator fatigue.

    Baudot code is why we refer to the symbol rate of modems and the like in Baud btw.

    Anyhow, the next change came with instead of telegraph machines directly signaling on the wire, instead a typewriter was used to create a punched tape of codepoints, which would be loaded into the telegraph machine for transmission. Since the keyboard was now decoupled from the wire code, there was more flexibility to add additional code points. This is where stuff like "Carriage Return" and "Line Feed" originate. This got standardized by Western Union and internationally.

    By the time we get to ASCII, teleprinters are common, and the early computer industry adopted punched cards pervasively as an input format. And they initially did the straightforward thing of just using the telegraph codes. But then someone at IBM came up with a new scheme that would be faster when using punch cards in sorting machines. And that became ASCII eventually.

    So zooming out here the story is that we started with binary codes, then adopted new schemes as technology developed. All this happened long before the digital computing world settled on 8 bit bytes as a convention. ASCII as bytes is just a practical compromise between the older teletype codes and the newer convention.

    • pcthrowaway a day ago

      > But then someone at IBM came up with a new scheme that would be faster when using punch cards in sorting machines. And that became ASCII eventually.

      Technically, the punch card processing technology was patented by inventor Herman Hollerith in 1884, and the company he founded wouldn't become IBM until 40 years later (though it was folded with 3 other companies into the Computing-Tabulating-Recording company in 1911, which would then become IBM in 1924).

      To be honest though, I'm not clear how ASCII came from anything used by the punch card sorting machines, since it wasn't proposed until 1961 (by an IBM engineer, but 32 years after Hollerith's death). Do you know where I can read more about the progression here?

      • jasonwatkinspdx 6 hours ago

        It's right there in the history section of the wiki page: https://en.wikipedia.org/wiki/ASCII#History

        > Work on the ASCII standard began in May 1961, when IBM engineer Bob Bemer submitted a proposal to the American Standards Association's (ASA) (now the American National Standards Institute or ANSI) X3.2 subcommittee.[7] The first edition of the standard was published in 1963,[8] contemporaneously with the introduction of the Teletype Model 33. It later underwent a major revision in 1967,[9][10] and several further revisions until 1986.[11] In contrast to earlier telegraph codes such as Baudot, ASCII was ordered for more convenient collation (especially alphabetical sorting of lists), and added controls for devices other than teleprinters.[11]

        Beyond that I think you'd have to dig up the old technical reports.

      • zokier a day ago

        IBM also notably used EBCDIC instead of ASCII for most of their systems

        • timsneath 19 hours ago

          And just for fun, they also support what must be the most weird encoding system -- UTF-EBCDIC (https://www.ibm.com/docs/en/i/7.5.0?topic=unicode-utf-ebcdic).

          • kstrauser 19 hours ago

            Post that stuff with a content warning, would you?

            > The base EBCDIC characters and control characters in UTF-EBCDIC are the same single byte codepoint as EBCDIC CCSID 1047 while all other characters are represented by multiple bytes where each byte is not one of the invariant EBCDIC characters. Therefore, legacy applications could simply ignore codepoints that are not recognized.

            Dear god.

            • necovek 18 hours ago

              That says roughly the following when applied to UTF-8:

              "The base ASCII characters and control characters in UTF-8 are the same single byte codepoint as ISO-8859-1 while all other characters are represented by multiple bytes where each byte is not one of the invariant ASCII characters. Therefore, legacy applications could simply ignore codepoints that are not recognized."

              (I know nothing of EBCDIC, but this seems to mirror UTF-8 design)

  • cryptonector 14 hours ago

    Fun fact: ASCII was a variable length encoding. No really! It was designed so that one could use overstrike to implement accents and umlauts, and also underline (which still works like that in terminals). I.e., á would be written a BS ' (or ' BS a), à would be written as a BS ` (or ` BS a), ö would be written o BS ", ø would be written as o BS /, ¢ would be written as c BS |, and so on and on. The typefaces were designed to make this possible.

    This lives on in compose key sequences, so instead of a BS ' one types compose-' a and so on.

    And this all predates ASCII: it's how people did accents and such on typewriters.

    This is also why Spanish used to not use accents on capitals, and still allows capitals to not have accents: that would require smaller capitals, but typewriters back then didn't have them.

  • layer8 a day ago

    The use of 8-bit extensions of ASCII (like the ISO 8859-x family) was ubiquitous for a few decades, and arguably still is to some extent on Windows (the standard Windows code pages). If ASCII had been 8-bit from the start, but with the most common characters all within the first 128 integers, which would seem likely as a design, then UTF-8 would still have worked out pretty well.

    The accident of history is less that ASCII happens to be 7 bits, but that the relevant phase of computer development happened to primarily occur in an English-speaking country, and that English text happens to be well representable with 7-bit units.

    • necovek 17 hours ago

      Most languages are well representable with 128 characters (7-bits) if you do not include English characters among those (eg. replace those 52 characters and some control/punctuation/symbols).

      This is easily proven by the success of all the ISO-8859-*, Windows and IBM CP-* encodings, and all the *SCII (ISCII, YUSCII...) extensions — they fit one or more languages in the upper 128 characters.

      It's mostly CJK out of large languages that fail to fit within 128 characters as a whole (though there are smaller languages too).

    • cryptonector 14 hours ago

      Many of the extended characters in ISO 8859-* can be implemented using pure ASCII with overstriking. ASCII was designed to support overstriking for this purpose. Overstriking was how one typed many of those characters on typewriters.

    • ezequiel-garzon 8 hours ago

      Before this happened, 7-bit ASCII variants based on ISO 646 were widely used.

  • toast0 a day ago

    7 bits isn't that odd. Baudot was 5 bits, and found insufficient, so 6-bit codes were developed; they were found insufficient, so 7-bit ASCII was developed.

    IBM had standardized 8-bit bytes on their System/360, so they developed the 8-bit EBCDIC encoding. Other computing vendors didn't have consistent byte lengths... 7-bits was weird, but characters didn't necessarily fit nicely into system words anyway.

    • Dylan16807 14 hours ago

      I don't really say this to disagree with you, but I feel weird about the phrasing "found insufficient", as if we reevaluated and said 'oops'.

      It's not like 5-bit codes forgot about numbers and 80% of punctuation, or like 6-bit codes forgot about having upper and lower case letters. They were clearly 'insufficient' for general text even as the tradeoff was being made, it's just that each bit cost so much we did it anyway.

      The obvious baseline by the time we were putting text into computers was to match a typewriter. That was easy to see coming. And the symbols on a typewriter take 7 bits to encode.

      • gugagore 9 hours ago

        Also, statefulness. Baudot has two codes used for switching into one of two modes: figures and letters.

        Typewriters have some statefulness, too, like "shift lock". Baudot needed to encode the actions of a typewriter to control it, not the output.

      • chuckadams 4 hours ago

        In fact, Baudot originally used a 6-bit code and later shortened it to 5.

  • colejohnson66 a day ago

    The idea was that the free bit would be repurposed, likely for parity.

    • layer8 a day ago

      When ASCII was invented, 36-bit computers were popular, which would fit five ASCII characters with just one unused bit per 36-bit word. Before, 6-bit character codes were used, where a 36-bit word could fit six of them.

    • KPGv2 a day ago

      This is not true. ASCII (technically US-ASCII) was a fixed-width encoding of 7 bits. There was no 8th bit reserved. You can read the original standard yourself here: https://ia600401.us.archive.org/23/items/enf-ascii-1968-1970...

      Crucially, "the 7-bit coded character set" is described on page 6 using only seven total bits (1-indexed, so don't get confused when you see b7 in the chart!).

      There is an encoding mechanism to use 8 bits, but it's for storage on a type of magnetic tape, and even that still is silent on the 8th bit being repurposed. It's likely, given the lack of discussion about it, that it was for ergonomic or technical purposes related to the medium (8 is a power of 2) rather than for future extensibility.

      • kbolino a day ago

        Notably, it is mentioned that the 7-bit code is developed "in anticipation of" ISO requesting such a code, and we see in the addenda attached at the end of the document that ISO began to develop 8-bit codes extending the base 7-bit code shortly after it was published.

        So, it seems that ASCII was kept to 7 bits primarily so "extended ASCII" sets could exist, with additional characters for various purposes (such as other languages, but also for things like mathematical symbols).

      • zokier a day ago

        Mackenzie claims that parity was an explicit concern in selecting a 7-bit code for ASCII. He cites the X3.2 subcommittee, although he does not provide references to exactly which document; but considering that he was a member of those committees (as far as I can tell) I would put some weight on his word.

        https://hcs64.com/files/Mackenzie%20-%20Coded%20Character%20... sections 13.6 and 13.7

    • EGreg a day ago

      I would love to think this is true, and it makes sense, but do you have any actual evidence for this you could share with HN?

  • KPGv2 a day ago

    Historical luck. Though "luck" is probably pushing it in the way one might say certain math proofs are historically "lucky" based on previous work. It's more an almost natural consequence.

    Before ASCII there was BCDIC, which was six bits and non-standardized (there were variants, just like technically there are a number of ASCII variants, with the common just referred to as ASCII these days).

    BCDIC was the capital English letters plus common punctuation plus numbers. 2^6 is 64, and for capital letters + numbers, you have 36; a few common punctuation marks puts you around 50. IIRC the original by IBM was around 45 or something: slash, period, comma, etc.

    So when there was a decision to support lowercase, they added a bit because that's all that was necessary, and I think the printers around at the time couldn't print anything but something less than 128 characters anyway. There wasn't any ó or ö or anything printable, so why support it?

    But eventually that yielded to 8-bit encodings (various ASCIIs like latin-1 extended, etc. that had ñ etc.).

    Crucially, UTF-8 is only compatible with the 7-bit ASCII. All those 8-bit ASCIIs are incompatible with UTF-8 because they use the eighth bit.

  • michaelsshaw a day ago

    I'm not sure, but it does seem like a great bit of historical foresight. It stands as a lesson to anyone standardizing something: wanna use a 32 bit integer? Make it 31 bits. Just in case. Obviously, this isn't always applicable (e.g. sizes, etc..), but the idea of leaving even the smallest amount of space for future extensibility is crucial.

vintermann a day ago

UTF-8 is as good as a design as could be expected, but Unicode has scope creep issues. What should be in Unicode?

Coming at it naively, people might think the scope is something like "all sufficiently widespread distinct, discrete glyphs used by humans for communication that can be printed". But that's not true, because

* It's not discrete. Some code points are for combining with other code points.

* It's not distinct. Some glyphs can be written in multiple ways. Some glyphs which (almost?) always display the same, have different code points and meanings.

* It's not all printable. Control characters are in there - they pretty much had to be due to compatibility with ASCII, but they've added plenty of their own.

I'm not aware of any Unicode code points that are animated - at least what's printable, is printable on paper and not just on screen, there are no marquee or blink control characters, thank God. But, who knows when that invariant will fall too.

By the way, I know of one UTF encoding the author didn't mention, UTF-7. Like UTF-8, but assuming that the highest bit wasn't safe to use (apparently a sensible precaution over networks in the 80s). My boss managed to send me a mail encoded in UTF-7 once; that's how I know what it is. I don't know how he managed to send it, though.

  • Cloudef 19 hours ago

    Indeed, one pain point of unicode is CJK unification. https://heistak.github.io/your-code-displays-japanese-wrong/

    • asddubs 10 hours ago

      the fact that there is seemingly no interest in fixing this, and if you want chinese and japanese in the same document, you're just fucked, forever, is crazy to me.

      They should add separate code points for each variant and at least make it possible to avoid the problem in new documents. I've heard the arguments against this before, but the longer you wait, the worse the problem gets.

      • meindnoch 2 hours ago

        What happens if you want both single-storey "a" and double-storey "a" in the same document? You use a different font.

        • eviks 2 hours ago

          Some fonts allow for both alternatives in them

      • eviks 2 hours ago

        Why is the language tag not used to signal a variant?

      • Cloudef 4 hours ago

        Afaik there are some language hints nowadays, but it's kind of a hack.

  • cryptonector 6 hours ago

    > * It's not discrete. Some code points are for combining with other code points.

    This isn't "scope creep". It's a reflection of reality. People were already constructing compositions like this is real life. The normalization problem was unavoidable.

  • pornel 17 hours ago

    Unicode wanted the ability to losslessly round-trip every other encoding, in order to be easy to partially adopt in a world where other encodings were still in use. It merged a bunch of different incomplete encodings that used competing approaches. That's why there are multiple ways of encoding the same characters, and there's no overall consistency to it. It's hard to say whether that was a mistake. This level of interoperability may have been necessary for Unicode to actually win, and not be another episode of https://xkcd.com/927

    • panpog 2 hours ago

      Why did Unicode want codepointwise round-tripping? One codepoint in a legacy encoding becoming two in Unicode doesn't seem like it should have been a problem. In other words, why include precomposed characters in Unicode?

  • syncsynchalt 21 hours ago

    UTF-7 is/was mostly for email, which is not an 8-bit clean transport. It is obsolete and can't encode supplemental planes (except via surrogate pairs, which were meant for UTF-16).

    There is also UTF-9, from an April Fools RFC, meant for use on hosts with 36-bit words such as the PDP-10.

    • syncsynchalt 19 hours ago

      I meant to specify, the aim of UTF-7 is better performed by using UTF-8 with `Content-Transfer-Encoding: quoted-printable`

hyperman1 a day ago

One thing I always wonder: it is possible to encode a Unicode codepoint with too many bytes. UTF-8 forbids these; only the shortest one is valid. E.g. 00000001 is the same as 11000000 10000001.

So why not make the alternatives impossible by adding an offset equal to the first codepoint not covered by the shorter sequences? Then 11000000 10000001 would give codepoint 128+1, as values 0 to 127 are already covered by a 1-byte sequence.

The advantages are clear: No illegal codes, and a slightly shorter string for edge cases. I presume the designers thought about this, so what were the disadvantages? The required addition being an unacceptable hardware cost at the time?

UPDATE: Last bitsequence should of course be 10000001 and not 00000001. Sorry for that. Fixed it.

  • toast0 a day ago

    The siblings so far talk about the synchronizing nature of the indicators, but that's not relevant to your question. Your question is more of

    Why is U+0080 encoded as c2 80, instead of c0 80, which is the lowest sequence after 7f?

    I suspect the answer is

    a) the security impacts of overlong encodings were not contemplated; lots of fun to be had there if something accepts overlong encodings but is scanning for things with only shortest encodings

    b) utf-8 as standardized allows for encode and decode with bitmask and bitshift only. Your proposed encoding requires bitmask and bitshift, in addition to addition and subtraction
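
    To make (b) concrete, a minimal sketch of decoding a two-byte sequence under both schemes (illustrative only, not from any implementation):

        #include <stdint.h>

        /* Standard UTF-8: pure mask and shift. */
        static uint32_t decode2_utf8(uint8_t b0, uint8_t b1) {
            return ((uint32_t)(b0 & 0x1F) << 6) | (uint32_t)(b1 & 0x3F);
        }

        /* The proposed overlong-free variant: same bits, plus a bias of 0x80, so
           c0 80 means U+0080, c0 81 means U+0081, and so on. Each sequence length
           now needs its own additive constant on decode (and a subtraction on encode). */
        static uint32_t decode2_biased(uint8_t b0, uint8_t b1) {
            return (((uint32_t)(b0 & 0x1F) << 6) | (uint32_t)(b1 & 0x3F)) + 0x80;
        }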

    You can find a bit of email discussion from 1992 here [1] ... at the very bottom there's some notes about what became utf-8:

    > 1. The 2 byte sequence has 2^11 codes, yet only 2^11-2^7 are allowed. The codes in the range 0-7f are illegal. I think this is preferable to a pile of magic additive constants for no real benefit. Similar comment applies to all of the longer sequences.

    The included FSS-UTF that's right before the note does include additive constants.

    [1] https://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt

    • hyperman1 a day ago

      Oops yeah. One of my bit sequences is of course wrong and seems to have derailed this discussion. Sorry for that. Your interpretation is correct.

      I've seen the first part of that mail, but your version is a lot longer. It is indeed quite convincing in declaring b) moot. And security was not that big of a thing then as it is now, so you're probably right.

    • layer8 a day ago

      A variation of a) is comparing strings as raw UTF-8 byte sequences while overlong encodings are also accepted (at an earlier and/or later processing step). This leads to situations where strings that compare as unequal are actually equal in terms of code points.

      • torstenvl a day ago

        Ehhh I view things slightly differently. Overlong encodings are per se illegal, so they cannot encode code points, even if a naive algorithm would consistently interpret them as such.

        I get what you mean, in terms of Postel's Law, e.g., software that is liberal in what it accepts should view 01001000 01100101 01101010 01101010 01101111 as equivalent to 11000001 10001000 11000001 10100101 11000001 10101010 11000001 10101010 11000001 10101111, despite the sequence not being byte-for-byte identical. I'm just not convinced Postel's Law should be applied wrt UTF-8 code units.

        • layer8 a day ago

          The context of my comment was (emphasis mine): “lots of fun to be had there if something accepts overlong encodings but is scanning for things with only shortest encodings”.

          Yes, software shouldn’t accept overlong encodings, and I was pointing out another bad thing that can happen with software that does accept overlong encodings, thereby reinforcing the advice to not accept them.

  • nostrademons a day ago

    I assume you mean "11000000 10000001" to preserve the property that all continuation bytes start with "10"? [Edit: looks like you edited that in]. Without that property, UTF-8 loses self-synchronicity, the property that given a truncated UTF-8 stream, you can always find the codepoint boundaries, and will lose at most one codepoint's worth of data rather than having the whole stream be garbled.

    In theory you could do it that way, but it comes at the cost of decoder performance. With UTF-8, you can reassemble a codepoint from a stream using only fast bitwise operations (&, |, and <<). If you declared that you had to subtract the legal codepoints represented by shorter sequences, you'd have to introduce additional arithmetic operations in encoding and decoding.
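
    A tiny sketch of what that self-synchronization buys you in practice (my illustration, in Python):

        def codepoint_start(buf: bytes, i: int) -> int:
            # back up over continuation bytes (0b10xxxxxx) to the lead byte
            while i > 0 and (buf[i] & 0xC0) == 0x80:
                i -= 1
            return i

        s = "héllo".encode("utf-8")   # b'h\xc3\xa9llo'
        codepoint_start(s, 2)          # -> 1, the lead byte of 'é'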

  • rhet0rica a day ago

    See quectophoton's comment—the requirement that continuation bytes are always tagged with a leading 10 is useful if a parser is jumping in at a random offset—or, more commonly, if the text stream gets fragmented. This was actually a major concern when UTF-8 was devised in the early 90s, as transmission was much less reliable than it is today.

  • gpvos 21 hours ago

    That would make the calculations more complicated and a little slower. Now you can do a few quick bit shifts. This was more of an issue back in the '90s when UTF-8 was designed and computers were slower.

  • umanwizard a day ago

    Because then it would be impossible to tell from looking at a byte whether it is the beginning of a character or not, which is a useful property of UTF-8.

  • rightbyte a day ago

    I think that would garble random access?

happytoexplain a day ago

I have a love-hate relationship with backwards compatibility. I hate the mess - I love when an entity in a position of power is willing to break things in the name of advancement. But I also love the cleverness - UTF-8, UTF-16, EAN, etc. To be fair, UTF-8 sacrifices almost nothing to achieve backwards compat though.

  • amluto a day ago

    > To be fair, UTF-8 sacrifices almost nothing to achieve backwards compat though.

    It sacrifices the ability to encode more than 21 bits, which I believe was done for compatibility with UTF-16: UTF-16’s awful “surrogate” mechanism can only express code points up to U+10FFFF.

    I hope we don’t regret this limitation some day. I’m not aware of any other material reason to disallow longer UTF-8 sequences.

    • mort96 a day ago

      That isn't really a case of UTF-8 sacrificing anything to be compatible with UTF-16. It's Unicode, not UTF-8 that made the sacrifice: Unicode is limited to 21 bits due to UTF-16. The UTF-8 design trivially extends to support 6 byte long sequences supporting up to 31-bit numbers. But why would UTF-8, a Unicode character encoding, support code points which Unicode has promised will never and can never exist?
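
      For reference, the surrogate arithmetic behind that 21-bit cap, as a quick Python sketch (my own illustration):

          def utf16_surrogate_pair(cp):
              # code points above the BMP carry 20 bits on top of a 0x10000 offset
              assert 0x10000 <= cp <= 0x10FFFF
              v = cp - 0x10000
              return 0xD800 + (v >> 10), 0xDC00 + (v & 0x3FF)

          # largest reachable code point: 0x10000 + (2**20 - 1) == 0x10FFFF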

      • MyOutfitIsVague a day ago

        In an ideal future (read: fantasy), utf-16 gets formally deprecated and trashed, freeing the surrogate sequences and full range for utf-8.

        Or utf-16 is officially considered a second class citizen, and some code points are simply out of its reach.

      • GuB-42 16 hours ago

        Is 21 bits really a sacrifice? It is 2 million codepoints; we currently use about a tenth of that.

        Even with all Chinese characters, de-unified, all the notable historical and constructed scripts, technical symbols, and all the submitted emoji, including rejections, you are still way short of a million.

        We will probably never need more than 21 bits unless we start stretching the definition of what text is.

        • moefh 15 hours ago

          It's not 2 million, it's a little over 1 million.

          The exact number is 1112064 = 2^16 - 2048 + 16*2^16: in UTF-16, 2 bytes can encode 2^16 - 2048 code points, and 4 bytes can encode 16*2^16 (the 2048 surrogates are not counted because they can never appear by themselves, they're used purely for UTF-16 encoding).
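
          The arithmetic checks out, e.g. in Python:

              assert (2**16 - 2048) + 16 * 2**16 == 1_112_064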

          • chuckadams 4 hours ago

            Even with just 1 million codepoints, why did they feel the need for CJK unification? Was it so it would all fit in UCS-2 or something?

    • throw0101d a day ago

      > It sacrifices the ability to encode more than 21 bits, which I believe was done for compatibility with UTF-16: UTF-16’s awful “surrogate” mechanism can only express code points up to U+10FFFF

      Yes, it is 'truncated' to the "UTF-16 accessible range":

      * https://datatracker.ietf.org/doc/html/rfc3629#section-3

      * https://en.wikipedia.org/wiki/UTF-8#History

      Thompson's original design could handle up to six octets for each letter/symbol, with 31 bits of space:

      * https://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt

      • gpvos 21 hours ago

        You could even extend UTF-8 to make 0xFE and 0xFF valid starting bytes, with 6 and 7 following bytes each, and get 42 bits of space. I seem to remember Perl allowed that for a while in its v-strings notation.

        Edit: just tested this, Perl still allows this, but with an extra twist: v-notation goes up to 2^63-1. From 2^31 to 2^36-1 is encoded as FE + 6 bytes, and everything above that is encoded as FF + 12 bytes; the largest value it allows is v9223372036854775807, which is encoded as FF 80 87 BF BF BF BF BF BF BF BF BF BF. It probably doesn't allow that one extra bit because v-notation doesn't work with negative integers.

    • cryptonector 14 hours ago

      > It sacrifices the ability to encode more than 21 bits

      No, UTF-8's design can encode up to 31 bits of codepoints. The limitation to 21 bits comes from UTF-16, which was then adopted for UTF-8 too. When UTF-16 dies we'll be able to extend UTF-8 (well, compatibility will be a problem).

    • layer8 a day ago

      That limitation will be trivial to lift once UTF-16 compatibility can be disregarded. This won’t happen soon, of course, given JavaScript and Windows, but the situation might be different in a hundred or thousand years. Until then, we still have a lot of unassigned code points.

      In addition, it would be possible to nest another surrogate-character-like scheme into UTF-16 to support a larger character set.

    • Analemma_ a day ago

      It's always dangerous to stick one's neck out and say "[this many bits] ought to be enough for anybody", but I think it's very unlikely we'll ever run out of UTF-8 sequences. UTF-8 can represent about 1.1 million code points, of which we've assigned about 160,000 actual characters, plus another ~140,000 in the Private Use Area, which won't expand. And that's after covering nearly all of the world's known writing systems: the last several Unicode updates have added a few thousand characters here and there for very obscure and/or ancient writing systems, but those won't go on forever (and emoji typically get only a handful of new code points per update, because most new emoji are existing code points with combining characters).

      If I had to guess, I'd say we'll run out of IPv6 addresses before we run out of unassigned UTF-8 sequences.

      • lyu07282 16 hours ago

        The oldest script in Unicode, Sumerian cuneiform, is ~5,200 years old. If we were to invent new scripts at the same rate, we would hit 1.1 million code points in around 31,000 years. So yeah, nothing to worry about, you are absolutely right. Unless we join some intergalactic federation of planets, although they probably already have their own encoding standards we could just adopt.

    • 1oooqooq a day ago

      the limitation tomorrow will be today's implementations, sadly.

  • procaryote a day ago

    > I love when an entity in a position of power is willing to break things in the name of advancement.

    It's less fun when things that need to keep working break because someone felt like renaming a parameter, or decided that a part of the standard library looks "untidy"

    • happytoexplain a day ago

      I agree! And yet I lovingly sacrifice my man-hours to it when I decide to bump that major version number in my dependency manifest.

      • procaryote 13 hours ago

        Or minor versions of python...

        Honestly python is probably one of the worst offenders here: they combine happily making breaking changes for low-value rearranging of deck chairs with a dynamic language where you might only find out at runtime.

        The fact that they've also decided to use an unconventional interpretation of minor versions shows how little they care.

        • chuckadams 4 hours ago

          The term "semantic versioning" didn't even exist until 2010, which is well after the birth of Python. Sure, it semi-formalized a convention from long before, but it was hardly universal.

  • cryptonector 14 hours ago

    > To be fair, UTF-8 sacrifices almost nothing to achieve backwards compat though.

    There were apps that completely rejected non-7-bit data back in the day. Backwards compatibility wasn't the only point. The point of UTF-8 is more (IMO) that UTF-32 is too bulky, UCS-2 was insufficient, UTF-16 was an abortion, and only UTF-8 could have the right trade-offs.

  • mort96 a day ago

    Yeah I honestly don't know what I would change. Maybe replace some of the control characters with more common characters to save a tiny bit of space, if we were to go completely wild and break Unicode backward compatibility too. As a generic multi byte character encoding format, it seems completely optimal even in isolation.

alberth a day ago

I’ve re-read Joel’s article on Unicode so many times. It’s also very helpful.

https://www.joelonsoftware.com/2003/10/08/the-absolute-minim...

  • mixmastamyk 19 hours ago

    Read that a few times back then as well, but that and other pieces of the day never told you how to actually write a program that supported Unicode. Just facts about it.

    So I went around fixing UnicodeErrors in Python at random, for years, despite knowing all that stuff. It wasn't until I read Batchelder's piece on the "Unicode Sandwich," about a decade later that I finally learned how to write a program to support it properly, rather than playing whack-a-mole.

modeless 20 hours ago

UTF-8 is great and I wish everything used it (looking at you JavaScript). But it does have a wart in that there are byte sequences which are invalid UTF-8 and how to interpret them is undefined. I think a perfect design would define exactly how to interpret every possible byte sequence even if nominally "invalid". This is how the HTML5 spec works and it's been phenomenally successful.

  • cryptonector 6 hours ago

    > But it does have a wart in that there are byte sequences which are invalid UTF-8 and how to interpret them is undefined.

    This is not a wart. And how to interpret them is not undefined -- you're just not allowed to interpret them as _characters_.

    There is right now a discussion[0] about adding a garbage-in/garbage-out mode to jq/jaq/etc that allows them to read and output JSON with invalid UTF-8 strings representing binary data in a way that round-trips. I'm not for making that the default for jq, and you have to be very careful about this to make sure that all the tools you use to handle such "JSON" round-trip the data. But the clever thing is that the proposed changes indeed do not interpret invalid byte sequences as character data, so they stay within the bounds of Unicode as long as your terminal (if these binary strings end up there) and other tools also do the same.

    [0] https://github.com/01mf02/jaq/issues/309

  • moefh 14 hours ago

    > This is how the HTML5 spec works and it's been phenomenally successful.

    Unicode does have a well-defined way to handle invalid UTF-8 byte sequences: replace them with U+FFFD ("replacement character"). You'll see it used (for example) in browsers all the time.

    Mandating acceptance for every invalid input works well for HTML because it's meant to be consumed (primarily) by humans. It's not done for UTF-8 because in some situations it's much more useful to detect and report errors instead of making an automatic correction that can't be automatically detected after the fact.
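
    Both behaviors are just decode-time policies; Python, for example, exposes both:

        data = b"abc \xff def"                  # not valid UTF-8
        data.decode("utf-8", errors="replace")  # -> 'abc \ufffd def' (browser-style)
        data.decode("utf-8")                    # -> raises UnicodeDecodeError (strict)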

  • ekidd 20 hours ago

    For security reasons, the correct answer on how to process invalid UTF-8 is (and needs to be) "throw away the data like it's radioactive, and return an error." Otherwise you leave yourself wide open to validation bypass attacks at many layers of your stack.

    • modeless 20 hours ago

      This is only true because the interpretation is not defined, so different implementations do different things.

      • cryptonector 6 hours ago

        That's not true. You're just not allowed to interpret them as characters.

twbarr a day ago

It should be noted that the final design for UTF-8 was sketched out on a placemat by Rob Pike and Ken Thompson.

  • hu3 a day ago

    I wonder if that placemat still exists today. It would be such an important piece of computer history.

fleebee a day ago

If you want to delve deeper into this topic and like the Advent of Code format, you're in luck: i18n-puzzles[1] has a bunch of puzzles related to text encoding that drill how UTF-8 (and other variants such as UTF-16) work into your brain.

[1]: https://i18n-puzzles.com/

3pt14159 a day ago

I remember a time before UTF-8's ubiquity. It was such a headache moving to i18n. I love UTF-8.

  • linguae a day ago

    I remember learning Japanese in the early 2000s and the fun of dealing with multiple encodings for the same language: JIS, Shift-JIS, and EUC. As late as 2011 I had to deal with processing a dataset encoded under EUC in Python 2 for a graduate-level machine learning course where I worked on a project for segmenting Japanese sentences (typically there are no spaces in Japanese sentences).

    UTF-8 made processing Japanese text much easier! No more needing to manually change encoding options in my browser! No more mojibake!

    • pezezin 13 hours ago

      I live in Japan and I still receive the random email or work document encoded in Shit-JIS. Mojibake is not as common as it once was, but still a problem.

      • rmunn 11 hours ago

        I'm assuming you misspelled Shift-JIS on purpose because you're sick and tired of dealing with it. If that was an accidental misspelling, it was inspired. :-)

  • acdha 19 hours ago

    I worked on a site in the late 90s which had news in several Asian languages, including both simplified and traditional Chinese. We had a partner in Hong Kong sending articles and being a stereotypical monolingual American I took them at their word that they were sending us simplified Chinese and had it loaded into our PHP app which dutifully served it with that encoding. It was clearly Chinese so I figured we had that feed working.

    A couple of days later, I got an email from someone explaining that it was gibberish — apparently our content partner who claimed to be sending GB2312 simplified Chinese was in fact sending us Big5 traditional Chinese so while many of the byte values mapped to valid characters it was nonsensical.

  • glxxyz a day ago

    I worked on an email client. Many many character set headaches.

senfiaj 4 hours ago

UTF-8 is a nice extension of ASCII from the compatibility point of view, but it may not be the most compact encoding, especially if the text is not English-like. Also, the variable character length makes it inconvenient to work with strings unless they are parsed into (and saved back from) a 2- or 4-byte char array.

gnufx 8 hours ago

It's worth noting that Stallman had earlier proposed a design for Emacs "to handle all the world's alphabets and word signs" with similar requirements to UTF-8. That was the etc/CHARACTERS file in Emacs 18.59 (1990). The eventual international support implemented in Emacs 20's MULE was based on ISO-2022, which was a reasonable choice at the time, based on earlier Japanese work. (There was actually enough space in the MULE encoding to add UTF-8, but the implementation was always going to be inefficient with the number of bytes at the top of the code space.)

Edit: see https://raw.githubusercontent.com/tsutsui/emacs-18.59-netbsd...

dotslashmain a day ago

Rob Pike and Ken Thompson are brilliant computer scientists & engineers.

wrp 19 hours ago

I need to call out a myth about UTF-8. Tools built to assume UTF-8 are not backwards compatible with ASCII. An encoding INCLUDES but also EXCLUDES. When a tool is set to use UTF-8, it will process an ASCII stream, but it will not filter out non-ASCII.

I still use some tools that assume ASCII input. For many years now, Linux tools have been removing the ability to specify default ASCII, leaving UTF-8 as the only relevant choice. This has caused me extra work, because if the data processing chain goes through these tools, I have to manually inspect the data for non-ASCII noise that has been introduced. I mostly use those older tools on Windows now, because most Windows tools still allow you to set default ASCII.

  • kccqzy 5 hours ago

    That's not a myth about UTF-8. That's a decision by tools not to support pure ASCII.

bruce511 a day ago

While the backward compatibility of utf-8 is nice, and makes adoption much easier, the backward compatibility does not come at any cost to the elegance of the encoding.

In other words, yes it's backward compatible, but UTF-8 is also compact and elegant even without that.

  • nextaccountic a day ago

    UTF-8 also enables this mindblowing design for small string optimization - if the string has 24 bytes or less it is stored inline, otherwise it is stored on the heap (with a pointer, a length, and a capacity - also 24 bytes)

    https://github.com/ParkMyCar/compact_str

    How cool is that

    (Discussed here https://news.ycombinator.com/item?id=41339224)

    • adgjlsfhk1 a day ago

      How is that UTF8 specific?

      • ubitaco a day ago

        It's slightly buried in the readme on Github:

        > how can we store a 24 byte long string, inline? Don't we also need to store the length somewhere?

        > To do this, we utilize the fact that the last byte of our string could only ever have a value in the range [0, 192). We know this because all strings in Rust are valid UTF-8, and the only valid byte pattern for the last byte of a UTF-8 character (and thus the possible last byte of a string) is 0b0XXXXXXX aka [0, 128) or 0b10XXXXXX aka [128, 192)
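
        A rough sketch of how such a discriminant can work (my illustration in Python; the crate's actual layout differs in its details):

            def pack_inline(s: bytes) -> bytearray:
                # s must be valid UTF-8 and at most 24 bytes long
                buf = bytearray(24)
                buf[:len(s)] = s
                if len(s) < 24:
                    buf[23] = 0xC0 | len(s)  # >= 0xC0 is impossible as a final UTF-8 byte
                return buf                   # a full 24-byte string keeps its real last byte (< 0xC0)

            def inline_len(buf: bytearray) -> int:
                return buf[23] & 0x3F if buf[23] >= 0xC0 else 24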

Dwedit a day ago

Meanwhile Shift-JIS has a bad design, since the second byte of a character can be any byte in 0x40-0x7E or 0x80-0xFC, which overlaps the ASCII range. This includes brackets, backslash, caret, backquote, curly braces, pipe, and tilde. This can cause a path separator or math operator to appear in text that is encoded as Shift-JIS but interpreted as plain ASCII.

UTF-8 basically learned from the mistakes of previous encodings which allowed that kind of thing.
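
A concrete example (Python used just to poke at the codec): 表 is one of the classic "0x5C problem" characters, whose second Shift-JIS byte is an ASCII backslash.

    "表".encode("shift_jis")         # -> b'\x95\\', i.e. 0x95 0x5C
    b"\x95\x5c".decode("shift_jis")  # -> '表'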

danso 6 hours ago

Love reading explorations of structures and technical phenomena that are basically the digital equivalent of oxygen in their ubiquity and in how we take them for granted

billforsternz a day ago

A little off topic but amidst a lot of discussion of UTF-8 and its ASCII compatibility property I'm going to mention my one gripe with ASCII, something I never see anyone talking about, something I've never talked about before: The damn 0x7f character. Such an annoying anomaly in every conceivable way. It would be much better if it was some other proper printable punctuation or punctuation adjacent character. A copyright character. Or a pi character or just about anything other than what it already is. I have been programming and studying packet dumps long enough that I can basically convert hex to ASCII and vice versa in my head but I still recoil at this anomalous character (DELETE? is that what I should call it?) every time.

  • kragen a day ago

    Much better in every way except the one that mattered most: being able to correct punching errors in a paper tape without starting over.

    I don't know if you have ever had to use White-Out to correct typing errors on a typewriter that lacked the ability natively, but before White-Out, the only option was to start typing the letter again, from the beginning.

    0x7f was White-Out for punched paper tape: it allowed you to strike out an incorrectly punched character so that the message, when it was sent, would print correctly. ASCII inherited it from the Baudot–Murray code.

    It's been obsolete since people started punching their tapes on computers instead of Teletypes and Flexowriters, so around 01975, and maybe before; I don't know if there was a paper-tape equivalent of a duplicating keypunch, but that would seem to eliminate the need for the delete character. Certainly TECO and cheap microcomputers did.

  • Agraillo 10 hours ago

    Related: Why is there a “small house” in IBM's Code page 437? (glyphdrawing.club) [1]. There are other interesting articles mentioned in the discussion. m_walden probably would comment here himself

    [1] https://news.ycombinator.com/item?id=43667010

blindriver a day ago

It took time for UTF-8 to make sense. Struggling with how large everything was was a real problem just after the turn of the century. Today it makes more sense because capacity and compute power is much greater but back then it was a huge pain in the ass.

Mikhail_Edoshin 13 hours ago

One aspect of Unicode that is probably not obvious is that with Unicode it is possible to keep using old encodings just fine. You can always get their Unicode equivalents, this is what Unicode was about. Otherwise just keep the data as is, tagged with the encoding. This nicely extends to filesystem "encodings" too.

z_open 2 hours ago

kill Unicode. Done with this after these 25 byte single characters.

zamalek a day ago

Even for varints (you could probably drop the intermediate prefixes for that). There are many examples of using SIMD to decode utf-8, whereas the more common protobuf scheme is known to be hostile to SIMD and the branch predictor.

  • camel-cdr 9 hours ago

    Yeah, protobuf's varint are quite hard to decode with current SIMD instructions, but it would be quite easy, if we get element wise pext/pdep instructions in the future. (SVE2 already has those, but who has SVE2?)

drpixie 12 hours ago

UTF-8 is a neat way of encoding 1M+ code points in 8 bit bytes, and including 7 bit ASCII. If only unicode were as neat - sigh. I guess it's way too late to flip unicode versions and start again avoiding the weirdness.

sawyna a day ago

I have always wondered - what if the utf-8 space is filled up? Does it automatically promote to having a 5th byte? Is that part of the spec? Or are we then talking about utf-16?

  • vishnuharidas a day ago

    UTF-8 can represent up to 1,114,112 code points in Unicode. And in Unicode 15.1 (2023, https://www.unicode.org/versions/Unicode15.1.0/) a total of 149,813 characters are included, which covers most of the world's languages, scripts, and emojis. That leaves roughly 960K code points for future expansion.

    So, it won't fill up during our lifetime I guess.

    • jaza 20 hours ago

      I wouldn't be too quick to jump to that conclusion, we could easily shove another 960k emojis into the spec!

      • BeFlatXIII 5 hours ago

        Black Santa with 1 freckle, Black Santa with 2 freckles…

    • unnouinceput 12 hours ago

      Wait until we get to know another species; then we will not just fill that Unicode space, but we will ditch any UTF-16 compatibility so fast it will make your head spin on a swivel.

      Imagine the code points we'll need to represent an alien culture :).

  • crazygringo 18 hours ago

    Nothing is automatic.

    If we ever needed that many characters, yes the most obvious solution would be a fifth byte. The standard would need to be explicitly extended though.

    But that would probably require having encountered literate extraterrestrial species to collect enough new alphabets to fill up all the available code points first. So seems like it would be a pretty cool problem to have.

  • kzrdude a day ago

    UTF-8 is just an encoding of Unicode. It is specified in a way so that it can encode all Unicode codepoints up to 0x10FFFF. It doesn't extend further. And UTF-16 also encodes Unicode in a similar way; it doesn't encode anything more.

    So what would need to happen first would be that unicode decides they are going to include larger codepoints. Then UTF-8 would need to be extended to handle encoding them. (But I don't think that will happen.)

    It seems like Unicode codepoints are less than 30% allocated, roughly. So there's 70% free space.

    ---

    Think of these three separate concepts to make it clear. We are effectively dealing with two translations - one from the abstract symbol to defined unicode code point. Then from that code point we use UTF-8 to encode it into bytes.

    1. The glyph or symbol ("A")

    2. The unicode code point for the symbol (U+0041 Latin Capital Letter A)

    3. The utf-8 encoding of the code point, as bytes (0x41)

    • duskwuff 20 hours ago

      As an aside: UTF-8, as originally specified in RFC 2279, could encode codepoints up to U+7FFFFFFF (using sequences of up to six bytes). It was later restricted to U+10FFFF to ensure compatibility with UTF-16.

betimsl 12 hours ago

The story is that Ken and Rob were at a diner when Ken gave structure to it and wrote the initial encode/decode functions on napkins. UTF-8 is so simple yet it required a complex mind to do it.

fmajid a day ago

Well, yes, Ken Thompson, the father of Unix, is behind it.

Mikhail_Edoshin 16 hours ago

I once saw a good byte encoding for Unicode: 7 bit for data, 1 for continuation/stop. This gives 21 bit for data, which is enough for the whole range. ASCII compatible, at most 3 bytes per character. Very simple: the description is sufficient to implement it.
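
A sketch of that scheme, with my own guesses for the details the description leaves open (high bit set = more bytes follow, most significant group first):

    def encode_vlq(cp: int) -> bytes:
        out = [cp & 0x7F]                    # final byte: high bit clear
        cp >>= 7
        while cp:
            out.append(0x80 | (cp & 0x7F))   # continuation byte: high bit set
            cp >>= 7
        return bytes(reversed(out))

    encode_vlq(0x41)      # b'A', ASCII passes through unchanged
    encode_vlq(0x10FFFF)  # b'\xc3\xff\x7f', 3 bytes cover the full range

One trade-off versus UTF-8: a byte with the high bit set could be either the first or a middle byte of a sequence, so lead bytes aren't distinguishable from continuation bytes, and overlong encodings are possible.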

  • rmunn 11 hours ago

    Probably a good idea, but when UTF-8 was designed the Unicode committee had not yet made the mistake of limiting the character range to 21 bits. (Going into why it's a mistake would make this comment longer than it's worth, so I'll only expound on it if anyone asks me to). And at this point it would be a bad idea to switch away from the format that is now, finally, used in over 99% of all documents online. The gain would be small (not zero, but small) and the cost would be immense.

dpc_01234 a day ago

UTF-8 is undeniably a good answer, but to a relatively simple bit-twiddling / variable-length integer encoding problem in a somewhat specific context.

I realize that hindsight is 20/20, and times were different, but let's face it: "how to use an unused top bit to best encode larger numbers representing Unicode" is not that much of a challenge, and the space of practical solutions isn't even all that large.

  • Tuna-Fish a day ago

    Except that there were many different solutions before UTF-8, all of which sucked really badly.

    UTF-8 is the best kind of brilliant. After you've seen it, you (and I) think of it as obvious, and clearly the solution any reasonable engineer would come up with. Except that it took a long time for it to be created.

  • ivanjermakov a day ago

    I just realised that all Latin text is wasting 12% of storage/memory/bandwidth with the MSB always zero. At least it compresses well. Is there any technology that utilizes the 8th bit for something useful, e.g. error checking?

nottorp 10 hours ago

Hmm, I count at most 21 bits. Just 2 million code points.

Is that all Unicode can do? How are they going to fit all the emojis in?

  • danhau 9 hours ago

    The max code point in Unicode is 0x10FFFF. ceil(log2(0x10FFFF+1)) = 21. So yes, a Unicode codepoint requires only 21 bits.

    297334 codepoints have been assigned so far; that's about 1/4 of the available range, if my napkin math is right. Plenty of room for more emoji.

mikelabatt 21 hours ago

Nice article, thank you. I love UTF-8, but I only advocate it when used with a BOM. Otherwise, an application may have no way of knowing that it is UTF-8, and that it needs to be saved as UTF-8.

Imagine selecting New/Text Document in an environment like File Explorer on Windows: if the initial (empty) file has a BOM, any app will know that it is supposed to be saved again as UTF-8 once you start working on it. But with no BOM, there is no such luck, and corruption may be just around the corner, even when the editor tries to auto-detect the encoding (auto-detection is never easy or 100% reliable, even for basic Latin text with "special" characters)

The same can happen to a plain ASCII file (without a BOM): once you edit it, and you add, say, some accented vowel, the chaos begins. You thought it was Italian, but your favorite text editor might conclude it's Vietnamese! I've even seen Notepad switch to a different default encoding after some Windows updates.

So, UTF-8 yes, but with a BOM. It should be the default in any app and operating system.

  • rmunn 11 hours ago

    The fact that you advocate using a BOM with UTF-8 tells me that you run Windows. Any long-term Unix user has probably seen this error message before (copy and pasted from an issue report I filed just 3 days ago):

        bash: line 1:  #!/bin/bash: No such file or directory
    
    If you've got any experience with Linux, you probably suspect the problem already. If your only experience is with Windows, you might not realize the issue. There's an invisible U+FEFF lurking before the `#!`. So instead of that shell script starting with the `#!` character pair that tells the Linux kernel "The application after the `#!` is the application that should parse and run this file", it actually starts with `<FEFF>#!`, which has no meaning to the kernel. The way this script was invoked meant that Bash did end up running the script, with only one error message (because the line did not start with `#` and therefore it was not interpreted as a Bash comment) that didn't matter to the actual script logic.
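
    For reference, this is all the BOM amounts to at the byte level (Python used just to show the bytes):

        "\ufeff#!/bin/bash\n".encode("utf-8")  # -> b'\xef\xbb\xbf#!/bin/bash\n'
        # the kernel looks for the literal bytes b'#!' at offset 0,
        # so those three extra bytes defeat shebang detection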

    This is one of the more common problems caused by putting a BOM in UTF-8 files, but there are others. The issue is that adding a BOM, as can be seen here, *breaks the promise of UTF-8*: that a UTF-8 file that contains only codepoints below U+007F can be processed as-is, and legacy logic that assumes ASCII will parse it correctly. The Linux kernel is perfectly aware of UTF-8, of course, as is Bash. But the kernel logic that looks for `#!`, and the Bash logic that look for a leading `#` as a comment indicator to ignore the line, do *not* assume a leading U+FEFF can be ignored, nor should they (for many reasons).

    What should happen is that these days, every application should assume UTF-8 if it isn't informed of the format of the file, unless and until something happens to make it believe it's a different format (such as reading a UTF-16 BOM in the first two bytes of the file). If a file fails to parse as UTF-8 but there are clues that make another encoding sensible, reparsing it as something else (like Windows-1252) might be sensible.

    But putting a BOM in UTF-8 causes more problems than it solves, because it *breaks* the fundamental promise of UTF-8: ASCII compatibility with Unicode-unaware logic.

    • mikelabatt 5 hours ago

      I like your answer, and the others too, but I suspect I have an even worse problem than running Windows: I am an Amiga user :D

      The Amiga always used all 8 bits (ISO-8859-1 by default), so detecting UTF-8 without a BOM is not so easy, especially when you start with an empty file, or in some scenario like the other one I mentioned.

      And it's not that Macs and PCs don't have 8-bit legacy or coexistence needs. What you seem to be saying is that compatibility with 7-bit ASCII is sacred, whereas compatibility with 8-bit text encodings is not important.

      Since we now have UTF-8 files with BOMs that need to be handled anyway, would it not be better if all the "Unicode-unaware" apps at least supported the BOM (stripping it, in the simplest case)?

      • rmunn 3 hours ago

        "... would it not be better if all the "Unicode-unaware" apps at least supported the BOM (stripping it, in the simplest case)?"

        What that question means is that the Unicode-unaware apps would have to become Unicode-aware, i.e. be rewritten. And that would entirely defeat the purpose of backwards-compatibility with ASCII, which is the fact that you don't have to rewrite 30-year-old apps.

        With UTF-16, the byte-order mark is necessary so that you can tell whether uppercase A will be encoded 00 41 or 41 00. With UTF-8, uppercase A will always be encoded 41 (hex, or 65 decimal) so the byte-order mark serves no purpose except to signal "This is a UTF-8 file". In an environment where ISO-8859-1 is ubiquitous, such as the Web fifteen years ago, the signal "Hey, this is a UTF-8 file, not ISO-8859-1" was useful, and its drawbacks (BOM messing up certain ASCII-era software which read it as a real character, or three characters, and gave a syntax error) cost less than the benefits. But now that more than 99% of files you'll encounter on the Web are UTF-8, that signal is useful less than 1% of the time, and so the costs of the BOM are now more expensive than the benefits (in fact, by now they are a lot more expensive than the benefits).

        As you can see from the paragraph above, you're not reading me quite right when you say that I "seem to be saying is that compatibility with 7-bit ASCII is sacred, whereas compatibility with 8-bit text encodings is not important". Compatibility with 8-bit text encodings WAS important, precisely because they were ubiquitous. It IS no longer important in a Web context, for two reasons. First, because they are less than 1% of documents and in the contexts where they do appear, there are ways (like HTTP Content-Encoding headers or HTML charset meta tags) to inform parsers of what the encoding is. And second, because UTF-8 is stricter than those other character sets and thus should be parsed first.

        Let me explain that last point, because it's important in a context like Amiga, where (as I understand you to be saying) ISO-8859-1 documents are still prevalent. If you have a document that is actually UTF-8, but you read it as ISO-8859-1, it is 100% guaranteed to parse without the parser throwing any "this encoding is not valid" errors, BUT there will be mistakes. For example, å will show up as Ã¥ instead of the å it should have been, because å (U+00E5) encodes in UTF-8 as 0xC3 0xA5. In ISO-8859-1, 0xC3 is Ã and 0xA5 is ¥. Or ç (U+00E7), which encodes in UTF-8 as 0xC3 0xA7, will show up in ISO-8859-1 as Ã§ because 0xA7 is §.

        (As an aside, I've seen a lot of UTF-8 files incorrectly parsed as Latin-1 / ISO-8859-1 in my career. By now, if I see Ã followed by at least one other accented Latin letter, I immediately reach for my "decode this as Latin-1 and re-encode it as UTF-8" Python script without any further investigation of the file, because that Ã, 0xC3, is such a huge clue. It's already rare in European languages, and the chances of it being followed by ¥ or § or indeed any other accented character in any real legacy document are so vanishingly small as to be nearly non-existent. This comment, where I'm explicitly citing it as an example of misparsing, is actually the only kind of document where I would ever expect to see the sequence Ã§ as being what the author actually intended to write).

        Okay, so we've established that a file that is really UTF-8, but gets incorrectly parsed as ISO-8859-1, will NOT cause the parser to throw out any error messages, but WILL produce incorrect results. But what about the other way around? What about a file that's really ISO-8859-1, but that you incorrectly try to parse as UTF-8? Well, NEARLY all of the time, the ISO-8859-1 accented characters found in that file will NOT form a correct UTF-8 sequence. In 99.99% (and I'm guessing you could end up with two or three more nines in there) of actual ISO-8859-1 files designed for human communication (as opposed to files deliberately designed to be misparsed), you won't end up with a combination of accented Latin characters that just happen to match a valid UTF-8 sequence, and it's basically impossible for ALL the accents in an ISO-8859-1 document to just so happen to be valid UTF-8 sequences. In theory it could happen, but your chances of being struck by a 10-kg meteorite while sitting at your computer are better than of that happening by chance. (Again, I'm excluding documents deliberately designed with malice aforethought, because that's not the main scenario here). Which means that if you parse that unknown file as UTF-8 and it wasn't UTF-8, your parser will throw out an error message.

        So when you encounter an unknown file, that has a 90% chance of being ISO-8859-1 and a 10% chance of being UTF-8, you might think "Then I should try parsing it in ISO-8859-1 first, since that has a 90% chance of being right, and if it looks garbled then I'll reparse it". But "if it looks garbled" needs human judgment. There's a better way. Parse it in UTF-8 first, in strict mode where ANY encoding error makes the entire parse be rejected. Then if the parse is rejected, re-parse it in ISO-8859-1. If the UTF-8 parser parses it without error, then either it was an ISO-8859-1 file with no accents at all (all characters 0x7F or below, so that the UTF-8 encoding and the ISO-8859-1 encoding are identical and therefore the file was correctly parsed), or else it was actually a UTF-8 file and it was correctly parsed. If the UTF-8 parser rejects the file as having invalid byte sequences, then parse it as the 8-bit encoding that is most likely in your context (for you that would be ISO-8859-1, for the guy in Japan who commented it would likely be Shift-JIS that he should try next, and so on).
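
        A minimal sketch of that strategy (in Python; the fallback encoding is whatever is most likely in your context):

            def read_text(raw: bytes, fallback: str = "iso-8859-1") -> str:
                try:
                    return raw.decode("utf-8")   # strict: any invalid sequence raises
                except UnicodeDecodeError:
                    return raw.decode(fallback)  # legacy 8-bit: every byte is valid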

        That logic is going to work nearly 100% of the time, so close to 100% that if you find a file it fails on, you had better odds of winning the lottery. And that logic does not require a byte-order mark; it just requires realizing that UTF-8 is a rather strict encoding with a high chance of failing if it's asked to parse files that are actually from a different legacy 8-bit encoding. And that is, in fact, one of UTF-8's strengths (one guy elsewhere in this discussion thought that was a weakness of UTF-8) precisely because it means it's safe to try UTF-8 decoding first if you have an unknown file where nobody has told you the encoding. (E.g., you don't have any HTTP headers, HTML meta tags, or XML preambles to help you).

        NOW. Having said ALL that, if you are dealing with legacy software that you can't change which is expecting to default to ISO-8859-1 encoding in the absence of anything else, then the UTF-8 BOM is still useful in that specific context. And you, in particular, sound like that's the case for you. So go ahead and use a UTF-8 BOM; it won't hurt in most cases, and it will actually help you. But MOST of the world is not in your situation; for MOST of the world, the UTF-8 BOM causes more problems than it solves. Which is why the default for ALL new software should be to try parsing UTF-8 first if you don't know what the encoding is, and try other encodings only if the UTF-8 parse fails. And when writing a file, it should always be UTF-8 without BOM unless the user explicitly requests something else.

    • 3036e4 9 hours ago

      Also some XML parsers I used choked on UTF-8 BOMs. Not sure if valid XML is allowed to have anything other than clean ASCII in the first few characters before declaring what the encoding is?

  • taffer 8 hours ago

    I respectfully disagree. The BOM is a Windows-specific idiosyncrasy resulting from its early adoption of UTF-16. In the Unix world, a BOM is unexpected and causes problems with many programs, such as GCC, PHP and XML parsers. Don't use it!

    The correct approach is to use and assume UTF-8 everywhere. 99% of websites use UTF-8. There is no reason to break software by adding a BOM.

  • Cloudef 19 hours ago

    BOM is awful as it breaks concatenation. In the modern world everything should just be assumed to be UTF-8 by default.

  • cryptonector 6 hours ago

    You do not need a BOM for UTF-8. Ever. Byte order issues are not a problem for UTF-8 because UTF-8 is manipulated as a string of _bytes_, not as a string of 16-bit or 32-bit code units.

    You _do_ need a BOM for UTF-16 and UTF-32.

max23_ 10 hours ago

Good read, thank you!

> Show the character represented by the remaiing 7 bits on the screen.

I notice there is a typo.

cyberax a day ago

UTF-8 is simply genius. It entirely obviated the need for clunky 2-byte encodings (and all the associated nonsense about byte order marks).

The only problem with UTF-8 is that Windows and Java were developed without knowledge about UTF-8 and ended up with 16-bit characters.

Oh yes, and Python 3 should have known better when it went through the string-bytes split.

  • wrs a day ago

    UTF-16 made lots of sense at the time because Unicode thought "65,536 characters will be enough for anybody" and it retains the 1:1 relationship between string elements and characters that everyone had assumed for decades. I.e., you can treat a string as an array of characters and just index into it with an O(1) operation.

    As Unicode (quickly) evolved, it turned out that not only are there WAY more than 65,000 characters, there's not even a 1:1 relationship between code points and characters, or even a single defined transformation between glyphs and code points, or even a simple relationship between glyphs and what's on the screen. So even UTF-32 isn't enough to let you act like it's 1980 and str[3] is the 4th "character" of a string.

    So now we have very complex string APIs that reflect the actual complexity of how human language works...though lots of people (mostly English-speaking) still act like str[3] is the 4th "character" of a string.

    UTF-8 was designed with the knowledge that there's no point in pretending that string indexing will work. Windows, MacOS, Java, JavaScript, etc. just missed the boat by a few years and went the wrong way.

    • rowls66 a day ago

      I think more effort should have been made to live with 65,536 characters. My understanding is that codepoints beyond 65,536 are only used for languages that are no longer in use, and emojis. I think that adding emojis to Unicode is going to be seen as a big mistake. We already have enough network bandwidth to just send raster graphics for images in most cases. Cluttering the Unicode codespace with emojis is pointless.

      • jasonwatkinspdx a day ago

        You are mistaken. Chinese Hanzi and the languages that derive from or incorporate them require way more than 65,536 code points. In particular a lot of these characters are formal family or place names. UCS-2 failed because it couldn't represent these, and people using these languages justifiably objected to having to change how their family name is written to suit computers, vs computers handling it properly.

        This "two bytes should be enough" mistake was one of the biggest blind spots in Unicode's original design, and is cited as an example of how standards groups can have cultural blind spots.

        • duskwuff 20 hours ago

          UTF-16 also had a bunch of unfortunate ramifications on the overall design of Unicode, e.g. requiring a substantial chunk of BMP to be reserved for surrogate characters and forcing Unicode codepoints to be limited to U+10FFFF.

      • gred a day ago

        > My understanding is that codepoints beyond 65,536 are only used for languages that are no longer in use, and emojis

        This week's Unicode 17 announcement [1] mentions that of the ~160k existing codepoints, over 100k are CJK codepoints, so I don't think this can be true...

        [1] https://blog.unicode.org/2025/09/unicode-170-release-announc...

      • duskwuff a day ago

        Your understanding is incorrect; a substantial number of the ranges allocated outside BMP (i.e. above U+FFFF) are used for CJK ideographs which are uncommon, but still in use, particularly in names and/or historical texts.

      • mort96 a day ago

        The silly thing is, lots of emoji these days aren't even a single code point. So many emoji these days are two other code points combined with a zero width joiner. Surely we could've introduced one code point which says "the next code point represents an emoji from a separate emoji set"?

        • wongarsu 11 hours ago

          With that approach you could no longer look at a single code point and decide if it's e.g. a space. You would always have to look back at the previous code point to see if you are now in the emoji set. That would bring its own set of issues for tools like grep.

          But what if instead of emojis we take the CJK set and make it more compositional. Instead of >100k characters with different glyphs we could have defined a number of brush stroke characters and compositional characters (like "three of the previous character in a triangle formation"). We could still make distinct code points for the most common couple thousand characters, just like ä can be encoded as one code point or two (umlaut dots plus a).

          Alas, in the 90s this would have been seen as too much complexity

      • dudeinjapan a day ago

        CJK unification (https://en.wikipedia.org/wiki/CJK_Unified_Ideographs) i.e. combining "almost same" Chinese/Japanese/Korean characters into the same codepoint, was done for this reason, and we are now living with the consequence that we need to load separate Traditional/Simplified Chinese, Japanese, and Korean fonts to render each language. Total PITA for apps that are multi-lingual.

        • mort96 a day ago

          This feels like it should be solveable with introducing a few more marker characters, like one code point representing "the following text is traditional Chinese", "the following text is Japanese", etc? It would add even more statefulness to Unicode, but I feel like that ship has already sailed with the U+202D LEFT-TO-RIGHT OVERRIDE and U+202E RIGHT-TO-LEFT OVERRIDE characters...

      • daneel_w a day ago

        I entirely agree that we could've cared better for the leading 16 bit space. But protocol-wise adding a second component (images) to the concept of textual strings would've been a terrible choice.

        The grand crime was that we squandered the space we were given by placing emojis outside the UTF-8 specification, where we already had a whopping 1.1 million code points at our disposal.

        • duskwuff 20 hours ago

          > The grand crime was that we squandered the space we were given by placing emojis outside the UTF-8 specification

          I'm not sure what you mean by this. The UTF-8 specification was written long before emoji were included in Unicode, and generally has no bearing on what characters it's used to encode.

  • wongarsu a day ago

    Yeah, Java and Windows NT3.1 had really bad timing. Both managed to include Unicode despite starting development before the Unicode 1.0 release, but both added unicode back when Unicode was 16 bit and the need for something like UTF-8 was less clear

  • KerrAvon 21 hours ago

    NeXTstep was also UTF-16 through OpenStep 4.0, IIRC. Apple was later able to fix this because the string abstraction in the standard library was complete enough no one actually needed to care about the internal representation, but the API still retains some of the UTF-16-specific weirdnesses.

digianarchist 17 hours ago

I read online that codepoints are formatted with 4 hex chars for historical reasons. U+41 (Latin A) is formatted as U+0041.

anthonyiscoding a day ago

UTF-8 contributors are some of our modern day unsung heroes. The design is brilliant but the dedication to encode every single way humans communicate via text into a single standard, and succeed at it, is truly on another level.

Most other standards just do the xkcd thing: "now there's 15 competing standards"

smoyer 18 hours ago

Uvarint also has the property of a file containing only ascii characters still being a valid ascii file.

kevincox 20 hours ago

> Every ASCII encoded file is a valid UTF-8 file.

More importantly, that file has the same meaning. Same with the converse.

sheerun a day ago

I'll mention IPv6 as a bad design that could potentially have been a UTF-8-like success story

  • tialaramex a day ago

    No. UTF-8 is for encoding text, so we don't need to care about it being variable length because text was already variable length.

    The network addresses aren't variable length, so if you decide "Oh IPv6 is variable length" then you're just making it worse with no meaningful benefit.

    The IPv4 address is 32 bits, the IPv6 address is 128 bits. You could go 64 but it's much less clear how to efficiently partition this and not regret whatever choices you do make in the foreseeable future. The extra space meant IPv6 didn't ever have those regrets.

    It suits a certain kind of person to always pay $10M to avoid the one-time $50M upgrade cost. They can do this over a dozen jobs in twenty years, spending $200M to avoid $50M cost and be proud of saving money.

sjapkee 10 hours ago

Until you interact with it as a programmer

jrochkind1 18 hours ago

It really is, in so many ways.

It is amazing how successful it's been.

quotemstr a day ago

Great example of a technology you get from a brilliant guy with a vision and that you'll never get out of a committee.

carlos256 a day ago

No, it's not. It's just a form of Elias-Gamma coding.

  • carlos256 a day ago

    * unary coding, rather.

ofou 19 hours ago

UTF-8 should be a universal tokenizer

transfire 9 hours ago

So brilliant that we’re all still using ASCII!†

† With an occasional UNICODE flourish.

gritzko 13 hours ago

I specialize in protocol design, unfortunately. A while ago I had to code some Unicode conversion routines from scratch and I must say I absolutely admire UTF-8. Unicode per se is a dumpster fire, likely because of objective reasons. Dealing with multiple Unicode encodings is a minefield. I even made an angry write-up back then https://web.archive.org/web/20231001011301/http://replicated...

UTF-8 made it all relatively neat back in the day. There are still ways to throw a wrench into the gears. For example, how do you handle UTF-8 encoded surrogate pairs? But at least one can filter that out as suspicious/malicious behavior.

  • sedatk 11 hours ago

    > For example, how do you handle UTF-8 encoded surrogate pairs?

    Surrogate pairs aren’t applicable to UTF-8. That part of the Unicode code space is simply invalid in UTF-8 and should be treated as such (as a parsing error or as invalid characters, etc).
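
    For example, CPython's strict UTF-8 codec rejects an encoded surrogate outright:

        b"\xed\xa0\x80".decode("utf-8")  # would-be U+D800 -> raises UnicodeDecodeError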

  • cryptonector 5 hours ago

    > Unicode per se is a dumpster fire

    Maybe as to emojis, but otherwise, no, Unicode is not a dumpster fire. Unicode is elegant, and all the things that people complain about in Unicode are actually problems in human scripts.

akoboldfrying 12 hours ago

I take it you could choose to encode a code point using a larger number of bytes than are actually needed? E.g., you could encode "A" using 1, 2, 3 or 4 bytes?

Because if so: I don't really like that. It would mean that "equal sequence of code points" does not imply "equal sequence of encoded bytes" (the converse continues to hold, of course), while offering no advantage that I can see.

Andrex 15 hours ago

What are the perceived benefits of UTF-16 and 32 and why did they come about?

I could ask Gemini but HN seems more knowledgeable.

  • peterfirefly 6 hours ago

    UTF-16 is a hack that was invented when it became clear that UCS-2 wasn't gonna work (65536 codepoints was not enough for everybody).

    Almost the entire world could have ignored it if not for Microsoft making the wrong choice with Windows NT and then stubbornly insisting that their wrong choice was indeed correct for a couple of decades.

    There was a long phase where some parts of Windows understood (and maybe generated) UTF-16 and others only UCS-2.

    • kccqzy 4 hours ago

      Besides Microsoft, plenty of others thought UTF-16 to be a good idea. The Haskell Text type used to be based on UTF-16; it only switched to UTF-8 a few years ago. Java still uses UTF-16, but with an ad hoc optimization called CompactStrings to use ISO-8859-1 where possible.

      • peterfirefly 4 hours ago

        A lot of them did it because they had to have a Windows version and had to interface with Windows APIs and Windows programs that only spoke UTF-16 (or UCS-2 or some unspecified hybrid).

        Java's mistake seems to have been independent and it seems mainly to have been motivated by the mistaken idea that it was necessary to index directly into strings. That would have been deprecated fast if Windows had been UTF-8 friendly and very fast if it had been UTF-16 hostile.

        We can always dream.

ummonk a day ago

> Another one is the ISO/IEC 8859 encodings are single-byte encodings that extend ASCII to include additional characters, but they are limited to 256 characters.

ISO 2022 allowed you to use control codes to switch between ISO 8859 character sets though, allowing for mixed script text streams.

xkcd1963 10 hours ago

What I find inconvenient about emoji characters is that their length is counted differently across programming languages

  • kccqzy 4 hours ago

    That's a problem with programming languages having inconsistent definitions of length. They could be like Swift where the programmer has control over what counts as length one. Or they could decide that the problem shouldn't be solved by the language but by libraries like ICU.

lyu07282 17 hours ago

UTF-8 was a huge improvement for sure, but 20-25 years ago I was working with LATIN-1 (so 8-bit characters), which was a struggle in the years it took for everything to switch to UTF-8. The compatibility with ASCII meant you only really noticed something was wrong when the data had special characters not representable in ASCII but valid in LATIN-1. So perhaps breaking backwards compatibility would've resulted in less data corruption overall.

tiahura a day ago

How many llm tokens are wasted everyday resolving utf issues?

Androth 20 hours ago

meh. it's a brilliant design to put a bandage over a bad design. if a language can't fit into 255 glyphs, it should be reinvented.

  • rmunn 11 hours ago

    Sun Tzu would like a word or two with you.

LorenPechtel a day ago

Now fix fonts! It should be possible to render any valid string in a font.

dmz73 16 hours ago

UTF8 is a horrible design. The only reason it was widely adopted was backwards compatibility with ASCII. There are a large number of invalid byte combinations that have to be discarded. Parsing forward is complex even before taking invalid byte combinations into account, and parsing backwards is even worse. Compare that to UTF16, where parsing forward and backwards is simpler than UTF8, and if there is an invalid surrogate combination, one can assume it is a valid UCS2 char.

  • moefh 15 hours ago

    UTF-16 is an abomination. It's only easy to parse because it's artificially limited to 1 or 2 code units. It's an ugly hack that requires reserving 2048 code points ("surrogates") from the Unicode table just for the encoding itself.

    It's also the reason why Unicode has a limit of about 1.1 million code points: without UTF-16, we could have over 2 billion (which is the UTF-8 limit).