Everything old is new again: memory optimization

(nibblestew.blogspot.com)

52 points | by ibobev 3 days ago

8 comments

  • dgb23 5 minutes ago
    Not a C++ programmer and I think the solution is neat.

    But it's not necessarily an apples-to-apples comparison. It's not unfair to Python because of runtime overhead; it's unfair because it's a different algorithm with fundamentally different memory characteristics.

    A fairer comparison would be to stream the file in C++ as well and maintain internal state for the counts. For most people that would be the first, naive approach when programming something like this anyway, I think. And it would show what the actual overhead of the Python version is.

  • canpan 39 minutes ago
    String views were a solid addition to C++, and they're still underutilized. It doesn't matter which language you're using: making thousands of tiny memory allocations during parsing is costly. https://en.cppreference.com/w/cpp/string/basic_string_view.h...
    • VorpalWay 3 minutes ago
      The issue with retrofitting features onto an existing, well-established language is that they will likely be underutilized, especially in other existing parts of the standard library, since changing those would break backwards compatibility. std::optional is another example of this: it is barely used in the C++ standard library itself, but would be much more useful if used across the board.

      Contrast this with Rust, which had the benefit of being developed several decades later. There, Option and str (string views) were in the standard library from the beginning, and every library and application uses them as fundamental vocabulary types, combined with good support for chaining and working with them (e.g. Option has map() to transform the content if it exists and pass None along otherwise).

      Retrofitting is hard, and I have no doubt there will be new ideas that can't really be retrofitted well into Rust in another decade or two as well. Hopefully at that point something new will come along that learned from the mistakes of the past.

    • pjc50 6 minutes ago
      C# gained similar benefits with Span<>/ReadOnlySpan<>. Essential for any kind of fast parser.
  • tzot 1 hour ago
    Well, we can use memoryview for the dict generation, avoiding the creation of string objects until it's time for output:

        import re, operator

        def count_words(filename):
            with open(filename, 'rb') as fp:
                data = memoryview(fp.read())
            word_counts = {}
            for match in re.finditer(br'\S+', data):
                word = data[match.start():match.end()]
                try:
                    word_counts[word] += 1
                except KeyError:
                    word_counts[word] = 1
            word_counts = sorted(word_counts.items(), key=operator.itemgetter(1), reverse=True)
            for word, count in word_counts:
                print(word.tobytes().decode(), count)
    
    We could also use `mmap.mmap`.
  • fix4fun 53 minutes ago
    Digression: nowadays, with RAM being expensive, good old zram is gaining popularity ;) Check trends.google.com: since 2025-09, searches for it have doubled ;)
  • griffindor 1 hour ago
    Nice!

    > Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

    I wish I knew the input size when attempting to estimate, but I suppose part of the challenge is also estimating the runtime's startup memory usage too.

    > Compute the result into a hash table whose keys are string views, not strings

    If the file is mmap'd, and the string view points into that, presumably decent performance depends on the page cache having those strings in RAM. Is that included in the memory usage figures?

    Nonetheless, it's a nice optimization that the kernel chooses which hash table keys to keep hot.

    The other perspective on this is that we sought out languages like Python/Ruby because development cost was high relative to hardware. Hardware is now relatively more expensive, and development costs have come down, so both trends point the same way.

    The takeaway: expect more of a push towards efficiency!

    • pjc50 58 minutes ago
      >> Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.

      At this point I'd make two observations:

      - how big is the text file? I bet it's a megabyte, isn't it? Because the "naive" way to do it is to read the whole thing into memory.

      - all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte. It gets more interesting when the file doesn't fit into RAM at all.

      The state of the art here is: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times... , wherein our hero finds the terrible combination of putting the whole file in a single string and then running strlen() on it for every character.

      • dgb23 21 minutes ago
        > all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte.

        I have to disagree. Bad performance is often the result of death by a thousand cuts. This function might be one among countless similarly inefficient library calls, programs and so on.

  • est 1 hour ago
    I think the py version can be shortened to:

        import sys
        from collections import Counter

        stats = Counter(x for l in open(sys.argv[1]) for x in l.split())

    • voidUpdate 1 hour ago
      Would that decrease memory usage though?
  • gostsamo 26 minutes ago
    > how much memory a native code version of the same functionality would use.

    native to what? how is c++ more native than python?

  • biorach 1 hour ago
    "copyright infringement factories"
    • maipen 20 minutes ago
      Tells you right away where this is coming from.