KX Community


  • Parted attribute, memory consumption

    Posted by jlucid on March 25, 2024 at 4:29 pm

    Would it be feasible to develop an alternative method for applying the parted attribute to an on-disk table column that keeps memory usage below a given limit? It could be slower to perform, but it would be guaranteed to stay within the memory bound.
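    To sketch the idea: the run-start indices behind p# can be computed a chunk at a time, so peak memory is bounded by the chunk size rather than the column size. A minimal sketch, assuming x is a (mapped) column already in parted order, so that indexing a slice at a time only pulls that slice in; breaks and sz are names I've made up:

    ```q
    / sketch: compute p#-style run-start indices over x in chunks of size sz,
    / so peak memory is O(sz) instead of O(count x)
    breaks:{[x;sz]
      n:count x; o:0; r:(); prev:();
      while[o<n;
        c:x o+til sz&n-o;                 / pull one chunk
        b:where differ c;                 / local run starts (always includes 0)
        if[(o>0)and prev~first c;b:1_b];  / run continues across the chunk boundary
        r,:o+b;
        prev:last c;
        o+:sz];
      r}
    breaks[`a`a`b`b`b`c;2]                / 0 2 5, same as where differ `a`a`b`b`b`c
    ```

    Building the rest of the attribute structure (the uniques and the final count) from those break indices is cheap by comparison, so something like this could in principle trade speed for a hard memory cap.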

    At EOD, the other typical source of memory spikes is on-disk table sorting. You can at least avoid those by writing data to an int-partitioned database intraday and running a merge job to stitch it back together at midnight, appending column by column if need be. This is a feature present in KX products and in the DataIntellect TorQ package. However, applying the parted attribute to the merged table at the very last step still catches you on memory, because you need to read in the whole column (I'm not sure how it works exactly, but the memory usage appears to be proportional to the size of the column file).
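    For context, a minimal sketch of what that column-by-column merge step can look like. All paths here are hypothetical; .Q.dd joins path components, .[f;();,;v] appends to an on-disk list, and sym enumeration is ignored for brevity:

    ```q
    / sketch: column-by-column merge of intraday int partitions into the
    / final date partition; at most one partition's column is in memory
    mergecol:{[dst;src;col]
      d:.Q.dd[dst;col]; v:get .Q.dd[src;col];
      $[()~key d;d set v;.[d;();,;v]];}   / create on first write, else append

    dst:`:hdb/2024.03.25/trade            / hypothetical destination partition
    srcs:`:intraday/0/trade`:intraday/1/trade`:intraday/2/trade
    {[src]mergecol[dst;src]each get .Q.dd[src;`.d]}each srcs
    ```

    You would still need to write the destination .d file, and then apply `p# to the sym column at the end, which is exactly the step whose memory footprint is at issue here.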

  • mike-simo

    Member
    March 27, 2024 at 1:10 pm

    Doesn’t seem achievable within q itself, as it currently stands.

    Would likely require some highly customized code under the hood.

    Under the hood, the p attribute is essentially (in k, where & is where, ~=': is differ and # is count):

    (`#x;`u#x i;(i:&~=':x),#x)

    It's quite an expensive structure to build, since you need to identify (data; uniques; breaks).
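    To unpack that, the same pieces written out in q (a small illustration, with x a toy vector already in parted order):

    ```q
    x:`a`a`b`b`b`c
    i:where differ x   / run-start indices (k: &~=':x)  -> 0 2 5
    x i                / one value per run              -> `a`b`c
    i,count x          / run starts plus total count    -> 0 2 5 6
    ```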

    • This reply was modified 4 weeks, 1 day ago by  Laura.
