Parted attribute, memory consumption
Would it be feasible to develop an alternative method for applying the parted attribute to an on-disk table column that constrains memory usage to a given limit? It would be slower to perform, but guaranteed to stay within the memory bound.
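To make the idea concrete, here is a rough sketch (not production code) of the memory-constrained half of the problem: validating that a column is grouped, reading it in fixed-size blocks so heap usage stays roughly block size rather than column size. The path, block size, and the `chkparted` name are all hypothetical; this only checks partedness, it does not write the attribute into the column file, which is where the internals I'm unsure about come in.

```q
/ sketch: check whether an on-disk column is parted (each value
/ contiguous), reading at most bs items into heap at a time.
/ path and bs are hypothetical; get on a splayed column path maps
/ the file, and indexing pulls in only the pages touched.
chkparted:{[path;bs]
  c:get path;                          / memory-map the column file
  n:count c; seen:(); prev:(::); ok:1b; i:0;
  while[ok and i<n;
    b:c@i+til bs&n-i;                  / materialise one block only
    v:b where differ b;                / first value of each run in block
    if[i>0; if[prev~first v; v:1_v]];  / run continuing across the boundary
    ok:not any v in seen;              / a value restarting breaks `p#
    seen,:v;                           / record runs encountered so far
    prev:last b;
    i+:bs];
  ok}

/ e.g. chkparted[`:/hdb/2024.01.01/trade/sym; 1000000]
```

The `seen` dictionary-of-sorts grows with the number of distinct values, not the row count, so for a typical sym column the overhead is small.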
The other typical source of memory spikes is end-of-day on-disk table sorting, but you can at least avoid those by writing data to an int-partitioned database intraday and running a merge job to stitch it back together at midnight, appending column by column if need be. This feature is present in Kx products and in the DataIntellect TorQ package, but applying the parted attribute to the merged table, at the very last step, still catches you on memory because you need to read in the whole column (I'm not sure how it works exactly, but memory usage seems proportional to the size of the column file).
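For reference, that last step is the standard amend-on-disk form, which is where the spike shows up (the path below is hypothetical):

```q
/ apply the parted attribute to the merged on-disk column.
/ this reads the full column into heap, validates groupedness,
/ and writes it back with the attribute set, hence memory usage
/ roughly proportional to the column file size.
@[`:/hdb/2024.01.01/trade/; `sym; `p#]

/ the heap impact can be observed with .Q.w[] before and after
```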