KX Community

  • Reloading HDB causes "Cannot allocate memory" error

    Posted by clivelo on November 7, 2023 at 12:00 am

    Hi, we are running an HDB instance that loads from a partitioned database on disk. To reload the HDB's indexing after updating the database, we run the following:

    \l .

    This was working in the past, but recently it has been failing with the error “OS reports: Cannot allocate memory”. In theory, the reload only re-indexes the database, so it should not consume much memory. Moreover, the HDB loads successfully if we simply restart the q process; it only fails when we run the reload command above.

     

    Any idea what is causing this?
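
    For reference, a minimal sketch of how we load and reload, assuming q is started from the HDB root directory (the real paths in our setup differ):

        / initial load of the partitioned database in the current directory
        \l .
        / equivalent reload via the system keyword once new data has been written down
        system "l ."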

  • 7 Replies
  • gyorokpeter-kx

    Member
    November 7, 2023 at 12:00 am

    Has your database grown in size such that it now goes over some threshold?

    Alternatively, some files in your HDB might be corrupted. A file with a corrupted size field could cause an attempt to allocate a very large amount of memory.
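
    As a rough check (a sketch only, with hypothetical paths, and ignoring nested-column files whose names end in #), you could compare the row counts and on-disk sizes of the column files in the partition named in the error; a column whose count disagrees with the others, or whose get itself blows up, is a good candidate:

        d:`:/data/db/2023.11.06/trade          / hypothetical partition directory
        f:(key d) except `.d                   / column files in that partition
        n:{count get ` sv d,x} each f          / row count of each mapped column
        s:{hcount ` sv d,x} each f             / on-disk size in bytes of each file
        flip `col`rows`bytes!(f;n;s)           / eyeball for anything wildly out of line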

  • clivelo

    Member
    November 7, 2023 at 12:00 am

    Thanks for the reply.

    Unless the reload command \l . loads the database differently from simply restarting the q process, I don’t think it is due to size. The HDB is only using around 350 MB of RAM.

    Regarding corruption, is there a way to check whether a file is corrupted? When the error pops up, it also names the file in the database that it failed to allocate memory for. But if I query that slice of the database, I can view the data without issues.

  • gyorokpeter-kx

    Member
    November 7, 2023 at 12:00 am

    What is your kdb+ version? Older versions have a limit on the number of nested columns, although in that case the error would usually be “too many open files”.

    Does the load always complain about the same file, or a different one every time? If it’s the same, can you try making a backup and then overwriting it with a column of nulls of the correct length, to see whether the load progresses past it next time?
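
    Something along these lines, as a sketch only (the paths are hypothetical, and 0n assumes a float column, so substitute the null of the column's actual type):

        dir:`:/data/db/2023.11.06/trade        / partition named in the error
        n:count get ` sv dir,`sym              / length taken from a known-good column
        / keep a backup copy of the suspect file before touching it
        system "cp /data/db/2023.11.06/trade/price /data/db/2023.11.06/trade/price.bak"
        (` sv dir,`price) set n#0n             / overwrite the suspect column with nulls of the same length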

  • clivelo

    Member
    November 7, 2023 at 12:00 am

    I am running version 3.6.

    The error is not always about the same file. Strangely, if I save more data into the database, the file it complains about seems to shift back; with the amount of data unchanged, it complains about the same file.

    I have tried overwriting the problem data with nulls of the exact same size and it’s still throwing the same error on the same file.

    Is there an alternative to running \l . that can reload the HDB’s indexing?

  • gyorokpeter-kx

    Member
    November 7, 2023 at 12:00 am

    Have you tried stracing the q process to see what is happening just before the error is printed?

    \l calls the .Q.l function to load the database, so you could try stepping through that if you are comfortable reading k code.
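
    For example (the pid is hypothetical, and the strace line is run from a shell, not from q):

        / shell: attach to the running q process and watch memory-related syscalls
        /   strace -f -p 12345 -e trace=mmap,munmap,brk
        / q: print the k definition of .Q.l so it can be stepped through by hand
        .Q.l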

  • clivelo

    Member
    November 8, 2023 at 12:00 am

    I ran .Q.w[] on the HDB and noticed that mmap is 98 GB. AFAIK, shouldn’t mmap generally be 0, since .Q.l loads the database with deferred mapping? Wondering if our setup is causing the issue.

    I don’t know k code at all, but will try to walk through it.
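
    For anyone following along, this is how I checked (the 98 GB figure comes from our instance; as far as I can tell these are the keys .Q.w[] returns):

        .Q.w[]           / memory stats: used, heap, peak, wmax, mmap, mphy, syms, symw
        .Q.w[][`mmap]    / just the mapped bytes; with deferred mapping I expected this to stay near 0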

  • clivelo

    Member
    November 21, 2023 at 12:00 am

    Posting my solution in case someone else runs into this.

    I created a tiny subset of the data and tested it in a bare-bones environment. Although the error didn’t occur, the mmap value was still non-zero (every other HDB I looked at had 0 mmap). So I suspected something in our HDB setup was consuming mapped memory, and that as the data grew it would eventually throw that error.

    It turns out we are using a segmented database, with the segments listed in a par.txt file. Importantly, par.txt must sit in its own standalone directory, but we had put it in the root of the segmented database. As a result, when we loaded the HDB, it loaded the entire segmented database directly.

    https://code.kx.com/q4m3/14_Introduction_to_Kdb%2B/#144-segmented-tables

    The page above gives a good example of how par.txt should live in a standalone folder and how a segmented database should be structured.
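
    For anyone hitting the same thing, a sketch of the corrected layout (the paths are made up; the key point is that the directory holding par.txt contains only par.txt and the sym file, never the data itself):

        / root directory: start q here and load with \l .
        /   /data/hdbroot/par.txt       one segment path per line, e.g.
        /       /disk1/seg1
        /       /disk2/seg2
        /   /data/hdbroot/sym           enumeration domain for symbol columns
        / segments: the date partitions live inside these, not next to par.txt
        /   /disk1/seg1/2023.11.06/trade/...
        /   /disk2/seg2/2023.11.07/trade/...
        \cd /data/hdbroot
        \l .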
