KX Community


  • Faster json converter

    Posted by jlucid on October 4, 2024 at 7:47 pm

    Hi all,

    Anyone aware of a library I can use for converting K objects to JSON and vice versa?

    Looking for the equivalent of .j.j and .j.k but faster.


    I’ve tried out the qrapidjson library, linked below, which provides a function for converting K to JSON, and it’s about 3-4 times faster on my machine. The GitHub page reports it being 48x faster, but to be fair that benchmark was from 8 years ago, against an older version of kdb+. I have been testing with v4.1.


    I’d also like a faster converter for JSON to K, if anyone knows where I could find one. Thanks.


    https://github.com/lmartinking/qrapidjson
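
    For reference, a rough comparison against the built-ins can be set up like this (the sample data is arbitrary, and the tojson export name is an assumption, so check the library’s README for the actual symbol):

        / sample table of arbitrary data; timings are per 100 runs, in ms
        t:([] sym:10000?`a`b`c; px:10000?100f; qty:10000?1000)
        s:.j.j t

        / built-in K -> JSON and JSON -> K
        \t:100 .j.j t
        \t:100 .j.k s

        / qrapidjson loaded as a shared object; the tojson symbol name is an
        / assumption - verify it against the repo before relying on this
        tojson:`qrapidjson 2:(`tojson;1)
        \t:100 tojson t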


  • jlucid

    Member
    October 8, 2024 at 10:34 am

    I started by creating JSON-specific message converters using the same library, and that worked very well. But then I just asked ChatGPT o1-preview to write the equivalent “fromjson” general converter for me, and it worked first time. I’m shocked. I specified that numbers without a decimal place should be converted to floats, just as .j.k does. Attached are some initial results in a screenshot: same output as .j.k for the test JSON message, and faster. All that’s left to do now is submit it to GitHub and claim it as my own work :). After all, I was the one who said “do this for me now please”.
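
    For context, the .j.k behaviour I asked it to match is that every JSON number parses to a float, decimal point or not:

        / .j.k parses every JSON number as a float (type -9h),
        / whether or not it has a decimal point
        type each .j.k "{\"a\":1,\"b\":2.5}"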

  • jlucid

    Member
    October 21, 2024 at 8:14 pm

    Here is the code generated by LLMs: https://github.com/jlucid/kjson

    The original starting point was the qrapidjson library, which provided K to JSON conversion. Most of that code remains unchanged. The main part the model developed is the JSON to K converter.
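
    If you want to try it, the shared object is loaded from q in the usual way with 2: (the exact exported symbol names may differ, so check the repo’s README):

        / load the two entry points from the compiled shared object
        / (the tojson/fromjson symbol names are assumed - check the repo)
        tojson:`kjson 2:(`tojson;1)
        fromjson:`kjson 2:(`fromjson;1)

        / K -> JSON and JSON -> K
        tojson ([] a:1 2 3; b:`x`y`z)
        fromjson "{\"a\":1,\"b\":[1,2,3]}"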


    The first attempt at the JSON to K conversion function was quite buggy. It worked for the simple cases but failed on more complex ones. To help resolve these bugs, I created a simple unit test script in q, which the model was able to read, modify, and interpret results from. This was the only code I provided. Having this unittests.q script was critical for the model to recognize when changes broke existing functionality. About 90% of the time, new changes caused previously passing tests to fail (so much for step-by-step thinking).
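
    A minimal sketch of the sort of check this involves, not the actual script (and the fromjson export name is assumed), is to feed a set of JSON strings to both parsers and require the results to match:

        / not the actual unittests.q, just the shape of the checks:
        / every case must parse to exactly what .j.k produces
        fromjson:`kjson 2:(`fromjson;1)
        cases:("{\"a\":1}";"[1,2,3.5]";"{\"a\":{\"b\":[1,null,\"x\"]}}";"\"text\"")
        check:{[j](.j.k j)~fromjson j}
        results:check each cases
        $[all results;-1 "all tests passed";-1 "failing cases: ",.Q.s1 where not results]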


    It took about 100 iterations of 1) modifying C++ code, 2) compiling, 3) pasting compilation errors, 4) running the unit tests, and 5) showing unit test failures and output differences before the model finally developed a base version that passed all tests. I also provided a few examples from the C API documentation, as the model was rusty on that.

    Automating many of these steps (2, 3, 4, and 5) using tools like Devin or OpenHands, which can run unit or performance tests themselves, would have saved a lot of time.
