3 comments

  • kamranjon 1 day ago
    This is interesting! I once trained a T5 model by removing newlines from Wikipedia text, and it worked surprisingly well; at the time, context length was the biggest issue.

    Another, harder-to-solve issue was conversational dialogue data, which wasn’t well represented in the training data.

    I’ve always wanted to come back to this problem, because I think it’s very interesting, and we’ll end up with a lot of unstructured text from STT models like Whisper, which do a great job of transcribing and translating but generally don’t format anything.
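
    The trick described above (strip formatting to build the input, keep the original as the target) can be sketched roughly like this. This is just a hedged illustration of the data-prep step, not the commenter's actual pipeline; the function name is made up, and any seq2seq framework could consume the resulting pairs:

    ```python
    # Sketch of building (input, target) pairs for a formatting-restoration
    # seq2seq model: collapse newlines/whitespace for the input, keep the
    # original passage as the target. Hypothetical helper, not real code
    # from the comment.

    def make_pair(passage: str) -> tuple[str, str]:
        """Return (flattened_input, original_target) for one passage."""
        flattened = " ".join(passage.split())  # drops newlines and runs of spaces
        return flattened, passage

    source, target = make_pair("First paragraph.\n\nSecond paragraph.")
    # source has no newlines; target keeps the paragraph break the model must learn to restore
    ```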

  • CjHuber 1 day ago
    Took me a minute to realize this is not about Chonkie. I’d be interested in how this compares to Chonkie’s semantic chunking approach.
  • TZubiri 1 day ago
    That example looks terribly useless. Maybe there’s an actually useful application you had in mind? I don’t know, say:

    Chonk("Hey I forgot my password, this is Tom from X Company") = ("Hey", "I forgot my password", "this is Tom from X Company")

    Even then it doesn't quite look helpful.