Thanks for your work. Your research is immensely valuable and has been of significant assistance to me. However, I have a few questions regarding your article that I hope you could clarify.
In the reasoning process, you mention the use of Large Language Models (LLMs). I am curious about the potential impact of test data leakage into the LLM training data: I observed that some common-sense questions can be answered directly by the language model, without any need for the knowledge graph. Did you account for this issue in your work?
I understand that the reasoning and planning processes can be decoupled. Have you explored a zero-shot approach that requires no fine-tuning? I would be interested to know whether such an approach has been tested and, if so, what the outcomes were.
Your insights on these matters would be greatly appreciated.