Test data leakage & minor questions #14

Open
Hanzhang-lang opened this issue Sep 13, 2024 · 0 comments

@Hanzhang-lang

Thanks for your work. Your research has been very valuable and of great help to me. However, I have a few questions about your paper that I hope you can clarify.

  1. You mention using Large Language Models (LLMs) during reasoning. I am curious about the potential impact of test data leakage through the LLM's pre-training data: some common-sense questions appear to be answerable by the LLM directly, without consulting the knowledge graph at all. Do you account for this in your evaluation? (A sketch of the kind of LLM-only check I have in mind follows this list.)
  2. Since the planning and reasoning stages can be decoupled, have you explored a zero-shot setup in which the planner is only prompted, with no fine-tuning? If such an approach has been tested, I would be interested in the results. (See the second sketch at the end of this comment.)
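
To make question 1 concrete, here is a minimal sketch of the LLM-only baseline I mean: answer each test question with the bare LLM, no retrieved triples or relation paths, and count how often a gold answer already shows up. The JSONL field names, the loose Hits@1-style check, and the OpenAI backbone are my own assumptions, not your code.

```python
# Minimal sketch (my own code, not from this repo): answer each test question
# with the bare LLM and no KG context, then count how often a gold answer
# already appears in the response.
# Assumptions: a JSONL test split with "question" and "answers" fields, and the
# OpenAI Python SDK as the backbone; both are placeholders for whatever you use.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def llm_only_answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works for this probe
        messages=[{"role": "user", "content": f"Answer concisely: {question}"}],
        temperature=0,
    )
    return resp.choices[0].message.content.lower()


def hit(pred: str, gold_answers: list[str]) -> bool:
    # Loose containment check, in the same spirit as Hits@1 in KGQA evaluation.
    return any(ans.lower() in pred for ans in gold_answers)


hits, total = 0, 0
with open("test.jsonl") as f:  # hypothetical path to the test split
    for line in f:
        ex = json.loads(line)
        hits += hit(llm_only_answer(ex["question"]), ex["answers"])
        total += 1

print(f"Answerable without the KG: {hits}/{total} ({hits / total:.1%})")
```

If that fraction turns out to be high, it becomes hard to tell how much of the reported gain comes from the graph rather than from knowledge memorized during pre-training.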

Your insights on these matters would be greatly appreciated.
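
For question 2, this is roughly the zero-shot variant I have in mind: keep the reasoning stage unchanged, but replace the fine-tuned planner with a plain prompt to an off-the-shelf LLM. The prompt wording and the line-based parsing below are my assumptions for illustration only, not your implementation.

```python
# Sketch of the zero-shot planner I have in mind: prompt an off-the-shelf LLM
# to propose relation paths (the plan) with no fine-tuning, then hand those
# paths to the existing reasoning stage unchanged.
from openai import OpenAI

client = OpenAI()

PLAN_PROMPT = """You are planning over a knowledge graph.
Given the question, list up to 3 relation paths (relations separated by ' -> ')
that could lead from the topic entity to the answer. One path per line.

Question: {question}
Relation paths:"""


def zero_shot_plan(question: str) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: same backbone as the sketch above
        messages=[{"role": "user", "content": PLAN_PROMPT.format(question=question)}],
        temperature=0,
    )
    return [p.strip() for p in resp.choices[0].message.content.splitlines() if p.strip()]


print(zero_shot_plan("What language is spoken in the country where the Eiffel Tower is located?"))
```

I would expect this to trade some path validity for a much lower training cost, which is why I am curious whether you measured such a setting.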
