[Bug]: Errors in local search #451
Comments
Sorry, I haven't solved it.
It's something in the community extraction scripts or the LLM parser scripts; I just can't nail down what.
I am using llama.cpp to serve the embedding API; it is more stable. You can try that.
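For anyone trying that route, it can help to sanity-check the local OpenAI-compatible embeddings endpoint before wiring it into GraphRAG. A minimal sketch, where the port, API key, and model name are placeholder assumptions rather than llama.cpp defaults:

# Hypothetical sanity check against a local OpenAI-compatible /embeddings
# endpoint such as a llama.cpp server; port, key, and model are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="llama-cpp")
resp = client.embeddings.create(model="nomic-embed-text-v1.5", input="hello world")
print(len(resp.data[0].embedding))  # the model's output dimension, e.g. 768

If this prints a dimension without a 400/422, the endpoint at least accepts plain-string input, which is what the errors below are about.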
Same here. I use the py script app.py. Maybe it's about the int vs. str type:
Error embedding chunk {'OpenAIEmbedding': "Error code: 422 - {'detail': [{'type': 'string_type', 'loc': ['body', 'input', 0], 'msg': 'Input should be a valid string', 'input': 3923, 'url': 'https://errors.pydantic.dev/2.7/v/string_type'}, {'type': 'string_type', 'loc': ['body', 'input', 1], 'msg': 'Input should be a valid string', 'input': 527, 'url': 'https://errors.pydantic.dev/2.7/v/string_type'}, {'type': 'string_type', 'loc': ['body', 'input', 2], 'msg': 'Input should be a valid string', 'input': 279, 'url': 'https://errors.pydantic.dev/2.7/v/string_type'}
The issue is that the chunks passed to the embedding call are token IDs rather than strings. The solution is to add one line to the package's graphrag/query/llm/oai/embedding.py (the decode line marked in the snippet below):
def embed(self, text: str, **kwargs: Any) -> list[float]:
    """
    Embed text using OpenAI Embedding's sync function.

    For text longer than max_tokens, chunk texts into max_tokens, embed each chunk, then combine using weighted average.
    Please refer to: https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
    """
    token_chunks = chunk_text(
        text=text, token_encoder=self.token_encoder, max_tokens=self.max_tokens
    )
    chunk_embeddings = []
    chunk_lens = []
    for chunk in token_chunks:
        # decode chunk from token ids to text (added line after row 83)
        chunk = self.token_encoder.decode(chunk)
        try:
            embedding, chunk_len = self._embed_with_retry(chunk, **kwargs)
            chunk_embeddings.append(embedding)
            chunk_lens.append(chunk_len)
        # TODO: catch a more specific exception
        except Exception as e:  # noqa BLE001
            self._reporter.error(
                message="Error embedding chunk",
                details={self.__class__.__name__: str(e)},
            )
            continue
    chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
    chunk_embeddings = chunk_embeddings / np.linalg.norm(chunk_embeddings)
    return chunk_embeddings.tolist()
...
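To see why that decode line matters, here is a minimal sketch assuming tiktoken and its cl100k_base encoding (GraphRAG's default encoding_model): the chunker yields lists of token IDs, and a strings-only /embeddings endpoint rejects them, which is exactly where the integer inputs in the 422 error above come from.

# Minimal sketch, assuming tiktoken is installed: chunking over a token
# encoder produces token IDs, which must be decoded back to text before
# being sent to an endpoint that only accepts strings.
import tiktoken

token_encoder = tiktoken.get_encoding("cl100k_base")
token_ids = token_encoder.encode("What are the top themes in this story?")
print(token_ids[:3])                    # e.g. [3923, 527, 279] -- the ints rejected in the 422 above
print(token_encoder.decode(token_ids))  # back to a plain string the endpoint accepts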
I can use local search this way, thank you so much.
Could you show how you modified the _embed_with_retry function in embedding.py? I got the embedding to work but later got an error that says "Error: Query vector size 768 does not match index column size 3072". 768 is the length of my embedding vector for the provided query; I'm not sure where 3072 comes from. I use nomic-embed-text from Ollama.
I had a similar-looking problem when I tried to use different models for the index (BTW my …
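For what it's worth, that error usually means the LanceDB index was built by one embedding model (3072-dimensional vectors) and is being queried with another (768-dimensional nomic-embed-text); re-indexing with the same model used for local search should resolve it. A hypothetical way to check, where the store path and table name are assumptions for illustration:

# Hypothetical sketch: inspect the vector dimension stored in the LanceDB
# index; the connect path and table name below are assumptions.
import lancedb

db = lancedb.connect("output/lancedb")                # assumed store location
tbl = db.open_table("entity_description_embeddings")  # assumed table name
print(tbl.schema)  # the vector column's fixed-size list length is the index dimension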
I can use local search this way too, thank you so much.
I fixed this as well. You can find my repo to do local indexing and search here: https://github.com/karthik-codex/autogen_graphRAG
Apologies for going off-topic, but seeing as you've successfully attempted global search, did you have to make any hotfixes for that? Or was it all smooth sailing? I ran into this JSON issue (#575), which has this fix (#609) and this fix (https://github.com/microsoft/graphrag/pull/473/files). Perhaps there's no answer, but I'm a bit curious as to why you might not have run into the issue, unless that simply isn't discussed in the blog.
No, I did not get any of these issues. I also used Mistral instead of Llama, which a YouTuber suggested for its longer context window.
Interesting, Mistral did not fix my problem, but I'll try again with your repo.
This is working, but it is giving completely out-of-context answers.
Consolidating alternate model issues here: #657 |
Cool, it solved the problem!
Hey, are you getting relevant answers? |
I fixed it and created PR #568. Hope it will be merged soon.
Describe the bug
I successfully ran the global search, but I encountered an error when running the local search.
Error embedding chunk {'OpenAIEmbedding': 'Error code: 400 - {'error': "'input' field must be a string or an array of strings"}'}
Traceback (most recent call last):
  File "C:\Users\cpdft\.conda\envs\myconda\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\cpdft\.conda\envs\myconda\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\graphrag\query\__main__.py", line 75, in <module>
    run_local_search(
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\graphrag\query\cli.py", line 154, in run_local_search
    result = search_engine.search(query=query)
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\graphrag\query\structured_search\local_search\search.py", line 118, in search
    context_text, context_records = self.context_builder.build_context(
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\graphrag\query\structured_search\local_search\mixed_context.py", line 139, in build_context
    selected_entities = map_query_to_entities(
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\graphrag\query\context_builder\entity_extraction.py", line 55, in map_query_to_entities
    search_results = text_embedding_vectorstore.similarity_search_by_text(
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\graphrag\vector_stores\lancedb.py", line 118, in similarity_search_by_text
    query_embedding = text_embedder(text)
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\graphrag\query\context_builder\entity_extraction.py", line 57, in <lambda>
    text_embedder=lambda t: text_embedder.embed(t),
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\graphrag\query\llm\oai\embedding.py", line 96, in embed
    chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
  File "C:\Users\cpdft\.conda\envs\myconda\lib\site-packages\numpy\lib\function_base.py", line 550, in average
    raise ZeroDivisionError(
ZeroDivisionError: Weights sum to zero, can't be normalized
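Note that the ZeroDivisionError at the bottom is a downstream symptom: every chunk hit the 400 error above, so nothing was ever appended to chunk_embeddings or chunk_lens, and the weighted average then runs over empty lists. A minimal sketch of just that failure mode:

# Minimal reproduction of the final error: when every chunk fails to embed,
# both lists stay empty and np.average raises on the zero-sum weights.
import numpy as np

chunk_embeddings: list[list[float]] = []  # nothing appended; each chunk errored out
chunk_lens: list[int] = []
np.average(chunk_embeddings, axis=0, weights=chunk_lens)
# ZeroDivisionError: Weights sum to zero, can't be normalized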
Steps to reproduce
No response
Expected Behavior
No response
GraphRAG Config Used
llm:
  api_key: ollama
  type: openai_chat # or azure_openai_chat
  model: gemma2
  model_supports_json: true # recommended if this is available for your model.
  api_base: http://localhost:11434/v1

embeddings:
  llm:
    api_key: lm-studio
    type: openai_embedding # or azure_openai_embedding
    model: nomic-ai\nomic-embed-text-v1.5-GGUF\nomic-embed-text-v1.5.Q4_K_M.gguf
    api_base: http://localhost:1234/v1
Logs and screenshots
No response
Additional Information