Hi. I'm trying to implement a parser for a mini-language in the way rust-analyzer's syntax crate does. To be more specific, the language is roughly a subset of C, so there are no trailing commas. When I want to generate code for a list like `(Node (',' Node)*)`, I find there is no way to convert a string into the token that represents the comma.
Am I missing any APIs? If not, would it be possible to add two methods that convert strings to nodes or tokens?
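For context, here is roughly what ungrammar hands back for a rule of that shape today. This is only a sketch: the grammar text and the `ParamList`/`Param`/`'ident'` names are made up for illustration.

```rust
use ungrammar::Grammar;

fn main() {
    // Made-up grammar: a comma-separated list with no trailing comma allowed.
    let grammar: Grammar = "ParamList = Param (',' Param)*\nParam = 'ident'"
        .parse()
        .expect("valid grammar");

    for node in grammar.iter() {
        // The ',' only appears as an opaque `Token(..)` index inside the rule;
        // there is no name-based lookup to get that index from the text ",".
        println!("{} = {:?}", grammar[node].name, grammar[node].rule);
    }
}
```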
I did a little bit of simple experimentation and this may work:
```rust
impl Grammar {
    ...

    /// Returns a node with given name
    pub fn get_node(&self, name: &str) -> Option<Node> {
        self.nodes.iter().position(|n| n.name == name).map(Node)
    }

    /// Returns a token with given name
    pub fn get_token(&self, name: &str) -> Option<Token> {
        self.tokens.iter().position(|t| t.name == name).map(Token)
    }
}
```
And an example working on `ungrammar.ungram`:
```rust
let n = grammar.get_node("Rule");
println!("Get n {:?}", n);

let t = grammar.get_token(":");
println!("Get t {:?}", t);
```
```
Get n Some(Node(2))
Get t Some(Token(8))
```
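With these methods, the check for a `Node (',' Node)*` shape in my codegen could look something like the sketch below. Again, the grammar text and the `ParamList`/`Param` names are only for illustration, and `get_node`/`get_token` are the methods proposed above, not part of the published API:

```rust
use ungrammar::{Grammar, Rule};

fn main() {
    let grammar: Grammar = "ParamList = Param (',' Param)*\nParam = 'ident'"
        .parse()
        .expect("valid grammar");

    // Proposed lookups by name.
    let comma = grammar.get_token(",").expect("grammar defines ','");
    let param_list = grammar.get_node("ParamList").expect("grammar defines ParamList");

    // `Node (',' Node)*` lowers to Seq([Node(_), Rep(Seq([Token(','), Node(_)]))]).
    if let Rule::Seq(parts) = &grammar[param_list].rule {
        if let [Rule::Node(_), Rule::Rep(rep)] = parts.as_slice() {
            if let Rule::Seq(inner) = &**rep {
                if let Some(Rule::Token(sep)) = inner.first() {
                    if *sep == comma {
                        // ...emit list-parsing code that rejects a trailing ','.
                        println!("ParamList is a comma-separated list");
                    }
                }
            }
        }
    }
}
```

The linear scan inside `get_node`/`get_token` should be fine for this use case, since grammars are small and the lookups only run once per generated rule.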
I'm not sure whether this approach is appropriate. If it is, I can submit a PR.