private static bool FindMatchingOpenChar(SnapshotPoint startPoint, char open, char close, out SnapshotSpan pairSpan, IList<IToken> tokens, int offset)
{
    pairSpan = new SnapshotSpan(startPoint, startPoint);
    try
    {
        int startpos = startPoint.Position;
        if (tokens != null)
        {
            int tokenpos = findtokeninList(tokens, startpos - offset);
            if (tokenpos == -1)
            {
                return false;
            }
            IToken token = tokens[tokenpos];
            // open/close braces are operators
            if (!XSharpLexer.IsOperator(token.Type))
            {
                return false;
            }
            int closeCount = 0;
            // walk backwards through the token list, counting nested close chars
            for (int i = tokenpos - 1; i >= 0; i--)
            {
                token = tokens[i];
                if (XSharpLexer.IsOperator(token.Type))
                {
                    string text = token.Text;
                    if (text[0] == close)
                    {
                        closeCount++;
                    }
                    if (text[0] == open)
                    {
                        if (closeCount > 0)
                        {
                            closeCount--;
                        }
                        else
                        {
                            pairSpan = new SnapshotSpan(startPoint.Snapshot, token.StartIndex + offset, 1);
                            return true;
                        }
                    }
                }
            }
        }
        return false;
    }
    catch (System.Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(ex.Message);
    }
    return false;
}
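The backward scan above can be sketched in a few lines of Python. This is an illustration, not the real implementation: it uses plain `(kind, text)` tuples as hypothetical stand-ins for the ANTLR `IToken` objects, and returns an index instead of a `SnapshotSpan`.

```python
def find_matching_open(tokens, pos, open_ch, close_ch):
    """Walk backwards from tokens[pos]; return the index of the matching
    open char, honouring nesting, or -1 when there is none."""
    close_count = 0
    for i in range(pos - 1, -1, -1):
        kind, text = tokens[i]
        if kind != "op":
            continue
        if text[0] == close_ch:
            close_count += 1      # a nested pair closes here
        elif text[0] == open_ch:
            if close_count > 0:
                close_count -= 1  # this open belongs to a nested close
            else:
                return i          # unbalanced open: this is the match
    return -1
```

The counter is what makes nested pairs like `(a (b))` resolve to the outermost opener rather than the nearest one.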
private ClassificationSpan ClassifyToken(IToken token, IList<ClassificationSpan> regionTags, ITextSnapshot snapshot)
{
    var tokenType = token.Type;
    ClassificationSpan result = null;
    switch (token.Channel)
    {
        case XSharpLexer.PRAGMACHANNEL:         // #pragma
        case XSharpLexer.PREPROCESSORCHANNEL:   // #define, #ifdef etc.
            result = Token2ClassificationSpan(token, snapshot, xsharpPPType);
            switch (token.Type)
            {
                case XSharpLexer.PP_REGION:
                case XSharpLexer.PP_IFDEF:
                case XSharpLexer.PP_IFNDEF:
                    regionTags.Add(Token2ClassificationSpan(token, snapshot, xsharpRegionStart));
                    break;
                case XSharpLexer.PP_ENDREGION:
                case XSharpLexer.PP_ENDIF:
                    regionTags.Add(Token2ClassificationSpan(token, snapshot, xsharpRegionStop));
                    break;
                default:
                    break;
            }
            break;
        case XSharpLexer.DEFOUTCHANNEL:         // code in an inactive #ifdef
            result = Token2ClassificationSpan(token, snapshot, xsharpInactiveType);
            break;
        case XSharpLexer.XMLDOCCHANNEL:
        case XSharpLexer.Hidden:
            if (XSharpLexer.IsComment(token.Type))
            {
                result = Token2ClassificationSpan(token, snapshot, xsharpCommentType);
                if (token.Type == XSharpLexer.ML_COMMENT && token.Text.IndexOf("\r") >= 0)
                {
                    // multi-line comments get a collapsible region
                    regionTags.Add(Token2ClassificationSpan(token, snapshot, xsharpRegionStart));
                    regionTags.Add(Token2ClassificationSpan(token, snapshot, xsharpRegionStop));
                }
            }
            break;
        default: // Normal channel
            IClassificationType type = null;
            if (XSharpLexer.IsIdentifier(tokenType))
            {
                type = xsharpIdentifierType;
            }
            else if (XSharpLexer.IsConstant(tokenType))
            {
                switch (tokenType)
                {
                    case XSharpLexer.STRING_CONST:
                    case XSharpLexer.CHAR_CONST:
                    case XSharpLexer.ESCAPED_STRING_CONST:
                    case XSharpLexer.INTERPOLATED_STRING_CONST:
                        type = xsharpStringType;
                        break;
                    case XSharpLexer.FALSE_CONST:
                    case XSharpLexer.TRUE_CONST:
                        type = xsharpKeywordType;
                        break;
                    case XSharpLexer.VO_AND:
                    case XSharpLexer.VO_NOT:
                    case XSharpLexer.VO_OR:
                    case XSharpLexer.VO_XOR:
                    case XSharpLexer.SYMBOL_CONST:
                    case XSharpLexer.NIL:
                        type = xsharpLiteralType;
                        break;
                    default:
                        if ((tokenType >= XSharpLexer.FIRST_NULL) && (tokenType <= XSharpLexer.LAST_NULL))
                        {
                            type = xsharpKeywordType;
                        }
                        else
                        {
                            type = xsharpNumberType;
                        }
                        break;
                }
            }
            else if (XSharpLexer.IsKeyword(tokenType))
            {
                type = xsharpKeywordType;
            }
            else if (XSharpLexer.IsOperator(tokenType))
            {
                switch (tokenType)
                {
                    case XSharpLexer.LPAREN:
                    case XSharpLexer.LCURLY:
                    case XSharpLexer.LBRKT:
                        type = xsharpBraceOpenType;
                        break;
                    case XSharpLexer.RPAREN:
                    case XSharpLexer.RCURLY:
                    case XSharpLexer.RBRKT:
                        type = xsharpBraceCloseType;
                        break;
                    default:
                        type = xsharpOperatorType;
                        break;
                }
            }
            if (type != null)
            {
                result = Token2ClassificationSpan(token, snapshot, type);
            }
            break;
    }
    return result;
}
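The key design point in ClassifyToken is that the lexer channel is inspected before the token type: preprocessor, inactive-code and comment channels short-circuit the per-type classification. A minimal sketch of that channel-first dispatch, with illustrative string names standing in for the XSharpLexer constants:

```python
def classify(channel, token_type):
    """Return a classification name, or None when the token gets no color."""
    # channel decides the broad category first
    if channel in ("PRAGMA", "PREPROCESSOR"):
        return "preprocessor"
    if channel == "DEFOUT":        # code inside an inactive #ifdef
        return "inactive"
    if channel in ("XMLDOC", "HIDDEN"):
        return "comment" if token_type.endswith("COMMENT") else None
    # default channel: fall back to the token kind
    if token_type == "ID":
        return "identifier"
    if token_type in ("STRING_CONST", "CHAR_CONST"):
        return "string"
    if token_type in ("LPAREN", "LCURLY", "LBRKT"):
        return "brace.open"
    if token_type in ("RPAREN", "RCURLY", "RBRKT"):
        return "brace.close"
    return None
```

Checking the channel first is what lets the same token type (say an identifier inside a `#ifdef FOO` block that is switched off) be painted as inactive code rather than as an identifier.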
private String getFirstKeywordInLine(ITextSnapshotLine line, int start, int length)
{
    String keyword = "";
    var tokens = getTokensInLine(line.Snapshot, start, length);
    bool inAttribute = false;
    if (tokens.Count > 0)
    {
        int index = 0;
        while (index < tokens.Count)
        {
            var token = tokens[index];
            // skip whitespace tokens
            if (token.Type == XSharpLexer.WS)
            {
                index++;
                continue;
            }
            keyword = "";
            if (XSharpLexer.IsKeyword(token.Type) || (token.Type >= XSharpLexer.PP_FIRST && token.Type <= XSharpLexer.PP_LAST))
            {
                keyword = token.Text.ToUpper();
                // it could be a modifier...
                if (keywordIsModifier(token.Type))
                {
                    index++;
                    continue;
                }
                // keyword found
                break;
            }
            else if (XSharpLexer.IsComment(token.Type))
            {
                keyword = token.Text;
                if (keyword.Length >= 2)
                {
                    keyword = keyword.Substring(0, 2);
                }
                break;
            }
            else if (XSharpLexer.IsOperator(token.Type))
            {
                keyword = token.Text;
                if (token.Type == XSharpLexer.LBRKT)
                {
                    inAttribute = true;
                    index++;
                    continue;
                }
                else if (token.Type == XSharpLexer.RBRKT)
                {
                    inAttribute = false;
                    index++;
                    continue;
                }
            }
            else if (inAttribute)
            {
                // skip all content inside the attribute
                index++;
                continue;
            }
            break;
        }
    }
    return keyword;
}
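The scan above can be sketched compactly. This Python sketch is illustrative only: `(kind, text)` tuples replace the lexer tokens, and the modifier set is a hypothetical stand-in for `keywordIsModifier`.

```python
MODIFIERS = {"STATIC", "PUBLIC", "PRIVATE", "PROTECTED", "INTERNAL"}

def first_keyword(tokens):
    """Return the first 'interesting' keyword on a line, skipping whitespace,
    modifiers, and everything inside a [...] attribute."""
    in_attribute = False
    keyword = ""
    for kind, text in tokens:
        if kind == "ws":
            continue
        keyword = ""
        if kind == "kw":
            keyword = text.upper()
            if keyword in MODIFIERS:
                continue          # modifiers precede the real keyword
            break                 # keyword found
        elif kind == "comment":
            keyword = text[:2]    # keep just the introducer, e.g. "//"
            break
        elif kind == "op":
            keyword = text
            if text == "[":
                in_attribute = True
                continue
            if text == "]":
                in_attribute = False
                continue
        elif in_attribute:
            continue              # skip attribute content
        break
    return keyword
```

So `STATIC METHOD Foo` yields `"METHOD"`, and an `[Obsolete]` attribute before a declaration is skipped entirely.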
/// <summary>
/// Parse the current Snapshot and build the Tag List
/// </summary>
private void Colorize()
{
    var snapshot = this.Buffer.CurrentSnapshot;
    Snapshot = snapshot;
    ITokenStream TokenStream = null;
    // parse for positional keywords that change the colors
    // and get a reference to the token stream
    string path = String.Empty;
    if (txtdocfactory != null)
    {
        ITextDocument doc = null;
        if (txtdocfactory.TryGetTextDocument(this.Buffer, out doc))
        {
            path = doc.FilePath;
        }
    }
    // Parse the source and get the (lexer) TokenStream to locate comments, keywords and other tokens.
    // The parser will identify (positional) keywords that are used as identifiers.
    xsTagger.Parse(snapshot, out TokenStream, path);
    if (TokenStream != null)
    {
        tags.Clear();
        for (var iToken = 0; iToken < TokenStream.Size; iToken++)
        {
            var token = TokenStream.Get(iToken);
            var tokenType = token.Type;
            TextSpan tokenSpan = new TextSpan(token.StartIndex, token.StopIndex - token.StartIndex + 1);
            if (XSharpLexer.IsKeyword(tokenType))
            {
                tags.Add(tokenSpan.ToTagSpan(snapshot, xsharpKeywordType));
            }
            else if (XSharpLexer.IsConstant(tokenType))
            {
                tags.Add(tokenSpan.ToTagSpan(snapshot, xsharpConstantType));
            }
            else if (XSharpLexer.IsOperator(tokenType))
            {
                switch (tokenType)
                {
                    case LanguageService.CodeAnalysis.XSharp.SyntaxParser.XSharpLexer.LPAREN:
                    case LanguageService.CodeAnalysis.XSharp.SyntaxParser.XSharpLexer.LCURLY:
                    case LanguageService.CodeAnalysis.XSharp.SyntaxParser.XSharpLexer.LBRKT:
                        tags.Add(tokenSpan.ToTagSpan(snapshot, xsharpBraceOpenType));
                        break;
                    case LanguageService.CodeAnalysis.XSharp.SyntaxParser.XSharpLexer.RPAREN:
                    case LanguageService.CodeAnalysis.XSharp.SyntaxParser.XSharpLexer.RCURLY:
                    case LanguageService.CodeAnalysis.XSharp.SyntaxParser.XSharpLexer.RBRKT:
                        tags.Add(tokenSpan.ToTagSpan(snapshot, xsharpBraceCloseType));
                        break;
                    default:
                        tags.Add(tokenSpan.ToTagSpan(snapshot, xsharpOperatorType));
                        break;
                }
            }
            else if (XSharpLexer.IsIdentifier(tokenType))
            {
                tags.Add(tokenSpan.ToTagSpan(snapshot, xsharpIdentifierType));
            }
            else if (XSharpLexer.IsComment(tokenType))
            {
                tags.Add(tokenSpan.ToTagSpan(snapshot, xsharpCommentType));
            }
        }
        foreach (var tag in xsTagger.Tags)
        {
            tags.Add(tag);
        }
        // notify listeners that the tags for the whole snapshot have changed
        if (TagsChanged != null)
        {
            TagsChanged(this, new SnapshotSpanEventArgs(new SnapshotSpan(Buffer.CurrentSnapshot, 0, this.Buffer.CurrentSnapshot.Length)));
        }
    }
}
internal static IList<XSharpToken> GetTokensUnderCursor(XSharpSearchLocation location, out CompletionState state)
{
    var tokens = GetTokenList(location, out state, true, true)
        .Where((t) => t.Channel == XSharpLexer.DefaultTokenChannel).ToList();
    // Find the "current" token
    if (tokens.Count > 0)
    {
        var tokenUnderCursor = tokens.Count - 1;
        for (int i = tokens.Count - 1; i >= 0; i--)
        {
            var token = tokens[i];
            if (token.StartIndex <= location.Position && token.StopIndex >= location.Position)
            {
                tokenUnderCursor = i;
                break;
            }
        }
        var selectedToken = tokens[tokenUnderCursor];
        var nextToken = tokenUnderCursor < tokens.Count - 1 ? tokens[tokenUnderCursor + 1] : null;
        bool done = false;
        switch (selectedToken.Type)
        {
            case XSharpLexer.NAMEOF:
            case XSharpLexer.TYPEOF:
            case XSharpLexer.SIZEOF:
            case XSharpLexer.SELF:
            case XSharpLexer.SUPER:
                if (nextToken != null && nextToken.Type == XSharpLexer.LPAREN)
                {
                    return tokens;
                }
                break;
            default:
                if (XSharpLexer.IsKeyword(selectedToken.Type))
                {
                    tokens.Clear();
                    tokens.Add(selectedToken);
                    return tokens;
                }
                break;
        }
        // When we are not on a keyword we need to walk back in the token list
        // to see if we can evaluate the expression. This could be:
        // System.String.Compare()   // static method call or method call
        // SomeVar:MethodCall()      // method call
        // Left(...)                 // function call
        // SomeId                    // local, global etc.
        // SomeType.Id               // static property or normal property
        // SomeVar:Id                // instance field or property
        // If the token list ends with an RCURLY, RBRKT or RPAREN
        // then strip everything until the matching LCURLY, LBRKT or LPAREN is found.
        var list = new XSharpTokenList(tokens);
        tokens = new List<XSharpToken>();
        while (!list.Eoi())
        {
            var token = list.ConsumeAndGet();
            switch (token.Type)
            {
                case XSharpLexer.LCURLY:
                    tokens.Add(token);
                    if (list.Contains(XSharpLexer.RCURLY))
                    {
                        // this may return false when the RCURLY belongs to another LCURLY
                        if (list.ConsumeUntilEndToken(XSharpLexer.RCURLY, out var endToken))
                        {
                            tokens.Add(endToken);
                        }
                    }
                    break;
                case XSharpLexer.LPAREN:
                    tokens.Add(token);
                    if (list.Contains(XSharpLexer.RPAREN))
                    {
                        // this may return false when the RPAREN belongs to another LPAREN
                        if (list.ConsumeUntilEndToken(XSharpLexer.RPAREN, out var endToken))
                        {
                            tokens.Add(endToken);
                        }
                    }
                    break;
                case XSharpLexer.LBRKT:
                    tokens.Add(token);
                    if (list.Contains(XSharpLexer.RBRKT))
                    {
                        // this may return false when the RBRKT belongs to another LBRKT
                        if (list.ConsumeUntilEndToken(XSharpLexer.RBRKT, out var endToken))
                        {
                            tokens.Add(endToken);
                        }
                    }
                    break;
                case XSharpLexer.DOT:
                case XSharpLexer.COLON:
                case XSharpLexer.SELF:
                case XSharpLexer.SUPER:
                    tokens.Add(token);
                    break;
                default:
                    tokens.Add(token);
                    if (XSharpLexer.IsOperator(token.Type))
                    {
                        done = true;
                    }
                    if (token.Type == XSharpLexer.VAR)
                    {
                        done = true;
                    }
                    else if (XSharpLexer.IsKeyword(token.Type) && !XSharpLexer.IsPositionalKeyword(token.Type))
                    {
                        done = true;
                    }
                    break;
            }
        }
        // now tokens has the list of tokens starting at the cursor.
        // we only keep: ID, DOT, COLON, LPAREN, LCURLY, LBRKT;
        // when we detect another token we truncate the list there
        if (tokens.Count > 0)
        {
            var lastType = tokens[0].Type;
            for (int i = tokenUnderCursor + 1; i < tokens.Count && !done; i++)
            {
                var token = tokens[i];
                switch (token.Type)
                {
                    case XSharpLexer.ID:
                    case XSharpLexer.DOT:
                    case XSharpLexer.COLON:
                    case XSharpLexer.LPAREN:
                    case XSharpLexer.LCURLY:
                    case XSharpLexer.LBRKT:
                        lastType = token.Type;
                        break;
                    case XSharpLexer.LT:
                        // possibly a generic type argument list such as List<Foo>
                        int gtPos = findTokenInList(tokens, i + 1, XSharpLexer.GT);
                        if (lastType == XSharpLexer.ID && gtPos > 0)
                        {
                            gtPos += 1;
                            tokens.RemoveRange(gtPos, tokens.Count - gtPos);
                            done = true;
                            break;
                        }
                        goto default;
                    default:
                        tokens.RemoveRange(i, tokens.Count - i);
                        done = true;
                        break;
                }
            }
        }
    }
    // check for an extra LPAREN or LCURLY at the end
    int count = tokens.Count;
    if (count > 2 && tokens[count - 2].Type == XSharpLexer.LPAREN)
    {
        switch (tokens[count - 1].Type)
        {
            case XSharpLexer.LPAREN:
            case XSharpLexer.LCURLY:
                tokens.RemoveAt(count - 1);
                break;
        }
    }
    return tokens;
}
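The pair-stripping step above (keep each opener and its matching closer, drop the interior) can be sketched independently of the token machinery. This Python version is a simplification: it works on plain strings and tracks nesting with a depth counter rather than `XSharpTokenList.Contains`/`ConsumeUntilEndToken`, and like the original it keeps an unmatched opener without a closer.

```python
PAIRS = {"(": ")", "{": "}", "[": "]"}

def strip_pairs(tokens):
    """Replace every balanced delimiter run by just its opener and closer,
    so argument lists do not get in the way of expression evaluation."""
    out = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok in PAIRS:
            out.append(tok)
            depth, j = 1, i + 1
            while j < len(tokens) and depth > 0:
                if tokens[j] == tok:
                    depth += 1
                elif tokens[j] == PAIRS[tok]:
                    depth -= 1
                j += 1
            if depth == 0:
                out.append(PAIRS[tok])  # matching closer found
            i = j
        else:
            out.append(tok)
            i += 1
    return out
```

For example, `SomeVar:MethodCall(1, x):Name` reduces to `SomeVar:MethodCall():Name`, which is exactly the shape the member-resolution walk expects.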
internal static List<XSharpToken> GetTokenList(XSharpSearchLocation location, out CompletionState state,
    bool includeKeywords = false, bool underCursor = false)
{
    location = AdjustStartLineNumber(location);
    var line = getLineFromBuffer(location);
    state = CompletionState.General;
    if (line.Count == 0)
    {
        return line;
    }
    // if the token appears after a comma or paren then strip the tokens;
    // look forward and find the first token that is on or after the trigger point
    var result = new List<XSharpToken>();
    var last = XSharpLexer.Eof;
    bool allowdot = location.Project?.ParseOptions?.AllowDotForInstanceMembers ?? false;
    var cursorPos = location.Position;
    var done = false;
    var list = new XSharpTokenList(line);
    while (!done && !list.Eoi())
    {
        var token = list.ConsumeAndGet();
        int openToken = 0;
        XSharpToken closeToken = null;
        bool isHit = token.StartIndex <= cursorPos && token.StopIndex >= cursorPos && underCursor;
        bool isNotLast = token.StopIndex < location.Position - 1;
        if (token.StartIndex > cursorPos)
        {
            // after the cursor we only include the open tokens,
            // so we can see if the id under the cursor is a method, constructor etc.
            switch (token.Type)
            {
                case XSharpLexer.LPAREN:
                case XSharpLexer.LCURLY:
                case XSharpLexer.LBRKT:
                    break;
                case XSharpLexer.LT:
                    // if this is a generic type then add the complete list of type arguments
                    bool first = true;
                    bool endoflist = false;
                    while (!endoflist)
                    {
                        endoflist = true;
                        if (list.La1 == XSharpLexer.ID || XSharpLexer.IsType(list.La1))
                        {
                            if (list.La2 == XSharpLexer.GT || list.La2 == XSharpLexer.COMMA)
                            {
                                if (first)
                                {
                                    result.Add(token);
                                    first = false;
                                }
                                result.Add(list.ConsumeAndGet()); // La1
                                result.Add(list.ConsumeAndGet()); // La2
                                endoflist = false;
                            }
                        }
                    }
                    continue;
                default:
                    done = true;
                    break;
            }
            if (done)
            {
                continue;
            }
        }
        switch (token.Type)
        {
            // after these tokens we "restart" the list
            case XSharpLexer.EOS:
                if (token.Position < cursorPos && token != line.Last())
                {
                    // an EOS inside a line before the cursor means there are
                    // 2 or more statements on the same line: clear the first statement
                    result.Clear();
                    state = CompletionState.General;
                }
                else
                {
                    // exit the loop and ignore the rest of the statements
                    done = true;
                }
                continue;
            case XSharpLexer.WS:
            case XSharpLexer.Eof:
                continue;
            case XSharpLexer.TO:
            case XSharpLexer.UPTO:
            case XSharpLexer.DOWNTO:
            case XSharpLexer.IN:
                if (!isHit)
                {
                    result.Clear();
                    if (isNotLast) // there has to be a space after the token
                    {
                        state = CompletionState.General;
                    }
                    else
                    {
                        state = CompletionState.None;
                    }
                }
                else
                {
                    result.Add(token);
                }
                break;
            case XSharpLexer.LCURLY:
                state = CompletionState.Constructors;
                result.Add(token);
                break;
            case XSharpLexer.LPAREN:
                state = CompletionState.StaticMembers | CompletionState.InstanceMembers;
                result.Add(token);
                break;
            case XSharpLexer.LBRKT:
                state = CompletionState.Brackets;
                result.Add(token);
                break;
            case XSharpLexer.ID:
            case XSharpLexer.NAMEOF:
            case XSharpLexer.TYPEOF:
            case XSharpLexer.SIZEOF:
                result.Add(token);
                break;
            case XSharpLexer.RCURLY:
            case XSharpLexer.RPAREN:
            case XSharpLexer.RBRKT:
                bool add = true;
                if (result.Count > 0 && token == list.LastOrDefault)
                {
                    var lasttoken = result.Last();
                    if (lasttoken.Type == XSharpLexer.COLON || lasttoken.Type == XSharpLexer.DOT)
                    {
                        // closing char after colon or dot
                        add = false;
                        done = true;
                    }
                }
                if (add)
                {
                    result.Add(token);
                    // delete everything between parens, curly braces and brackets
                    // when the closing token is before the cursor position
                    if (token.Position < location.Position)
                    {
                        closeToken = token;
                        if (token.Type == XSharpLexer.RCURLY)
                        {
                            openToken = XSharpLexer.LCURLY;
                        }
                        else if (token.Type == XSharpLexer.RPAREN)
                        {
                            openToken = XSharpLexer.LPAREN;
                        }
                        else if (token.Type == XSharpLexer.RBRKT)
                        {
                            openToken = XSharpLexer.LBRKT;
                        }
                    }
                }
                break;
            case XSharpLexer.STATIC:
                // these tokens all come before a namespace or a (namespace dot) type
                if (isNotLast) // there has to be a space after the token
                {
                    state = CompletionState.General;
                }
                else
                {
                    state = CompletionState.None;
                }
                break;
            case XSharpLexer.USING:
                if (isNotLast) // there has to be a space after the token
                {
                    if (list.Expect(XSharpLexer.STATIC))
                    {
                        state = CompletionState.Namespaces | CompletionState.Types;
                        result.Clear();
                    }
                    else if (list.La1 == XSharpLexer.ID)
                    {
                        state = CompletionState.Namespaces;
                        result.Clear();
                    }
                }
                break;
            case XSharpLexer.MEMBER:
                if (isNotLast) // there has to be a space after the token
                {
                    state = CompletionState.StaticMembers;
                }
                else
                {
                    state = CompletionState.None;
                }
                break;
            case XSharpLexer.AS:
            case XSharpLexer.IS:
            case XSharpLexer.REF:
            case XSharpLexer.INHERIT:
                if (!isHit)
                {
                    result.Clear();
                }
                else
                {
                    result.Add(token);
                }
                if (isNotLast) // there has to be a space after the token
                {
                    state = CompletionState.Namespaces | CompletionState.Types;
                }
                else
                {
                    state = CompletionState.None;
                }
                break;
            case XSharpLexer.IMPLEMENTS:
                result.Clear();
                if (isNotLast)
                {
                    state = CompletionState.Namespaces | CompletionState.Interfaces;
                }
                else
                {
                    state = CompletionState.None;
                }
                break;
            case XSharpLexer.COLON:
                state = CompletionState.InstanceMembers;
                result.Add(token);
                break;
            case XSharpLexer.DOT:
                if (!state.HasFlag(CompletionState.Namespaces))
                {
                    state = CompletionState.Namespaces | CompletionState.Types | CompletionState.StaticMembers;
                    if (allowdot)
                    {
                        state |= CompletionState.InstanceMembers;
                    }
                }
                result.Add(token);
                break;
            case XSharpLexer.QMARK:
                // when at the start of the line do not add;
                // otherwise it might be a nullable type or conditional access expression
                if (result.Count != 0)
                {
                    result.Add(token);
                }
                break;
            case XSharpLexer.QQMARK:
                // when at the start of the line do not add;
                // otherwise it might be a binary expression
                if (result.Count != 0)
                {
                    result.Add(token);
                }
                break;
            case XSharpLexer.BACKSLASH:
            case XSharpLexer.BACKBACKSLASH:
                // these should only be seen at the start of a line;
                // clear the list to be sure
                result.Clear();
                break;
            case XSharpLexer.NAMESPACE:
                state = CompletionState.Namespaces;
                break;
            case XSharpLexer.COMMA:
            case XSharpLexer.ASSIGN_OP:
            case XSharpLexer.COLONCOLON:
            case XSharpLexer.SELF:
            case XSharpLexer.SUPER:
                state = CompletionState.General;
                result.Add(token);
                break;
            default:
                state = CompletionState.General;
                if (XSharpLexer.IsOperator(token.Type))
                {
                    result.Add(token);
                }
                else if (XSharpLexer.IsType(token.Type))
                {
                    result.Add(token);
                }
                else if (XSharpLexer.IsConstant(token.Type))
                {
                    result.Add(token);
                }
                else if (XSharpLexer.IsKeyword(token.Type) && includeKeywords)
                {
                    // for code completion we want to include keywords
                    token.Text = XSettings.FormatKeyword(token.Text);
                    result.Add(token);
                }
                break;
        }
        last = token.Type;
        // remove everything between parens, curly braces or brackets
        // when the closing token is before the cursor
        if (openToken != 0 && closeToken != null)
        {
            var iLast = result.Count - 1;
            int count = 0;
            while (iLast >= 0 && result[iLast] != closeToken)
            {
                iLast--;
            }
            int closeType = closeToken.Type;
            while (iLast >= 0)
            {
                var type = result[iLast].Type;
                if (type == closeType)
                {
                    count += 1;
                }
                else if (type == openToken)
                {
                    count -= 1;
                    if (count == 0)
                    {
                        if (iLast < result.Count - 1)
                        {
                            result.RemoveRange(iLast + 1, result.Count - iLast - 2);
                        }
                        break;
                    }
                }
                iLast -= 1;
            }
        }
    }
    // when the list ends with a comma, drop the trailing comma
    if (result.Count > 0)
    {
        var end = result.Last();
        if (end.Type == XSharpLexer.COMMA)
        {
            result.RemoveAt(result.Count - 1);
        }
    }
    return result;
}
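Much of GetTokenList is a state machine: each significant token decides which completion categories make sense after it, and the categories are combinable flags (a `.` can legitimately be followed by a namespace, a type, or a static member). A minimal sketch of that flag combination, with names mirroring the CompletionState values used above (the Python enum itself is an illustration, not the real type):

```python
from enum import Flag, auto

class CompletionState(Flag):
    NONE = 0
    GENERAL = auto()
    NAMESPACES = auto()
    TYPES = auto()
    STATIC_MEMBERS = auto()
    INSTANCE_MEMBERS = auto()
    CONSTRUCTORS = auto()

def state_after(token_type, allow_dot=False):
    """What the completion engine expects after a given token."""
    if token_type == "COLON":
        return CompletionState.INSTANCE_MEMBERS
    if token_type == "DOT":
        s = (CompletionState.NAMESPACES | CompletionState.TYPES
             | CompletionState.STATIC_MEMBERS)
        if allow_dot:  # mirrors the AllowDotForInstanceMembers option
            s |= CompletionState.INSTANCE_MEMBERS
        return s
    if token_type == "LCURLY":
        return CompletionState.CONSTRUCTORS
    return CompletionState.GENERAL
```

Using flags rather than a single enum value is what lets the completion provider offer several candidate categories at once after an ambiguous token like `.`.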