internal void NextToken()
{
    // whitespace: SPC HTAB VTAB CR LF
    // char:       '-!$%&()*,./:;?@[]_{}~+= ; ????
    // digit:      0 1 2 3 4 5 6 7 8 9
    // word:       add all ...
    // string:     ' char | whitespace | digit | alpha '
    this.SkipWhiteSpaces();

    var token = new Token();
    Token = token;
    token.Line = Line;
    token.Column = Column;

    switch (CurrentCharType)
    {
        case CharType.EOF:
            token.TokenType = TokenType.EOF;
            token.Value = new String(CurrentChar, 1);
            return;

        case CharType.WSP:
            token.TokenType = TokenType.WSP;
            token.Value = new String(CurrentChar, 1);
            NextChar();
            return;

        case CharType.CtrlChar:
            token.TokenType = TokenType.CtrlChar;
            token.Value = new String(CurrentChar, 1);
            NextChar();
            return;

        case CharType.Char:
            var startIndex = SrcIndex;
            if (CurrentChar == Chars.APOS && ReadString())
            {
                // String literal: drop the closing quote and unescape doubled quotes.
                token.TokenType = TokenType.String;
                token.Value = Src.Substring(startIndex, SrcIndex - startIndex - 1).Replace("''", "'");
                Column += token.Value.Length + 1;
            }
            else
            {
                token.TokenType = TokenType.Char;
                token.Value = new String(CurrentChar, 1);
            }
            NextChar();
            return;

        case CharType.Digit:
            ScanDigits(token);
            return;

        case CharType.Sharp:
        case CharType.Alpha:
            ScanWord(token);
            return;

        default:
            throw new ApplicationException("Unknown CharType: " + CurrentCharType.ToString());
    }
}
internal void ScanWord(Token token)
{
    // SrcIndex already points past the first character of the word.
    var startIndex = SrcIndex - 1;
    var length = 0;
    while (CurrentCharType == CharType.Alpha || CurrentCharType == CharType.Sharp)
    {
        NextChar();
        length++;
    }
    token.TokenType = TokenType.Word;
    token.Value = Src.Substring(startIndex, length);
}
internal void ScanDigits(Token token)
{
    // SrcIndex already points past the first digit.
    var startIndex = SrcIndex - 1;
    var length = 0;
    token.TokenType = TokenType.Digits;
    while (CurrentCharType == CharType.Digit)
    {
        NextChar();
        length++;
    }
    token.Value = Src.Substring(startIndex, length);
}
public override void Process(TagHelperContext context, TagHelperOutput output)
{
    if (context == null)
    {
        throw new ArgumentNullException(nameof(context));
    }
    if (output == null)
    {
        throw new ArgumentNullException(nameof(output));
    }

    output.CopyHtmlAttribute(SrcAttributeName, context);
    ProcessUrlAttribute(SrcAttributeName, output);

    // Strip the file extension so alternative source formats can be substituted.
    var bareUrl = Src.Substring(0, Src.LastIndexOf('.'));
    if (AppendVersion)
    {
        EnsureFileVersionProvider();
    }

    foreach (var format in SourceFormats)
    {
        var formatUrl = $"{bareUrl}.{format}";
        var finalUrl = AppendVersion
            ? FileVersionProvider.AddFileVersionToPath(ViewContext.HttpContext.Request.PathBase, formatUrl)
            : formatUrl;
        output.Content.AppendHtml($@"<source type=""image/{format}"" srcset=""{finalUrl}"" />");
    }

    var url = AppendVersion
        ? FileVersionProvider.AddFileVersionToPath(ViewContext.HttpContext.Request.PathBase, Src)
        : Src;

    // The original condition was inverted: loading="lazy" belongs on the
    // non-eager branch, not on the eager one.
    output.Content.AppendHtml(Eager
        ? $@"<img src=""{url}"" alt=""{Alt}"" width=""{Width}"" height=""{Height}"">"
        : $@"<img src=""{url}"" alt=""{Alt}"" width=""{Width}"" height=""{Height}"" loading=""lazy"">");

    output.Attributes.Clear();
}
public bool advance(Block preblock, Block curblock)
{
    tok = TokenType.TXT;
    if (Src.Length == 0 && curblock.isLineHeadCmt == 1)
    {
        curblock.elem = preblock.elem;
        isNextLine = true;
    }

    int c = reader.read();
    if (c == -1)
    {
        tok = TokenType.EOS;
        return false;
    }

    switch (c)
    {
        case ' ':
        case 0x3000: // full-width (ideographic) space
        case '\t':
        default:
            if (curblock.isLineHeadCmt == 0) // 0: the line does not start inside a block comment
            {
                if (Char.IsDigit((char)c))
                {
                    reader.unread();
                    lexDigit();
                }
                else if (Util.isIdentifierPart((char)c))
                {
                    reader.unread();
                    lexKeyWord();
                }
                else if (ruleFirstKeys.Contains((char)c))
                {
                    char fc = (char)c;
                    foreach (var item in ruleDic)
                    {
                        if (item.Key[0] == fc)
                        {
                            var len = item.Value.start.Length;
                            if (Offset - 1 + len <= Src.Length)
                            {
                                var text = Src.Substring(Offset - 1, len);
                                if (text == item.Key)
                                {
                                    reader.unread();
                                    lexSymbol(curblock);
                                    break;
                                }
                            }
                        }
                    }
                }
                else
                {
                    tok = TokenType.TXT;
                }
            }
            else // 1: the line starts inside a block comment
            {
                if (Offset - 1 == 0)
                {
                    while (c != -1)
                    {
                        if (ruleEndKeys.Contains((char)c))
                        {
                            StringBuilder buf = new StringBuilder();
                            while (c != -1)
                            {
                                buf.Append((char)c);
                                if (preblock.elem.end == buf.ToString())
                                {
                                    if (multiruleDic.ContainsKey(buf.ToString()))
                                    {
                                        var endElem = multiruleDic[buf.ToString()];
                                        endElem.startIndex = 0;
                                        endElem.len = Offset - endElem.startIndex;
                                        tok = endElem.token;
                                        resultRule = endElem;
                                        curblock.elem = endElem;
                                        isNextLine = false;
                                        goto Finish;
                                    }
                                }
                                c = reader.read();
                            }
                        }
                        c = reader.read();
                    }
                Finish:
                    if (c == -1)
                    {
                        // No end marker found: the whole line remains inside the comment.
                        var enelem = ruleDic[preblock.elem.start];
                        reader.setoffset(Src.Length);
                        enelem.startIndex = 0;
                        enelem.len = Src.Length;
                        tok = enelem.token;
                        resultRule = enelem;
                        curblock.elem = preblock.elem;
                        isNextLine = true;
                    }
                }
                else
                {
                    if (Char.IsDigit((char)c))
                    {
                        reader.unread();
                        lexDigit();
                    }
                    else if (Util.isIdentifierPart((char)c))
                    {
                        reader.unread();
                        lexKeyWord();
                    }
                    else if (ruleFirstKeys.Contains((char)c))
                    {
                        char fc = (char)c;
                        foreach (var item in ruleDic)
                        {
                            if (item.Key[0] == fc)
                            {
                                var len = item.Value.start.Length;
                                if (Offset - 1 + len <= Src.Length)
                                {
                                    var text = Src.Substring(Offset - 1, len);
                                    if (text == item.Key)
                                    {
                                        reader.unread();
                                        lexSymbol(curblock);
                                        break;
                                    }
                                }
                            }
                        }
                    }
                    else
                    {
                        tok = TokenType.TXT;
                    }
                }
            }
            break;
    }
    return true;
}
public bool advance(Block preblock, Block curblock)
{
    tok = TokenType.TXT;
    if (Src.Length == 0 && curblock.isLineHeadCmt == 1)
    {
        curblock.mRule = preblock.mRule;
        isNextLine = true;
    }
    if (Src.Length == 0 && curblock.isLineHeadPart == 1)
    {
        curblock.PartID = preblock.PartID;
        scisNextLine = true;
    }

    int c = reader.read();
    if (c == -1)
    {
        tok = TokenType.EOS;
        return false;
    }

    switch (c)
    {
        case ' ':
        case 0x3000: // full-width (ideographic) space
        case '\t':
            break;

        default:
            // TODO: test
            if (curblock.isLineHeadCmt == 0)
            {
                if (curblock.isLineHeadPart == 0)
                {
                    if (paruleStartKeys.Contains((char)c))
                    {
                        char fc = (char)c;
                        foreach (var item in partRuleDic)
                        {
                            if (item.Key[0] == fc)
                            {
                                var len = item.Value.start.Length;
                                if (Offset - 1 + len <= Src.Length)
                                {
                                    var text = Src.Substring(Offset - 1, len);
                                    if (text == item.Key)
                                    {
                                        tok = TokenType.PartitionStart;
                                        if (curblock != null)
                                        {
                                            curblock.PartID = item.Value.id;
                                            scisNextLine = true;
                                        }
                                        reader.setoffset(Offset + text.Length);
                                        return true;
                                    }
                                }
                            }
                        }
                    }
                }
                else
                {
                    if (Offset - 1 == 0)
                    {
                        while (c != -1)
                        {
                            if (paruleStartKeys.Contains((char)c))
                            {
                                StringBuilder buf = new StringBuilder();
                                while (c != -1)
                                {
                                    buf.Append((char)c);
                                    if (partRuleEndDic.ContainsKey(buf.ToString()))
                                    {
                                        if (preblock.PartID == partRuleEndDic[buf.ToString()].id)
                                        {
                                            var endElem = partRuleEndDic[buf.ToString()];
                                            tok = TokenType.PartitionEnd;
                                            curblock.PartID = endElem.id;
                                            scisNextLine = false;
                                            reader.setoffset(Offset + buf.ToString().Length);
                                            return true;
                                        }
                                    }
                                    c = reader.read();
                                }
                            }
                            c = reader.read();
                        }
                        if (c == -1)
                        {
                            // No partition end on this line: it stays inside the partition.
                            tok = TokenType.Partition;
                            curblock.PartID = preblock.PartID;
                            scisNextLine = true;
                        }
                        reader.setoffset(0);
                        c = reader.read();
                    }
                    else if (paruleStartKeys.Contains((char)c))
                    {
                        char fc = (char)c;
                        foreach (var item in partRuleDic)
                        {
                            if (item.Key[0] == fc)
                            {
                                var len = item.Value.start.Length;
                                if (Offset - 1 + len <= Src.Length)
                                {
                                    var text = Src.Substring(Offset - 1, len);
                                    if (text == item.Key)
                                    {
                                        tok = TokenType.PartitionStart;
                                        if (curblock != null)
                                        {
                                            curblock.PartID = item.Key;
                                            scisNextLine = true;
                                        }
                                        reader.setoffset(Offset + text.Length);
                                        return true;
                                    }
                                }
                            }
                        }
                    }
                }
            }

            if (curblock.isLineHeadCmt == 0) // 0: the line does not start inside a block comment
            {
                if (Char.IsDigit((char)c))
                {
                    reader.unread();
                    lexDigit();
                    break;
                }
                else if (Util.isIdentifierPart((char)c))
                {
                    reader.unread();
                    lexKeyWord();
                    break;
                }
                else
                {
                    reader.unread();
                    lexSymbol(curblock);
                    break;
                }
            }
            else // 1: the line starts inside a block comment
            {
                if (Offset - 1 == 0)
                {
                    reader.unread();
                    StringBuilder buf = new StringBuilder();
                    while (true)
                    {
                        c = reader.read();
                        if (c == -1)
                        {
                            break;
                        }
                        buf.Append((char)c);
                        string s = buf.ToString();
                        string end = preblock.mRule.end;
                        if (s.EndsWith(end))
                        {
                            if (multiRuleEndDic.ContainsKey(end))
                            {
                                var rule = multiRuleEndDic[end];
                                if (rule.Detected(end, reader))
                                {
                                    curblock.mRule = rule;
                                    tok = rule.token;
                                    int len = rule.getLen(end, reader);
                                    reader.setoffset(len);
                                    OffsetLenAttr = new Tuple<int, int, Attribute>(0, len, rule.attr);
                                    return true;
                                }
                            }
                        }
                    }
                    if (c == -1 && preblock.mRule != null && multiRuleDic.ContainsKey(preblock.mRule.start))
                    {
                        // End of line reached while still inside the comment.
                        var enelem = multiRuleDic[preblock.mRule.start];
                        curblock.mRule = preblock.mRule;
                        tok = TokenType.MultiLineStart;
                        OffsetLenAttr = new Tuple<int, int, Attribute>(0, Src.Length, enelem.attr);
                        reader.setoffset(Src.Length);
                    }
                    break;
                }
                else
                {
                    if (Char.IsDigit((char)c))
                    {
                        reader.unread();
                        lexDigit();
                    }
                    else if (Util.isIdentifierPart((char)c))
                    {
                        reader.unread();
                        lexKeyWord();
                    }
                    else
                    {
                        reader.unread();
                        lexSymbol(curblock);
                    }
                }
            }
            break;
    }
    return true;
}