Parsing a Custom EBNF Grammar with Scala's Token-Based Parsers

Preface

Recently, on a project migrating from Oracle to a Spark platform, I ran into a requirement to translate platform formulas into SparkSQL (on Hive); Spark itself is written in Scala, its native language. After analyzing the requirements, I planned the classic compiler-theory approach: define the formula language as an EBNF grammar and parse it on top of a token stream, with a regex-based lexer feeding the grammar parser. Then I discovered StandardTokenParsers, Scala's token-based parser class, which bundles both pieces.

The platform formula and its SparkSQL translation

A platform formula looks like this:

if (XX1_m001[D003]="邢おb7肮α䵵薇" || XX1_m001[H003]<"2") && XX1_m001[D005]!="wed" then XX1_m001[H022,COUNT]

The field value "邢おb7肮α䵵薇" is there to check that all kinds of character sets are matched correctly.
The corresponding SparkSQL looks like the following; since this runs as Hive on Spark, it is close to the equivalent Oracle SQL:

SELECT COUNT(H022) FROM XX1_m001 WHERE (XX1_m001.D003='邢おb7肮α䵵薇' OR  XX1_m001.H003<'2')  AND  XX1_m001.D005!='wed'

All in all it is fairly simple, because this is only meant as a demo.

EBNF grammar for the platform formula and the lexical design

expr-condition ::= tableName "[" valueName "]" comparator Condition
expr-priority  ::= ["("] expr-condition [")"]
expr-front     ::= expr-priority (("&&" | "||") expr-priority)*
expr-back      ::= tableName "[" valueName "," operator "]"
expr           ::= "if" expr-front "then" expr-back

The lexical definitions are as follows:

operator => [SUM,COUNT]
tableName,valueName => ident  # ident is the identifier token
comparator => ["=",">=","<=",">","<","!="]
Condition => stringLit  # stringLit is the string-literal token
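The `ident` and `stringLit` token classes come from StdLexical, the lexer built into StandardTokenParsers. To see how a formula fragment splits into keywords, identifiers, delimiters, and string literals, you can drive the Scanner by hand. A small sketch (the `TokenDump` object and its helper are mine, with the delimiter and keyword sets trimmed to what the fragment needs):

```scala
import scala.util.parsing.combinator.syntactical.StandardTokenParsers
import scala.util.parsing.input.Reader

// Sketch: collect the tokens StdLexical produces for an input string,
// to check the delimiter / keyword / identifier split.
object TokenDump extends StandardTokenParsers {
  lexical.delimiters += ("=", "!=", "[", "]", ",")
  lexical.reserved += ("if", "then", "COUNT")

  def tokens(input: String): List[lexical.Token] = {
    var reader: Reader[lexical.Token] = new lexical.Scanner(input)
    val buf = scala.collection.mutable.ListBuffer.empty[lexical.Token]
    while (!reader.atEnd) {
      buf += reader.first   // current token
      reader = reader.rest  // advance the scanner
    }
    buf.toList
  }

  def main(args: Array[String]): Unit =
    tokens("if XX1_m001[D005]!=\"wed\"").foreach(println)
}
```

For `if XX1_m001[D005]!="wed"` this yields seven tokens: the keyword `if`, the identifiers `XX1_m001` and `D005`, the delimiters `[`, `]` and `!=`, and the string literal `wed`.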

Parsing the EBNF grammar with Scala's token-based parsers

A token-based parser in Scala extends the StandardTokenParsers class, which provides convenient parser combinators along with a ready-made lexer.
We put the delimiters the translator will encounter into the lexical.delimiters set, and the keywords into the lexical.reserved set.
Looking at the platform formula, "=", ">=", "<=", ">", "<", "!=", "&&", "||", "[", "]", ",", "(", ")" are all delimiters. We could also have treated "=", ">=", "<=", ">", "<", "!=", "&&", "||" as keywords, but my habit is to reserve keywords for tokens made of letters, so the keyword set here is "if", "then", "SUM", "COUNT".
In code, that looks like this:

lexical.delimiters += ("=",">=","<=",">","<","!=","&&","||","[","]",",","(",")")
lexical.reserved += ("if","then","SUM","COUNT")

Pretty easy, no?
Now let's look at how to use the token-based parsers to parse the EBNF grammar we designed above. Here is the code first:

import scala.util.parsing.combinator.syntactical.StandardTokenParsers

class ExprParsre extends StandardTokenParsers {
  lexical.delimiters += ("=", ">=", "<=", ">", "<", "!=", "&&", "||", "[", "]", ",", "(", ")")
  lexical.reserved += ("if", "then", "SUM", "COUNT")

  // expr ::= "if" expr-front "then" expr-back
  def expr: Parser[String] = "if" ~ expr_front ~ "then" ~ expr_back ^^ {
    case "if" ~ exp1 ~ "then" ~ exp2 => exp2 + " WHERE " + exp1
  }

  // A condition with optional parentheses. An unmatched half is emitted as-is,
  // so a "(" that is closed several conditions later still reaches the output.
  def expr_priority: Parser[String] = opt("(") ~ expr_condition ~ opt(")") ^^ {
    case Some("(") ~ condition ~ Some(")") => "(" + condition + ")"
    case Some("(") ~ condition ~ None      => "(" + condition
    case None ~ condition ~ Some(")")      => condition + ")"
    case None ~ condition ~ None           => condition
  }

  // expr-condition ::= tableName "[" valueName "]" comparator Condition
  def expr_condition: Parser[String] =
    ident ~ "[" ~ ident ~ "]" ~ ("=" | ">=" | "<=" | ">" | "<" | "!=") ~ stringLit ^^ {
      case table ~ "[" ~ field ~ "]" ~ op ~ lit =>
        table + "." + field + op + "'" + lit + "'"
    }

  def comparator: Parser[String] = ("&&" | "||") ^^ {
    case "&&" => " AND "
    case "||" => " OR "
  }

  // expr-front ::= expr-priority (("&&" | "||") expr-priority)*
  def expr_front: Parser[String] = expr_priority ~ rep(comparator ~ expr_priority) ^^ {
    case exp1 ~ rest => exp1 + rest.map(x => x._1 + " " + x._2).mkString(" ")
  }

  // expr-back ::= tableName "[" valueName "," operator "]"
  def expr_back: Parser[String] = ident ~ "[" ~ ident ~ "," ~ ("SUM" | "COUNT") ~ "]" ^^ {
    case table ~ "[" ~ field ~ "," ~ op ~ "]" =>
      "SELECT " + op + "(" + field + ") FROM " + table
  }

  // Run parser p against the whole input; phrase fails on trailing tokens.
  def parserAll[T](p: Parser[T], input: String) =
    phrase(p)(new lexical.Scanner(input))
}
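With the class in place, translating a formula is a single call to parserAll. A minimal driver sketch (the `ExprDemo` object, the test formula, and the println handling are mine; this needs the scala-parser-combinators module on the classpath, since the parser combinators left the standard library after Scala 2.10):

```scala
object ExprDemo {
  def main(args: Array[String]): Unit = {
    val parser = new ExprParsre
    val formula = "if (XX1_m001[D003]=\"abc\" || XX1_m001[H003]<\"2\") && XX1_m001[D005]!=\"wed\" " +
      "then XX1_m001[H022,COUNT]"
    parser.parserAll(parser.expr, formula) match {
      case parser.Success(sql, _) => println(sql)  // the translated SparkSQL
      case failure                => println("parse failed: " + failure)
    }
  }
}
```

Because parserAll wraps the parser in phrase, any trailing tokens after a well-formed formula are reported as a parse error rather than silently ignored.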

Author: zhkmxx930

Published: January 5, 2019, 15:01

Last updated: January 25, 2019, 09:01

Original link: https://zhkmxx9302013.github.io/post/33864.html

License: CC BY-NC-ND 4.0 International. Please retain the original link and author when reposting.
