I don't think this feature is supported. Another option in PySpark is to go through the JDBC driver, which I did try:
es_df = spark.read.jdbc(url="jdbc:es://http://192.168.1.71:9200", table = "(select * from eg_flight) mytable")
Py4JJavaError: An error occurred while calling o2488.jdbc.
: java.sql.SQLFeatureNotSupportedException: Found 1 problem(s)
line 1:8: Unexecutable item
...
Another approach is to use plain Python with requests, although I wouldn't recommend it for large result sets.
import requests as r
import json

# SQL statement to run against Elasticsearch
es_template = {
    "query": "select * from eg_flight"
}

es_link = "http://192.168.1.71:9200/_xpack/sql"
headers = {'Content-type': 'application/json'}

if __name__ == "__main__":
    load = r.post(es_link, data=json.dumps(es_template), headers=headers)
    if load.status_code == 200:
        load = load.json()
        # do something with it
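Note that the SQL endpoint returns a JSON body with `columns` and `rows` rather than documents, so the response usually needs reshaping before use. A minimal sketch of that step (the `sample` payload below is a made-up example, but it follows the columns/rows shape the SQL API returns):

```python
def rows_to_dicts(payload):
    """Reshape an Elasticsearch SQL response ({"columns": [...], "rows": [...]})
    into a list of dicts keyed by column name."""
    names = [col["name"] for col in payload["columns"]]
    return [dict(zip(names, row)) for row in payload["rows"]]

# Hypothetical sample payload, mimicking the SQL endpoint's response shape
sample = {
    "columns": [{"name": "origin", "type": "keyword"},
                {"name": "price", "type": "double"}],
    "rows": [["SFO", 120.5], ["JFK", 99.0]],
}

print(rows_to_dicts(sample))
# → [{'origin': 'SFO', 'price': 120.5}, {'origin': 'JFK', 'price': 99.0}]
```

For large results the API also returns a `cursor` field you would have to POST back to page through the rest, which is part of why I wouldn't use this path for big datasets.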