Kyuubi is an enhanced edition of Apache Spark's built-in
Thrift JDBC/ODBC Server.
It is mainly designed for running SQL directly against a cluster whose components, including HDFS, YARN, Hive MetaStore,
and Kyuubi itself, are all secured. The main purpose of Kyuubi is to realize an architecture that not only speeds up SQL queries
with the Spark SQL engine but also stays compatible with HiveServer2's behavior as much as possible. Thus, Kyuubi uses the same protocol
as HiveServer2, which can be found at HiveServer2 Thrift API,
as the client-server communication mechanism, and a user-session-level
SparkContext instantiating / registering / caching / recycling
mechanism to implement multi-tenancy.
Because Kyuubi uses the same protocol as HiveServer2, it supports all kinds of JDBC/ODBC clients, as well as user applications written
against this Thrift API, as shown in the picture above. A user, Tom, can use various types of clients to create connections with the Kyuubi Server,
and each connection is bound to a
SparkSession instance, which also contains an independent
HiveMetaStoreClient to interact with the Hive MetaStore
Server. Tom can set session-level configurations for each connection without affecting the others.
Kyuubi does not occupy any resources from the cluster manager (YARN) during startup, and it gives all resources back to YARN when there
is no active session interacting with a
SparkContext. With the ability of Spark Dynamic Resource Allocation,
it also allows us to dynamically allocate resources within a
SparkContext, a.k.a. a YARN application.
Resource-related configurations, such as spark.executor.cores/memory, can be set in the connection string and will be used to initialize the SparkContext.
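As a sketch of how such a connection string could be assembled (the URL grammar here follows HiveServer2's jdbc:hive2://host:port/db?conf_list convention; the host, port, and values are placeholders, not Kyuubi defaults):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: building a HiveServer2-style JDBC URL that carries Spark resource
// configurations for Kyuubi to use when initializing the user's SparkContext.
// Host, port, and the chosen values below are placeholders.
public class KyuubiUrlBuilder {
    static String build(String host, int port, String db, Map<String, String> confs) {
        // Join configurations as key=value pairs separated by semicolons
        String confList = confs.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(";"));
        String url = "jdbc:hive2://" + host + ":" + port + "/" + db;
        return confList.isEmpty() ? url : url + "?" + confList;
    }

    public static void main(String[] args) {
        Map<String, String> confs = new LinkedHashMap<>();
        confs.put("spark.executor.cores", "4");
        confs.put("spark.executor.memory", "8g");
        System.out.println(build("kyuubi-host", 10009, "default", confs));
        // prints jdbc:hive2://kyuubi-host:10009/default?spark.executor.cores=4;spark.executor.memory=8g
    }
}
```

A client would pass such a URL to DriverManager.getConnection with the Hive JDBC driver; each new connection then carries its own resource settings.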
Kyuubi implements a
SparkSessionCacheManager to control
SparkContext instantiating, registering,
caching, reusing, and recycling. Each user has one and only one
SparkContext instance in the Kyuubi Server after connecting
to the server for the first time, which is cached in the
SparkSessionCacheManager for the whole lifetime of the connection and
for a while after all connections are closed.
All connections belonging to the same user share this
SparkContext to generate their own SparkSession instances.
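The lifecycle described above can be modeled as a small per-user, reference-counted cache that recycles an entry only after its last connection closes and an idle timeout elapses. This is an illustrative sketch only; the class and method names here are invented for the example and are not Kyuubi's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative model: one shared "context" per user, reference-counted by
// open connections, recycled only after the last connection closes and an
// idle timeout passes. Names are invented for this sketch.
public class UserSessionCache<C> {
    private static final class Entry<C> {
        final C context;
        int refCount = 0;
        long lastReleaseMillis = Long.MAX_VALUE; // MAX_VALUE = still in use
        Entry(C context) { this.context = context; }
    }

    private final Map<String, Entry<C>> cache = new ConcurrentHashMap<>();
    private final long idleTimeoutMillis;

    public UserSessionCache(long idleTimeoutMillis) {
        this.idleTimeoutMillis = idleTimeoutMillis;
    }

    // A new connection for `user` reuses the cached context if present,
    // otherwise instantiates one via the factory.
    public synchronized C acquire(String user, Supplier<C> factory) {
        Entry<C> e = cache.computeIfAbsent(user, u -> new Entry<>(factory.get()));
        e.refCount++;
        e.lastReleaseMillis = Long.MAX_VALUE;
        return e.context;
    }

    // Called when a connection closes; the idle clock starts on the last release.
    public synchronized void release(String user, long nowMillis) {
        Entry<C> e = cache.get(user);
        if (e != null && --e.refCount == 0) e.lastReleaseMillis = nowMillis;
    }

    // Periodic sweep: recycle contexts that have been idle past the timeout.
    public synchronized void sweep(long nowMillis) {
        cache.entrySet().removeIf(en -> en.getValue().refCount == 0
                && nowMillis - en.getValue().lastReleaseMillis >= idleTimeoutMillis);
    }

    public synchronized boolean isCached(String user) {
        return cache.containsKey(user);
    }
}
```

The key design point this models is that a second connection from the same user never creates a second context; it only bumps the reference count on the cached one.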
Spark provides a mechanism to dynamically adjust the resources your application occupies based on the workload. It means that your application may give resources back to the cluster if they are no longer used and request them again later when there is demand. This feature is particularly useful if multiple applications share resources in your Spark cluster.
Please refer to Dynamic Resource Allocation to see more.
Please refer to Dynamic Allocation Configuration to learn how to configure.
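For instance, a minimal dynamic allocation setup in spark-defaults.conf might look like the following; the executor bounds and timeout are illustrative values, not recommendations:

```properties
# Enable dynamic allocation and the external shuffle service it relies on
spark.dynamicAllocation.enabled             true
spark.shuffle.service.enabled               true
# Illustrative bounds on the number of executors per SparkContext
spark.dynamicAllocation.minExecutors        1
spark.dynamicAllocation.maxExecutors        50
# Release an executor after it has been idle for this long
spark.dynamicAllocation.executorIdleTimeout 60s
```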
With these features, Kyuubi allows us to use computing resources more efficiently.
Please refer to the Authentication/Security Guide in the online documentation for an overview on how to enable security for Kyuubi.
Kyuubi can be integrated with Spark Authorizer to offer row/column-level access control. Kyuubi does not explicitly support the spark-authorizer plugin yet; as an example, you may refer to Spark Branch Authorized.
Multiple Kyuubi Server instances can register themselves with ZooKeeper when
spark.kyuubi.ha.enabled=true, and
clients can then find a Kyuubi Server through ZooKeeper. When a client requests a server instance, ZooKeeper returns
a randomly selected registered one, which offers both high availability and client-side load balancing. The related configurations are:
| Name | Default | Description |
|---|---|---|
| spark.kyuubi.ha.enabled | false | Whether KyuubiServer supports dynamic service discovery for its clients. To support this, each instance of KyuubiServer currently uses ZooKeeper to register itself when it is brought up. JDBC/ODBC clients should use the ZooKeeper ensemble: spark.kyuubi.ha.zk.quorum in their connection string. |
| spark.kyuubi.ha.zk.quorum | none | Comma-separated list of ZooKeeper servers to talk to, when KyuubiServer supports service discovery via ZooKeeper. |
| spark.kyuubi.ha.zk.namespace | kyuubiserver | The parent node in ZooKeeper used by KyuubiServer when supporting dynamic service discovery. |
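For example, assuming the client uses the standard Hive JDBC driver's ZooKeeper service discovery mode (the ZooKeeper hostnames below are placeholders), a connection string pointing at the ensemble rather than a single server might look like:

```
jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubiserver
```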
Kyuubi's internals are simple to understand, as shown in the picture below. We may talk about them in more detail later.