How to run multiple Scala operations on one Spark engine? #6853
-
We want to run multiple Scala operations (in different sessions) on one Spark engine, but we found that Kyuubi doesn't support this by default: only one operation is in the RUNNING state while the others are pending. Is there a configuration for this, or does Kyuubi not support it at all?
-
I found a PR that adds a static lock for all operations: #4150. Can we remove that?
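For context, here is a minimal sketch of what such a static lock amounts to; the names below are hypothetical, not Kyuubi's actual identifiers. Every session synchronizes on one JVM-wide monitor, so at most one Scala operation runs while the rest wait.

```scala
// Hypothetical sketch of a static (JVM-wide) lock; the names are
// illustrative, not Kyuubi's actual API.
object ScalaOperationLock

class ExecuteScala(statement: String, interpret: String => Unit) {
  // All sessions contend on the same singleton, so concurrent Scala
  // operations serialize: one RUNNING, the others effectively pending.
  def run(): Unit = ScalaOperationLock.synchronized {
    interpret(statement)
  }
}
```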
-
The situation might change with SPARK-45856 and a series of changes related to Spark Connect. @HaoYang670, you are welcome to evaluate them and report the conclusion.
No, we have to disable concurrent Scala code interpretation, because all interpreters share the same spark.repl.class.outputDir, which is a static configuration that cannot be changed once the Spark context is initialized (a sketch of this constraint follows at the end of this reply). See:
kyuubi/externals/kyuubi-spark-sql-engine/src/main/scala-2.12/org/apache/kyuubi/engine/spark/repl/KyuubiSparkILoop.scala (lines 49 to 50 at commit b265ccb)
https://github.com/apache/spark/blob/332efb2eabb2b9383cfcfb0bf633089f38cdb398/core/src/main/scala/org/apache/spark/SparkContext.scala#L497-L500
See …
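To illustrate the constraint, here is a minimal, self-contained sketch (assuming a local SparkSession; the directory name is illustrative): spark.repl.class.outputDir must be set before the SparkContext starts, because Spark wires that single directory into its file server during initialization.

```scala
import java.nio.file.Files
import org.apache.spark.sql.SparkSession

// Minimal sketch: the REPL class output dir is a static configuration,
// fixed at SparkContext initialization time.
val outputDir = Files.createTempDirectory("spark-repl-classes").toFile

val spark = SparkSession.builder()
  .master("local[*]")
  // Must be set before the context is created; executors load interpreted
  // classes from this single directory for the lifetime of the context.
  .config("spark.repl.class.outputDir", outputDir.getAbsolutePath)
  .getOrCreate()

// Mutating the conf afterwards has no effect on the running context, which
// is why every interpreter in the same engine ends up writing classes here.
println(spark.sparkContext.getConf.get("spark.repl.class.outputDir"))
```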