# Trigger

`Trigger` is used to define how often a streaming query should be executed to produce results.

```scala
import org.apache.spark.sql._
```
> **Note**: `Trigger` is a sealed trait, so all the available implementations are in the same file, `Trigger.scala`.
> **Note**: A trigger can also be considered a batch (as in Spark Streaming).
Import `org.apache.spark.sql` to work with `Trigger` and its only implementation, `ProcessingTime`.
> **Note**: `Trigger` was introduced in the commit for [SPARK-14176][SQL] Add `DataFrameWriter.trigger` to set the stream batch period.
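To show where a trigger fits in practice, here is a minimal sketch of setting a processing-time trigger on a streaming query via `DataStreamWriter.trigger` (the socket source, host, and port are assumptions for illustration; `spark` is an existing `SparkSession`):

```scala
import org.apache.spark.sql._

// Hypothetical streaming source reading lines from a local socket
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// Execute the query every 10 seconds to produce results
lines.writeStream
  .format("console")
  .trigger(ProcessingTime("10 seconds"))
  .start()
```

Without an explicit trigger, the query runs a new batch as soon as the previous one finishes; the trigger above enforces a 10-second batch period instead.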
## ProcessingTime

`ProcessingTime` is the only available implementation of the `Trigger` sealed trait. It assumes that milliseconds is the minimum time unit.

You can create an instance of `ProcessingTime` using the following constructors:
- `ProcessingTime(Long)` that accepts non-negative values that represent milliseconds.

  ```scala
  ProcessingTime(10)
  ```

- `ProcessingTime(interval: String)` or `ProcessingTime.create(interval: String)` that accept `CalendarInterval` instances with or without the leading `interval` string.

  ```scala
  ProcessingTime("10 milliseconds")
  ProcessingTime("interval 10 milliseconds")
  ```

- `ProcessingTime(Duration)` that accepts `scala.concurrent.duration.Duration` instances.

  ```scala
  ProcessingTime(10.seconds)
  ```

- `ProcessingTime.create(interval: Long, unit: TimeUnit)` for `Long` and `java.util.concurrent.TimeUnit` instances.

  ```scala
  ProcessingTime.create(10, TimeUnit.SECONDS)
  ```
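The constructors above are interchangeable ways of expressing the same interval. Since `ProcessingTime` is a case class over a number of milliseconds, instances built from equal intervals compare equal, as this sketch (assuming Spark on the classpath) illustrates:

```scala
import org.apache.spark.sql._
import scala.concurrent.duration._
import java.util.concurrent.TimeUnit

// All four forms describe the same 10-second batch period
val fromMillis   = ProcessingTime(10000)
val fromString   = ProcessingTime("10 seconds")
val fromDuration = ProcessingTime(10.seconds)
val fromUnit     = ProcessingTime.create(10, TimeUnit.SECONDS)

// Equal intervals produce equal ProcessingTime instances
assert(fromMillis == fromString)
assert(fromString == fromDuration)
assert(fromDuration == fromUnit)
```

Which form to use is a matter of style: the `String` form reads well in application code, while the `TimeUnit` form is convenient from Java.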