KafkaDataConsumer Contract
KafkaDataConsumer is the contract for Kafka data consumers that use an InternalKafkaConsumer for the following:

- Getting a single Kafka ConsumerRecord
- Getting a single AvailableOffsetRange

A KafkaDataConsumer has to be released explicitly.
```scala
package org.apache.spark.sql.kafka010

sealed trait KafkaDataConsumer {
  // only required properties (vals and methods) that have no implementation
  // the others follow
  def internalConsumer: InternalKafkaConsumer
  def release(): Unit
}
```
| Property | Description |
|---|---|
| internalConsumer | InternalKafkaConsumer used to get a single Kafka ConsumerRecord and a single AvailableOffsetRange |
| release | Releases the KafkaDataConsumer (which has to be done explicitly when the consumer is no longer needed) |
| KafkaDataConsumer | Description |
|---|---|
| CachedKafkaDataConsumer | KafkaDataConsumer with a cached InternalKafkaConsumer |
| NonCachedKafkaDataConsumer | KafkaDataConsumer with a non-cached InternalKafkaConsumer (that is closed when released) |
Note: KafkaDataConsumer is a Scala sealed trait, which means that all the implementations are in the same compilation unit (a single file).
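For illustration, the following is a minimal sketch of what one such implementation could look like. It is modelled on the sealed trait above rather than copied from the Spark sources, and it assumes InternalKafkaConsumer exposes a close() method:

```scala
// Hypothetical sketch: a KafkaDataConsumer implementation only has to
// provide the two abstract members, internalConsumer and release.
// As a sealed trait implementation it would live in the same file.
private case class NonCachedKafkaDataConsumer(
    internalConsumer: InternalKafkaConsumer)
  extends KafkaDataConsumer {

  // Releasing a non-cached consumer closes the underlying
  // InternalKafkaConsumer (close() is assumed here).
  override def release(): Unit = internalConsumer.close()
}
```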
Getting Single Kafka ConsumerRecord — get Method
```scala
get(
  offset: Long,
  untilOffset: Long,
  pollTimeoutMs: Long,
  failOnDataLoss: Boolean): ConsumerRecord[Array[Byte], Array[Byte]]
```
get simply requests the InternalKafkaConsumer to get a single Kafka ConsumerRecord.
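As an illustration only, the following sketch shows how a caller could combine get with the explicit release contract: it fetches the records in an offset range and releases the consumer in a finally block. The readRange helper, the poll timeout, and the way offsets are advanced are assumptions for illustration, not Spark code:

```scala
import org.apache.kafka.clients.consumer.ConsumerRecord
import scala.collection.mutable.ArrayBuffer

// Hypothetical helper (not part of Spark): read all records in
// [fromOffset, untilOffset) and always release the consumer.
def readRange(
    consumer: KafkaDataConsumer,
    fromOffset: Long,
    untilOffset: Long): Seq[ConsumerRecord[Array[Byte], Array[Byte]]] = {
  val records = ArrayBuffer.empty[ConsumerRecord[Array[Byte], Array[Byte]]]
  try {
    var offset = fromOffset
    while (offset < untilOffset) {
      // get returns a single record at (or after) the requested offset
      val record = consumer.get(
        offset = offset,
        untilOffset = untilOffset,
        pollTimeoutMs = 1000,   // assumed timeout for illustration
        failOnDataLoss = true)
      records += record
      offset = record.offset + 1
    }
  } finally {
    // KafkaDataConsumer has to be released explicitly
    consumer.release()
  }
  records.toSeq
}
```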
Getting Single AvailableOffsetRange — getAvailableOffsetRange Method
```scala
getAvailableOffsetRange(): AvailableOffsetRange
```
getAvailableOffsetRange simply requests the InternalKafkaConsumer to get a single AvailableOffsetRange.
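As an illustration, a caller could use getAvailableOffsetRange to clamp a requested offset range to what is actually available before fetching. The sketch below is not Spark code and assumes that AvailableOffsetRange exposes the earliest and latest available offsets:

```scala
// Hypothetical helper (not part of Spark): clamp a requested offset range
// to the range the consumer reports as available.
def clampToAvailable(
    consumer: KafkaDataConsumer,
    requestedFrom: Long,
    requestedUntil: Long): (Long, Long) = {
  // earliest/latest are assumed members of AvailableOffsetRange
  val range = consumer.getAvailableOffsetRange()
  val from = math.max(requestedFrom, range.earliest)
  val until = math.min(requestedUntil, range.latest)
  (from, until)
}
```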