
UDFs — User-Defined Functions

User-defined functions (UDFs) are a feature of Spark SQL that lets you define new Column-based functions, extending the vocabulary of Spark SQL’s DSL for transforming Datasets.

Important

Use the higher-level standard Column-based functions (with Dataset operators) whenever possible before resorting to user-defined functions, since UDFs are a black box for Spark SQL and it cannot (and does not even try to) optimize them.

As Reynold Xin from the Apache Spark project once said on Spark’s dev mailing list:

There are simple cases in which we can analyze the UDFs byte code and infer what it is doing, but it is pretty difficult to do in general.

You define a new UDF by writing a Scala function and passing it to the udf function, which accepts Scala functions of up to 10 input parameters.
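A minimal sketch of defining a UDF this way (assuming a SparkSession named spark is in scope, e.g. in spark-shell; toUpperUDF is just an example name):

import org.apache.spark.sql.functions.udf
import spark.implicits._  // already imported in spark-shell

// udf wraps a plain Scala function; Spark SQL infers the input and output types
val toUpperUDF = udf { s: String => s.toUpperCase }

val df = Seq("hello", "spark").toDF("text")
df.select(toUpperUDF($"text") as "upper").show()
// +-----+
// |upper|
// +-----+
// |HELLO|
// |SPARK|
// +-----+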

You can register UDFs to use in SQL-based query expressions via UDFRegistration (that is available through SparkSession.udf attribute).
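A sketch of registering the same logic under a name (again assuming a SparkSession named spark; the name toUpper is only an example):

// register a UDF under the name "toUpper" for use in SQL query expressions
spark.udf.register("toUpper", (s: String) => s.toUpperCase)

spark.sql("SELECT toUpper('hello') AS upper").show()
// +-----+
// |upper|
// +-----+
// |HELLO|
// +-----+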

You can query for available standard and user-defined functions using the Catalog interface (that is available through SparkSession.catalog attribute).
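For example, a sketch of listing functions through the Catalog (the equality filter is just one way to narrow the output):

// list all standard and user-defined functions
spark.catalog.listFunctions.show(false)

// or narrow the listing down to the UDF registered above
import org.apache.spark.sql.functions.col
spark.catalog.listFunctions.filter(col("name") === "toUpper").show(false)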

Note
UDFs play a vital role in Spark MLlib when defining new Transformers, function objects that transform DataFrames into DataFrames by introducing new columns.

udf Functions (in functions object)

The org.apache.spark.sql.functions object comes with the udf function that lets you define a UDF for a Scala function f.

Tip
Define custom UDFs based on “standalone” Scala functions (e.g. toUpperUDF) so you can test the Scala functions the Scala way (without Spark SQL’s “noise”) and, once they are defined, reuse the UDFs in UnaryTransformers. See the sketch below.
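Following the tip, a sketch of the “standalone function first” approach (toUpper and toUpperUDF are hypothetical names):

// a “standalone” Scala function, testable without any Spark SQL machinery
val toUpper: String => String = _.toUpperCase

// test it the Scala way, e.g. in a plain unit test
assert(toUpper("hello") == "HELLO")

// lift it into a UDF once, then reuse the UDF in Column expressions or UnaryTransformers
import org.apache.spark.sql.functions.udf
val toUpperUDF = udf(toUpper)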