
azure-sqldb-spark's Introduction

Updated Jun 2020: This project is not being actively maintained. Instead, the Apache Spark Connector for SQL Server and Azure SQL is now available, with support for Python and R bindings, an easier-to-use interface for bulk inserting data, and many other improvements. We encourage you to actively evaluate and use the new connector.

Spark connector for Azure SQL Databases and SQL Server


The Spark connector for Azure SQL Database and SQL Server enables SQL databases, including Azure SQL Database and SQL Server, to act as an input data source or output data sink for Spark jobs. It allows you to use real-time transactional data in big data analytics and persist results for ad-hoc queries or reporting.

Compared to the built-in Spark JDBC connector, this connector provides the ability to bulk insert data into SQL databases, which can outperform row-by-row insertion by 10x to 20x. The Spark connector for Azure SQL Databases and SQL Server also supports AAD authentication, allowing you to connect securely to your Azure SQL databases from Azure Databricks using your AAD account. Because it exposes an interface similar to the built-in JDBC connector, it is easy to migrate existing Spark jobs to this connector.

How to connect to Spark using this library

This connector uses the Microsoft JDBC Driver for SQL Server to move data between Spark and the Azure SQL database. Results are returned as DataFrames.

All connection properties supported by the Microsoft JDBC Driver for SQL Server can be used with this connector. Add connection properties as fields in the com.microsoft.azure.sqldb.spark.config.Config object.

Reading from Azure SQL Database or SQL Server

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"            -> "mysqlserver.database.windows.net",
  "databaseName"   -> "MyDatabase",
  "dbTable"        -> "dbo.Clients"
  "user"           -> "username",
  "password"       -> "*********",
  "connectTimeout" -> "5", //seconds
  "queryTimeout"   -> "5"  //seconds
))

val collection = sqlContext.read.sqlDB(config)
collection.show()
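
The AAD authentication support mentioned above works through the same Config object. The following is only a minimal sketch of an AAD password read, assuming the adal4j library (for example com.microsoft.azure:adal4j) is attached to the cluster; the server, database, table, and user values are placeholders:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Placeholder values throughout; "authentication" -> "ActiveDirectoryPassword"
// switches the JDBC driver to AAD password authentication.
val aadConfig = Config(Map(
  "url"            -> "mysqlserver.database.windows.net",
  "databaseName"   -> "MyDatabase",
  "dbTable"        -> "dbo.Clients",
  "user"           -> "aaduser@mytenant.onmicrosoft.com", // hypothetical AAD user
  "password"       -> "*********",
  "authentication" -> "ActiveDirectoryPassword",
  "encrypt"        -> "true"
))

val aadCollection = sqlContext.read.sqlDB(aadConfig)
aadCollection.show()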

Writing to Azure SQL Database or SQL Server

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
 
// Acquire a DataFrame collection (val collection)

val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.Clients"
  "user"         -> "username",
  "password"     -> "*********"
))

import org.apache.spark.sql.SaveMode
collection.write.mode(SaveMode.Append).sqlDB(config)

Pushdown query to Azure SQL Database or SQL Server

For SELECT queries that return results, use the approach shown above in Reading from Azure SQL Database or SQL Server. Non-query statements can be pushed down as follows:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.query._
val query = """
              |UPDATE Customers
              |SET ContactName = 'Alfred Schmidt', City= 'Frankfurt'
              |WHERE CustomerID = 1;
            """.stripMargin

val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "user"         -> "username",
  "password"     -> "*********",
  "queryCustom"  -> query
))

sqlContext.sqlDBQuery(config)

Bulk Copy to Azure SQL Database or SQL Server

import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

/** 
  Add column metadata.
  If not specified, metadata is automatically read
  from the destination table, which may hurt performance.
*/
var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Title", java.sql.Types.NVARCHAR, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "FirstName", java.sql.Types.NVARCHAR, 50, 0)
bulkCopyMetadata.addColumnMetadata(3, "LastName", java.sql.Types.NVARCHAR, 50, 0)

val bulkCopyConfig = Config(Map(
  "url"               -> "mysqlserver.database.windows.net",
  "databaseName"      -> "MyDatabase",
  "user"              -> "username",
  "password"          -> "*********",
  "databaseName"      -> "MyDatabase",
  "dbTable"           -> "dbo.Clients",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))

df.bulkCopyToSqlDB(bulkCopyConfig, bulkCopyMetadata)
//df.bulkCopyToSqlDB(bulkCopyConfig) if no metadata is specified.

Requirements

Officially supported versions

Component | Versions Supported
Apache Spark | 2.0.2 or later
Scala | 2.10 or later
Microsoft JDBC Driver for SQL Server | 6.2 to 7.4 ^
Microsoft SQL Server | SQL Server 2008 or later
Azure SQL Databases | Supported

^ Driver version 8.x not tested

Download

Download from Maven

You can download the latest version from Maven.

You can also use the following coordinate to import the library into Azure Databricks: com.microsoft.azure:azure-sqldb-spark:1.0.2
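
For builds outside Databricks, the same coordinate can be added as a regular dependency; for example, a minimal sbt line for illustration:

libraryDependencies += "com.microsoft.azure" % "azure-sqldb-spark" % "1.0.2"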

Build this project

Currently, the connector project uses Maven. To build the connector without dependencies, you can run:

mvn clean package

Contributing & Feedback

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

To give feedback and/or report an issue, open a GitHub Issue.

Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

azure-sqldb-spark's People

Contributors

allenwux, arvindshmicrosoft, denzilribeiro, fokko, gbrueckl, gregl83, ksaliya, microsoftopensource, msftgits, romitgirdhar, slyons


azure-sqldb-spark's Issues

bulkCopyToSqlDB Table Needs to Already Exist?

Does the destination table in the bulk copy need to exist prior to the operation?

org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 238.0 failed 4 times, most recent failure: Lost task 3.3 in stage 238.0 (TID 2269, 00.000.00.0, executor 0): com.microsoft.sqlserver.jdbc.SQLServerException: Unable to retrieve column metadata.
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.getDestinationMetadata(SQLServerBulkCopy.java:1777)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:1676)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:669)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:127)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:951)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:951)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2284)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2284)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1526)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'ETL.Dim_SKU_Update6'.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
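
The "Invalid object name" error above suggests the destination table does need to exist before bulkCopyToSqlDB runs. A minimal sketch of one way to ensure that, using the connector's own pushdown-query support; the table name is taken from the error and the column definitions are placeholders:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.query._

// Create the destination table first (placeholder schema), then run the bulk copy.
val createTableQuery = """
                        |IF OBJECT_ID('ETL.Dim_SKU_Update6') IS NULL
                        |  CREATE TABLE ETL.Dim_SKU_Update6 (SkuId INT, SkuName NVARCHAR(255));
                      """.stripMargin

val createConfig = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "user"         -> "username",
  "password"     -> "*********",
  "queryCustom"  -> createTableQuery
))

sqlContext.sqlDBQuery(createConfig)
// ...then df.bulkCopyToSqlDB(bulkCopyConfig) can target the now-existing table.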

dataframe.bulkCopyToSqlDB exception when using databricks-connect

I have a Spark application written in Scala and I'm using databricks-connect to run the application on my databricks cluster. Everything works flawlessly except this library.

When I try to `df.bulkCopyToSqlDB(config)`, after some time I get the following exception:

Exception in thread "main" java.lang.AssertionError: assertion failed
	at scala.Predef$.assert(Predef.scala:156)
	at com.databricks.service.SparkServiceClassSync$.checkSynced(SparkServiceClassSync.scala:226)
	at org.apache.spark.sql.util.ProtoSerializer.org$apache$spark$sql$util$ProtoSerializer$$writeReplaceClassDescriptor(ProtoSerializer.scala:2808)
	at org.apache.spark.sql.util.ProtoSerializer$$anon$1.writeClassDescriptor(ProtoSerializer.scala:2799)
	at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1282)
	at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1231)
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1427)
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
	at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
	at org.apache.spark.sql.util.ProtoSerializer.serializeShadeAnonFun(ProtoSerializer.scala:2802)
	at org.apache.spark.sql.util.ProtoSerializer.serializeObject(ProtoSerializer.scala:2778)
	at com.databricks.service.SparkServiceRPCClientStub$$anonfun$executeRDD$1.apply(SparkServiceRPCClientStub.scala:256)
	at com.databricks.service.SparkServiceRPCClientStub$$anonfun$executeRDD$1.apply(SparkServiceRPCClientStub.scala:248)
	at com.databricks.spark.util.Log4jUsageLogger.recordOperation(UsageLogger.scala:172)
	at com.databricks.spark.util.UsageLogging$class.recordOperation(UsageLogger.scala:297)
	at com.databricks.service.SparkServiceRPCClientStub.recordOperation(SparkServiceRPCClientStub.scala:60)
	at com.databricks.service.SparkServiceRPCClientStub.executeRDD(SparkServiceRPCClientStub.scala:248)
	at com.databricks.service.SparkClient$.executeRDD(SparkClient.scala:204)
	at com.databricks.spark.util.SparkClientContext$.executeRDD(SparkClientContext.scala:121)
	at org.apache.spark.scheduler.DAGScheduler.submitJob(DAGScheduler.scala:828)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:883)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2245)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2267)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2286)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2311)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:951)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:949)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:379)
	at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:949)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply$mcV$sp(Dataset.scala:2748)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2748)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2748)
	at org.apache.spark.sql.Dataset$$anonfun$withNewRDDExecutionId$1.apply(Dataset.scala:3409)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv$1.apply(SQLExecution.scala:111)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:240)
	at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:97)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:170)
	at org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3405)
	at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2747)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.bulkCopyToSqlDB(DataFrameFunctions.scala:72)
	at DataLogSqlLoader.bulkCopyLoad(DataLogSqlLoader.scala:34)
	at DataLogSqlLoader.writeTenantDataLogs(DataLogSqlLoader.scala:20)
	at Main$.processSource(Main.scala:133)
	at Main$$anonfun$processSystem$1.apply(Main.scala:81)
	at Main$$anonfun$processSystem$1.apply(Main.scala:79)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at Main$.processSystem(Main.scala:79)
	at Main$$anonfun$processTenant$1.apply(Main.scala:72)
	at Main$$anonfun$processTenant$1.apply(Main.scala:70)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at Main$.processTenant(Main.scala:70)
	at Main$$anonfun$main$1.apply(Main.scala:63)
	at Main$$anonfun$main$1.apply(Main.scala:61)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
	at Main$.main(Main.scala:61)
	at Main.main(Main.scala)
Disconnected from the target VM, address: '127.0.0.1:58335', transport: 'socket'

Process finished with exit code 1

The same code seems to work fine when executing it from a Databricks Scala notebook.
Is this an issue with databricks-connect or this library?

Running Stored Proc using the library

Hi,

We have a use case where we need the results of a stored procedure in a DataFrame.

So I created a query and tried running it as a custom query, but that did not work out.

Error: com.microsoft.sqlserver.jdbc.SQLServerException: "" is not a recognized table hints option. If it is intended as a parameter to a table-valued function or to the CHANGETABLE function, ensure that your database compatibility mode is set to 90.

Can someone help?

Matching bulk copy metadata with target table metadata

Version information:

  • Databricks Runtime version 6.2
  • Apache Spark version 2.4.4
  • Scala version 2.11
  • com.microsoft.azure:azure-sqldb-spark:1.0.2
  • com.microsoft.azure:adal4j:1.6.4

Issue:
I'm trying to use the bulkCopyToSqlDB Dataframe function with target table metadata that I have defined, following this example: https://docs.microsoft.com/en-gb/azure/sql-database/sql-database-spark-connector#write-data-to-azure-sql-database-or-sql-server-using-bulk-insert .

However, I'm having a hard time understanding how to match the metadata definition with the target table's actual metadata. Connectivity and automatic metadata discovery work fine, so the issue is with the manual metadata definition.

I would request:

  • Additional documentation on how to match the library's metadata definitions with SQL DB data types
  • And/or a more informative error message when the metadata definition does not match the target table definition

Example Azure SQL Database table:

CREATE TABLE exampletable (
    guid int,
    name varchar(255),
    abbreviation varchar(255),
    commune varchar(255),
    level varchar(255),
    _loaded datetime
);

Example code:

import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

/* 
  Add column metadata.
  If not specified, metadata is automatically read
  from the destination table, which may hurt performance.
*/
var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "guid", java.sql.Types.INTEGER, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "name", java.sql.Types.VARCHAR, 255, 0)
bulkCopyMetadata.addColumnMetadata(3, "abbreviation", java.sql.Types.VARCHAR, 255, 0)
bulkCopyMetadata.addColumnMetadata(4, "level", java.sql.Types.VARCHAR, 255, 0)
bulkCopyMetadata.addColumnMetadata(5, "_loaded", java.sql.Types.TIMESTAMP, 0, 0)

val bulkCopyConfig = Config(Map(
   "url"                   -> "examplesqlserver.database.windows.net",
   "databaseName"          -> "examplesqldatabase",
   "dbTable"               -> "dbo.exampletable",
   "accessToken"           -> accessToken,
   "hostNameInCertificate" -> "*.database.windows.net",
   "encrypt"               -> "true",
   "bulkCopyBatchSize"     -> "2500",
   "bulkCopyTableLock"     -> "true",
   "bulkCopyTimeout"       -> "600"
))

//Use this write command if metadata is specified
multiprimusOutputDf.bulkCopyToSqlDB(bulkCopyConfig, bulkCopyMetadata)

Actual response:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 230.0 failed 4 times, most recent failure: Lost task 1.3 in stage 230.0 (TID 4042, 10.0.3.4, executor 0): com.microsoft.sqlserver.jdbc.SQLServerException: Source and destination schemas do not match.

Desired response:

  • Give some indication which metadata do not match and how to fix
  • Or provide some documentation how to match JDBC data types with SQL DB data types

Edit 16.1.

  • Reworded the desired response section a bit
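
A hedged guess at the mismatch in the example above: the BulkCopyMetadata definition skips the commune column, while the ordinals appear to need to cover the destination columns in table order. A sketch of a definition that mirrors the example CREATE TABLE statement (treat the precision/scale values as illustrative, not authoritative):

import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata

// One entry per destination column, in table order; precision/scale values are guesses.
var fullBulkCopyMetadata = new BulkCopyMetadata
fullBulkCopyMetadata.addColumnMetadata(1, "guid", java.sql.Types.INTEGER, 0, 0)
fullBulkCopyMetadata.addColumnMetadata(2, "name", java.sql.Types.VARCHAR, 255, 0)
fullBulkCopyMetadata.addColumnMetadata(3, "abbreviation", java.sql.Types.VARCHAR, 255, 0)
fullBulkCopyMetadata.addColumnMetadata(4, "commune", java.sql.Types.VARCHAR, 255, 0)
fullBulkCopyMetadata.addColumnMetadata(5, "level", java.sql.Types.VARCHAR, 255, 0)
fullBulkCopyMetadata.addColumnMetadata(6, "_loaded", java.sql.Types.TIMESTAMP, 0, 0)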

Bulk copy issues with numeric data type

Using the dataset, I'm able to perform a basic insert from the DataFrame with no issue. However, when switching the command to use bulkCopyToSqlDB I experience issues with the data type: com.microsoft.sqlserver.jdbc.SQLServerException: One or more values is out of range of values for the decimal SQL Server data type

Narrowing it down to just one column for troubleshooting: the max value in the column is 1328569.91 and the column definition is numeric(15,6). Is there something to be aware of when working with Azure SQL DW?
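
Not an authoritative answer, but one thing worth trying for decimal mismatches like this is to declare the decimal column's precision and scale explicitly in BulkCopyMetadata rather than letting the bulk copy infer them. A sketch, assuming (hypothetically) that the column is the first one and is named amount:

import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import com.microsoft.azure.sqldb.spark.connect._

// Hypothetical column position and name; precision/scale match the numeric(15,6) target.
var decimalMetadata = new BulkCopyMetadata
decimalMetadata.addColumnMetadata(1, "amount", java.sql.Types.DECIMAL, 15, 6)
df.bulkCopyToSqlDB(bulkCopyConfig, decimalMetadata)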

Breaking Connection with 20 TB of data

18/09/03 14:20:55 INFO Client:
client token: N/A
diagnostics: User class threw exception: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host ZUSSCNTP2DBM01.dev.cluster.com, port 1433 has failed. Error: "Connection timed out: no further information.. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:228)
at com.microsoft.sqlserver.jdbc.SQLServerException.ConvertConnectExceptionToSQLServerException(SQLServerException.java:285)
at com.microsoft.sqlserver.jdbc.SocketFinder.findSocket(IOBuffer.java:2437)
at com.microsoft.sqlserver.jdbc.TDSChannel.open(IOBuffer.java:641)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2245)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1921)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1762)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:1077)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:623)
at org.apache.spark.sql.execution.datasources.jdbc.DriverWrapper.connect(DriverWrapper.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.(JDBCRelation.scala:115)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:52)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:340)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:254)
at com.microsoft.azure.sqldb.spark.connect.DataFrameReaderFunctions.sqlDB(DataFrameReaderFunctions.scala:44)
at com.sample.spark.invoice.ClientInvoiceHeader$.main(ClientInvoiceHeader.scala:111)
at com.sample.spark.invoice.ClientInvoiceHeader.main(ClientInvoiceHeader.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:721)

 ApplicationMaster host: 10.44.81.98
 ApplicationMaster RPC port: 0
 queue: default
 start time: 1535972791147
 final status: FAILED
 tracking URL: http://research.dev.cluster.com:8088/proxy/application_1535626002733_0086/
 user: spark

Exception in thread "main" org.apache.spark.SparkException: Application application_1535626002733_0086 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1230)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1585)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:906)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

AAD authentication with password throws java.lang.NoClassDefFoundError: com/microsoft/aad/adal4j/AuthenticationException

The ActiveDirectoryPassword auth method throws
java.lang.NoClassDefFoundError: com/microsoft/aad/adal4j/AuthenticationException

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"            -> "aaa.database.secure.windows.net",
  "databaseName"   -> "bbb",
  "dbTable"        -> "ccc.ddd",
  "user"           -> "[email protected]",
  "password"       -> "****",
  "authentication" -> "ActiveDirectoryPassword",
  "trustServerCertificate" -> "true",
  "encrypt"        -> "false"
))

val collection = sqlContext.read.sqlDB(config)
collection.show()

Trace:

at com.microsoft.sqlserver.jdbc.SQLServerConnection.getFedAuthToken(SQLServerConnection.java:3609)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.onFedAuthInfo(SQLServerConnection.java:3580)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.processFedAuthInfo(SQLServerConnection.java:3548)
	at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onFedAuthInfo(tdsparser.java:261)
	at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:103)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:4290)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3157)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$100(SQLServerConnection.java:82)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3121)
	at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2026)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1687)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1528)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:866)
	at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:569)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:115)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:52)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:358)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:303)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:291)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:196)
	at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:318)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameReaderFunctions.sqlDB(DataFrameReaderFunctions.scala:44)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3937262568293186:15)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-3937262568293186:63)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw$$iw$$iw.<init>(command-3937262568293186:65)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw$$iw.<init>(command-3937262568293186:67)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw.<init>(command-3937262568293186:69)
	at line32d15d7814324082a779de78e99456e025.$read$$iw.<init>(command-3937262568293186:71)
	at line32d15d7814324082a779de78e99456e025.$read.<init>(command-3937262568293186:73)
	at line32d15d7814324082a779de78e99456e025.$read$.<init>(command-3937262568293186:77)
	at line32d15d7814324082a779de78e99456e025.$read$.<clinit>(command-3937262568293186)
	at line32d15d7814324082a779de78e99456e025.$eval$.$print$lzycompute(<notebook>:7)
	at line32d15d7814324082a779de78e99456e025.$eval$.$print(<notebook>:6)
	at line32d15d7814324082a779de78e99456e025.$eval.$print(<notebook>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
	at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:199)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:189)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:189)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:189)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:493)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:448)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:189)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$3.apply(DriverLocal.scala:248)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$3.apply(DriverLocal.scala:228)
	at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:188)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:183)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:40)
	at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:221)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:40)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:228)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:595)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:595)
	at scala.util.Try$.apply(Try.scala:192)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:590)
	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:474)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:548)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:380)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:327)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:215)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.microsoft.aad.adal4j.AuthenticationException
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.getFedAuthToken(SQLServerConnection.java:3609)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.onFedAuthInfo(SQLServerConnection.java:3580)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.processFedAuthInfo(SQLServerConnection.java:3548)
	at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onFedAuthInfo(tdsparser.java:261)
	at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:103)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:4290)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3157)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$100(SQLServerConnection.java:82)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3121)
	at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2026)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1687)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1528)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:866)
	at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:569)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
	at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:115)
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:52)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:358)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:303)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:291)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:196)
	at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:318)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameReaderFunctions.sqlDB(DataFrameReaderFunctions.scala:44)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-3937262568293186:15)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-3937262568293186:63)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw$$iw$$iw.<init>(command-3937262568293186:65)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw$$iw.<init>(command-3937262568293186:67)
	at line32d15d7814324082a779de78e99456e025.$read$$iw$$iw.<init>(command-3937262568293186:69)
	at line32d15d7814324082a779de78e99456e025.$read$$iw.<init>(command-3937262568293186:71)
	at line32d15d7814324082a779de78e99456e025.$read.<init>(command-3937262568293186:73)
	at line32d15d7814324082a779de78e99456e025.$read$.<init>(command-3937262568293186:77)
	at line32d15d7814324082a779de78e99456e025.$read$.<clinit>(command-3937262568293186)
	at line32d15d7814324082a779de78e99456e025.$eval$.$print$lzycompute(<notebook>:7)
	at line32d15d7814324082a779de78e99456e025.$eval$.$print(<notebook>:6)
	at line32d15d7814324082a779de78e99456e025.$eval.$print(<notebook>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:786)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1047)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:638)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:637)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:637)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
	at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:199)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:189)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:189)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:189)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:493)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:448)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:189)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$3.apply(DriverLocal.scala:248)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$3.apply(DriverLocal.scala:228)
	at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:188)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:183)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:40)
	at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:221)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:40)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:228)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:595)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:595)
	at scala.util.Try$.apply(Try.scala:192)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:590)
	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:474)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:548)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:380)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:327)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:215)
	at java.lang.Thread.run(Thread.java:748)
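
The missing com.microsoft.aad.adal4j.AuthenticationException class lives in the adal4j library, which is not always present on the cluster; attaching it alongside the connector is the usual remedy. For example, the coordinate another issue above reports using (version is illustrative):

com.microsoft.azure:adal4j:1.6.4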

Ability to perform a merge

It would be helpful for some workloads to have the ability to perform a merge. We are currently using Azure Databricks Delta to do merges, but it would be helpful if we could do the same against our Azure SQL Server instance and then utilize indexes. One possible workaround with the existing API is sketched below.
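
Until native merge support exists, one workaround that stays within the connector's documented API is to bulk copy the DataFrame into a staging table and then issue the MERGE as a pushdown query. A rough sketch only; the table names, join key, and MERGE statement are placeholders:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import com.microsoft.azure.sqldb.spark.query._

// 1. Land the source DataFrame (df) in a staging table via bulk copy (placeholder names).
val stagingConfig = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.Clients_Staging",
  "user"         -> "username",
  "password"     -> "*********"
))
df.bulkCopyToSqlDB(stagingConfig)

// 2. Merge the staging table into the target with a pushdown query.
val mergeQuery = """
                  |MERGE dbo.Clients AS target
                  |USING dbo.Clients_Staging AS source ON target.ClientID = source.ClientID
                  |WHEN MATCHED THEN UPDATE SET target.ContactName = source.ContactName
                  |WHEN NOT MATCHED THEN INSERT (ClientID, ContactName) VALUES (source.ClientID, source.ContactName);
                """.stripMargin

val mergeConfig = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "user"         -> "username",
  "password"     -> "*********",
  "queryCustom"  -> mergeQuery
))
sqlContext.sqlDBQuery(mergeConfig)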

Unable to connect to SQL Server instance of an Azure VM from Azure Databricks - TCP/IP connection on port 1433 failed

I am getting the below error message every time I try to connect to a SQL Server instance hosted on an Azure VM from Azure Databricks. The VM name goes something like azeunxyz001.juuw.net. (modified)


I already tried the following resolutions that I found on the internet, but none of them worked.

  1. Asked the DBAs to open port 1433 from SQL Server Configuration Manager
  2. Asked the Infrastructure team and got confirmation that the firewall was turned off for this VM and that the port 1433 is not blocked.

What else am I missing?

Error message:-
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host azeuncpmd001.guww.net, port 1433 has failed. Error: "azeunxyz001.juuw.net. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall."

No exception thrown in sqlContext.sqlDBQuery

In sqlDBQuery, when an exception occurs the error doesn't get thrown but instead gets printed out. IMO it would be better to close the connection and throw an exception so the user can handle this properly.

Bulk copy stuck indefinitely on 1 task

I try to use bulkCopyToSqlDB to export data to a SQL Server database, but the job gets stuck indefinitely at 199/200 tasks completed in the last stage. The table that was being copied into also becomes unusable/unresponsive afterwards.

The code I use is almost exactly this:

val df = readCSVFilesFromHDFS().repartition(200).cache()

val stringColumnTypes = "column3 VARCHAR(255), column5 VARCHAR(255)"

// first make sure the table exists, with the correct column types
// and is properly cleaned up if necessary
df.limit(0)
  .write
  .option("createTableColumnTypes", stringColumnTypes)
  .mode(SaveMode.Overwrite)
  .jdbc(jdbcUrl, table, properties)

val sqlConfig = Config(Map(
  "url"            -> "***",
  "databaseName"   -> "***",
  "dbTable"        -> "***",
  "user"           -> "***",
  "password"       -> "***",
  "driver"         -> "com.microsoft.sqlserver.jdbc.SQLServerDriver"
))

df.bulkCopyToSqlDB(sqlConfig)

The Spark UI web interface of the corresponding application also seems to stop getting updated once the bulkCopyToSqlDB job starts.
The application gets started with --master yarn and default --deploy-mode (client).

object java.lang.Object in compiler mirror not found

Hi,
I am getting this error when trying to use the uberjar in a Spark 2.3 REPL:

This has probably already been fixed, so I am looking forward to the Maven release (also so I can get compile-time safety).

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
[init] error: error while loading <root>, Error accessing /opt/spark/jars/azure-sqldb-spark-1.0.0-uber.jar

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.
Exception in thread "main" java.lang.NullPointerException
	at scala.reflect.internal.SymbolTable.exitingPhase(SymbolTable.scala:256)
	at scala.tools.nsc.interpreter.IMain$Request.x$20$lzycompute(IMain.scala:896)
	at scala.tools.nsc.interpreter.IMain$Request.x$20(IMain.scala:895)
	at scala.tools.nsc.interpreter.IMain$Request.headerPreamble$lzycompute(IMain.scala:895)
	at scala.tools.nsc.interpreter.IMain$Request.headerPreamble(IMain.scala:895)
	at scala.tools.nsc.interpreter.IMain$Request$Wrapper.preamble(IMain.scala:918)
	at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1337)
	at scala.tools.nsc.interpreter.IMain$CodeAssembler$$anonfun$apply$23.apply(IMain.scala:1336)
	at scala.tools.nsc.util.package$.stringFromWriter(package.scala:64)
	at scala.tools.nsc.interpreter.IMain$CodeAssembler$class.apply(IMain.scala:1336)
	at scala.tools.nsc.interpreter.IMain$Request$Wrapper.apply(IMain.scala:908)
	at scala.tools.nsc.interpreter.IMain$Request.compile$lzycompute(IMain.scala:1002)
	at scala.tools.nsc.interpreter.IMain$Request.compile(IMain.scala:997)
	at scala.tools.nsc.interpreter.IMain.compile(IMain.scala:579)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
	at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
	at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
	at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)

Error while writing dataframe to Sql Table: [com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.]

The Spark DataFrame is written to the SQL table using the df.write.mode(SaveMode.Append).sqlDB(config) API, but the job fails with the error below:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 61 in stage 72.0 failed 4 times, most recent failure: Lost task 61.3 in stage 72.0 (TID 1808, 10.139.64.58, executor 26): com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.

Here is my code:

val config = Config(Map(
  "url"            -> gSqlServerUrl,
  "databaseName"   -> gSqlDatabaseName,
  "dbTable"        -> gSqlTableName,
  "user"           -> username,
  "password"       -> password,
  "connectTimeout" -> "60", // Getting error even with high timeout value
  "queryTimeout"   -> "60"
))

val dataDF = spark.read.table("myDb.myTable")
dataDF.write.mode(SaveMode.Append).sqlDB(config)

Bulkinsert not working Azure SQL DW

Writing the DataFrame to Azure SQL DW works fine with the normal JDBC write.sqlDB API.
But while using the bulkCopyToSqlDB API I get a data conversion error. I'm not providing bulk copy metadata.

19/08/05 17:06:00 INFO DAGScheduler: Job 3 failed: foreachPartition at DataFrameFunctions.scala:72, took 26.016681 s
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 11, wn3-hdi-gl.zeu1lfaty0eerhtaqktnvsba5f.ix.internal.cloudapp.net, executor 2): com.microsoft.sqlserver.jdbc.SQLServerException: An error occurred while converting the String value to JDBC data type DECIMAL.
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeColumnToTdsWriter(SQLServerBulkCopy.java:2528)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeColumn(SQLServerBulkCopy.java:2766)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeBatchData(SQLServerBulkCopy.java:3274)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.doInsertBulk(SQLServerBulkCopy.java:1570)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.access$200(SQLServerBulkCopy.java:58)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy$1InsertBulk.doExecute(SQLServerBulkCopy.java:709)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.sendBulkLoadBCP(SQLServerBulkCopy.java:739)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:1684)
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:669)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:127)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.math.BigDecimal
at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeColumnToTdsWriter(SQLServerBulkCopy.java:2205)
... 25 more
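
The ClassCastException suggests the source DataFrame column reaching the bulk copy is a string while the destination column is a decimal. A hedged workaround is to cast the column to a matching DecimalType before the copy; the column name and precision/scale below are placeholders:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.DecimalType
import com.microsoft.azure.sqldb.spark.connect._

// "amount" is a hypothetical column name; match precision/scale to the target DW column.
val castedDf = df.withColumn("amount", col("amount").cast(DecimalType(18, 2)))
castedDf.bulkCopyToSqlDB(bulkCopyConfig)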

Bulk Insert not working for Datetime datatype in ASDW

There appears to be an issue when attempting to bulk insert into an Azure SQL Data Warehouse table that has a datetime datatype. The originating datatype in Databricks is timestamp, which should be compatible with datetime. However, I get this error every time:

SqlNativeBufferBufferBulkCopy.WriteTdsDataToServer, error in OdbcDone: SqlState: 42000, NativeError: 4816, 'Error calling: bcp_done(this->GetHdbc()) | SQL Error Info: SrvrMsgState: 1, SrvrSeverity: 16, Error <1>: ErrorMsg: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid column type from bcp client for colid 2. | Error calling: pConn->Done() | state: FFFF, number: 233925, active connections: 5', Connection String: Driver={pdwodbc17e};app=TypeC01-DmsNativeWriter:DB66\mpdwsvc (13056)-ODBC;trusted_connection=yes;autotranslate=no;server=\\.\pipe\DB.66-a313018f1e5b\sql\query;database=Distribution_9

When changing the target datatype from a datetime to a varchar, it bulk inserts just fine.

Intermediate Commit support - Bulk Insert mode

We are big consumers of the Spark SQL connector; we use it to read from various big data sources and load into a SQL Server 2016 data mart. One of the big issues we faced while using the bulk-insert utility is that we want to commit after each batch, so that the transaction log does not fill up.
Can this be achieved? Is there existing support or a setting that can be leveraged for this requirement? Please suggest.

Details:

We are using SQL Server 2016 (on-premises) + Spark 2.3
Referring to main article link: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-spark-connector
Required --> Intermediate commit support after each batch to avoid filling the transaction log. While writing data to SQL Server using "Bulk Insert", the default behavior is to commit after the complete data set or all batches are written.

Data source com.databricks.spark.sqldw does not support streamed writing

In Databricks, I am reading data from Event Hubs and trying to write streaming data to SQL DW:

Df.writeStream
.format("com.databricks.spark.sqldw")
.option("url", s"")
.option("forwardSparkAzureStorageCredentials", "true")
.option("dbTable", "< table name>")
.option("checkpointLocation", "checkpoint_location")
.start()
But I am getting the below error:
java.lang.UnsupportedOperationException: Data source com.databricks.spark.sqldw does not support streamed writing

Can you please look into this issue?
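
com.databricks.spark.sqldw is a batch-only sink, so writeStream cannot target it directly. A common structured-streaming pattern (Spark 2.4+) is foreachBatch, which hands each micro-batch to the batch writer. This is a sketch only, with the sink options kept as placeholders:

Df.writeStream
  .foreachBatch { (batchDf: org.apache.spark.sql.DataFrame, batchId: Long) =>
    // Write each micro-batch with the batch API; the sqldw batch writer may
    // also need its usual options (e.g. a temp dir), omitted here.
    batchDf.write
      .format("com.databricks.spark.sqldw")
      .option("url", "<jdbc url>")
      .option("forwardSparkAzureStorageCredentials", "true")
      .option("dbTable", "<table name>")
      .save()
  }
  .option("checkpointLocation", "checkpoint_location")
  .start()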

Correct implicit datatype creation for datatype binary

It would be good to have implicit type conversion. For a Spark SQL table with a column of datatype binary, writing without an explicit schema results in the following error:

com.microsoft.sqlserver.jdbc.SQLServerException: Column, parameter, or variable #14: Cannot find data type BLOB.

Correct implicit datatype creation would be good to have.

SQLServerException: Bulk load data was expected but not sent.

Hi,
We are trying to bulk copy a table from a Parquet file backup (in Azure Blob Storage). We are also trying to copy the data without providing the BulkCopyMetadata.

This is what our code looks like:

import com.microsoft.azure.sqldb.spark.connect._

val bulkCopyConfig = Config(Map(
  "url"               -> "mysqlserver.database.windows.net",
  "databaseName"      -> "MyDatabase",
  "user"              -> "username",
  "password"          -> "*********",
  "databaseName"      -> "MyDatabase",
  "dbTable"           -> "dbo.Clients",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))


spark.conf.set("BLOB_SAS_TOKEN")
val data = spark.read.parquet("wasbs://[email protected]/back_up")


data.bulkCopyToSqlDB(bulkCopyConfig)

This is the exception which we are getting:



Stack Trace:

at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
	at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:256)
	at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:108)
	at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:28)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection$1ConnectionCommand.doExecute(SQLServerConnection.java:2519)
	at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectionCommand(SQLServerConnection.java:2524)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.rollback(SQLServerConnection.java:2704)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:142)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:951)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:951)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2281)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2281)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)
	at org.apache.spark.scheduler.Task.run(Task.scala:112)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1481)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2355)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2343)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2342)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2342)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1096)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:1096)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1096)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2574)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2510)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:893)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2240)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2262)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2281)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2306)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:951)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:949)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:379)
	at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:949)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply$mcV$sp(Dataset.scala:2748)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2748)
	at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2748)
	at org.apache.spark.sql.Dataset$$anonfun$withNewRDDExecutionId$1.apply(Dataset.scala:3409)
	at org.apache.spark.sql.execution.SQLExecution$$anonfun$withCustomExecutionEnv$1.apply(SQLExecution.scala:99)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:228)
	at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:85)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:158)
	at org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3405)
	at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2747)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.bulkCopyToSqlDB(DataFrameFunctions.scala:72)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-883605742413301:19)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-883605742413301:73)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-883605742413301:75)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-883605742413301:77)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$$iw$$iw$$iw$$iw.<init>(command-883605742413301:79)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$$iw$$iw$$iw.<init>(command-883605742413301:81)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$$iw$$iw.<init>(command-883605742413301:83)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$$iw.<init>(command-883605742413301:85)
	at linea46ca972368c43b6a90ec212d6d3685050.$read.<init>(command-883605742413301:87)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$.<init>(command-883605742413301:91)
	at linea46ca972368c43b6a90ec212d6d3685050.$read$.<clinit>(command-883605742413301)
	at linea46ca972368c43b6a90ec212d6d3685050.$eval$.$print$lzycompute(<notebook>:7)
	at linea46ca972368c43b6a90ec212d6d3685050.$eval$.$print(<notebook>:6)
	at linea46ca972368c43b6a90ec212d6d3685050.$eval.$print(<notebook>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
	at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:199)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:190)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:190)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:190)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:590)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:545)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:190)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:323)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:303)
	at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:235)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:230)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:47)
	at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:268)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:47)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:303)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:591)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:591)
	at scala.util.Try$.apply(Try.scala:192)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:586)
	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:477)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:544)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:383)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:330)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:216)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Bulk load data was expected but not sent. The batch will be terminated.
	at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:258)
	at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:256)
	at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:108)
	at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:28)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection$1ConnectionCommand.doExecute(SQLServerConnection.java:2519)
	at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectionCommand(SQLServerConnection.java:2524)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.rollback(SQLServerConnection.java:2704)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:142)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:951)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:951)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2281)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2281)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:139)
	at org.apache.spark.scheduler.Task.run(Task.scala:112)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:497)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1481)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:503)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

com.microsoft.sqlserver.jdbc.SQLServerException: Cannot find data type 'TEXT'.

I'm trying to read data from a CSV file and load it into an Azure SQL DW database, but I get the error below when using save mode Overwrite or when the table does not exist.

Exception in thread "main" com.microsoft.sqlserver.jdbc.SQLServerException: Cannot find data type 'TEXT'.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:259)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1547)

Please help me resolve this.
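A possible workaround (a sketch, not from the original report): Azure SQL DW does not support the TEXT type that Spark's JDBC writer generates for string columns when it creates the table, so pre-create the target table with supported types (for example NVARCHAR) and write in Append mode so no DDL is generated. Table, column, and connection values below are placeholders.

// Run on the DW first (placeholder table):
//   CREATE TABLE dbo.Staging (Id BIGINT, Name NVARCHAR(255));

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.SaveMode

val df = spark.read.option("header", "true").csv("/path/to/file.csv")  // placeholder path

val dwConfig = Config(Map(
  "url"          -> "myserver.database.windows.net",
  "databaseName" -> "MyDW",
  "dbTable"      -> "dbo.Staging",
  "user"         -> "username",
  "password"     -> "*********"
))

df.write.mode(SaveMode.Append).sqlDB(dwConfig)  // append to the pre-created table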

Bulk copy stuck indefinitely after error

While doing a bulk copy to SQL Server, one task received the following error due to some corrupted data on our end.

**/**/** **:**:** ERROR DataFrameFunctions: An error occurred while writing to database, attempting rollback
com.microsoft.sqlserver.jdbc.SQLServerException: The given value of type VARCHAR(2728461) from the data source cannot be converted to type varchar(255) of the specified target column.
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.validateStringBinaryLengths(SQLServerBulkCopy.java:1728)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeColumn(SQLServerBulkCopy.java:2977)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeBatchData(SQLServerBulkCopy.java:3561)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.doInsertBulk(SQLServerBulkCopy.java:1574)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.access$200(SQLServerBulkCopy.java:58)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy$1InsertBulk.doExecute(SQLServerBulkCopy.java:709)
	at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7344)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2713)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.sendBulkLoadBCP(SQLServerBulkCopy.java:739)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:1688)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:669)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:127)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

However, the task does not fail and keeps running indefinitely. The lock on the table never gets released, not even after killing the application in YARN. I would expect the task to fail, perhaps retry a few times, and eventually give up, cancel the Spark job, and release all locks.

Our config looks like this:

val config = Config(Map(
  "url"               -> "***",
  "databaseName"      -> "***",
  "dbTable"           -> "***",
  "user"              -> "***",
  "password"          -> "***",
  "driver"            -> "com.microsoft.sqlserver.jdbc.SQLServerDriver",
  "bulkCopyBatchSize" -> "200000",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))
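The root failure here is a source value of type VARCHAR(2728461) that cannot fit the target varchar(255) column. As a defensive measure while the hang itself is investigated, a minimal sketch (column names are assumptions, not from the original report) is to trim string columns to the target length before calling bulkCopyToSqlDB, so oversized values never reach the bulk copy and trigger the rollback path:

import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.substring

// Truncate the listed string columns to maxLen characters.
def truncateStrings(df: DataFrame, maxLen: Int, cols: Seq[String]): DataFrame =
  cols.foldLeft(df) { (d, c) => d.withColumn(c, substring(d(c), 1, maxLen)) }

val safeDf = truncateStrings(rawDf, 255, Seq("description", "comments"))  // assumed column names
safeDf.bulkCopyToSqlDB(config)  // config as defined above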

com.microsoft.sqlserver.jdbc.SQLServerException: Unable to retrieve data from the source.

I'm following the bulk upload sample and I'm getting the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 45.0 failed 4 times, most recent failure: Lost task 0.3 in stage 45.0 (TID 511, 10.139.64.5, executor 0): com.microsoft.sqlserver.jdbc.SQLServerException: Unable to retrieve data from the source.

I'm trying to bulk insert a DataFrame loaded from a Delta Lake. The DataFrame has data and otherwise works just fine.

The weird thing is that I got it working the first time I tried it, but now I can no longer get it to work.

Session stays open even after bulk insert completes

What I observe is that the SPID opened for the data insert stays alive for days after the data load completes when using bulk load. As a result, when someone tries running a SELECT, it gets stuck, blocked by the SPID of the bulk insert job.
I can provide screenshots of this if needed.

code:

val bulkCopyConfig = Config(Map(
  "url"               -> "servername.eastus.cloudapp.azure.com",
  "port"              -> "1",
  "databaseName"      -> "PPPP",
  "user"              -> "vvvvv",
  "password"          -> "Password",
  "databaseName"      -> "P360",
  "dbTable"           -> tablename,
  "bulkCopyBatchSize" -> "50000",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))

data.bulkCopyToSqlDB(bulkCopyConfig)

Does it support using the Windows domain account?

Our application runs on Linux, but the remote SQL Server uses a Windows domain account for access. (We have the domain username/password in the application.)

We have to use the net.sourceforge.jtds JDBC driver in Spark, which works fine, but the performance is not as good as we want.

I gave azure-sqldb-spark a try, and below is the error I got in the Spark shell:

scala> spark.version
res1: String = 2.2.0
scala> val account = spark.read.sqlDB(config)
18/08/03 09:11:20 ERROR JniBasedUnixGroupsMapping: error looking up the name of group 239613470: No such file or directory
java.sql.SQLException: No suitable driver

at java.sql.DriverManager.getDriver(DriverManager.java:315)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:34)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:309)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:193)
at com.microsoft.azure.sqldb.spark.connect.DataFrameReaderFunctions.sqlDB(DataFrameReaderFunctions.scala:44)
... 50 elided

java.sql.SQLException: No suitable driver

Hello,

I am unable to execute my code as Spark fails due to lack of a suitable driver. Is there any particular configuration I need to set in my Config()? I'm using SQL Server.

Am I supposed to set the driver class manually? And is there any particular format for the URL?

Thanks
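One thing to try (a sketch with placeholder values; the "driver" key also appears in other configs in this thread) is to set the JDBC driver class explicitly in the Config and use just the host name for "url":

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"          -> "myserver.mydomain.com",  // host name only; a non-default port can go in "port"
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.MyTable",
  "user"         -> "username",
  "password"     -> "*********",
  "driver"       -> "com.microsoft.sqlserver.jdbc.SQLServerDriver"
))

val df = spark.read.sqlDB(config)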

Question: how to improve performance of bulkCopyToSqlDB

I have a data frame with around 5 million rows and I'm looking to bulk upload it to Azure SQL (in a pool, Business Critical: Gen5, 4 vCores, where only this database exists).

I tried increasing the cluster size but saw no big improvement; with E8s_v3 and 8 worker nodes it still took an hour to insert only 300,000 records.

I am unsure what's missing. For testing, I download the data from the source database into a data frame and just bulk load that data frame into another Azure SQL database that has the exact same structure (created from the DACPAC of the source database).

  • Tried different batch sizes, from the hundreds up to 500,000
  • Tried increasing the timeout
  • Changed the table lock to false, assuming it allows parallelism

Below is a snippet of my code:

val queryConfig = Config(Map(
  "url"            -> TargetDatabase.ServerName,
  "databaseName"   -> TargetDatabase.DatabaseName,
  "dbTable"        -> target,
  "user"           -> TargetDatabase.ServerUser,
  "password"       -> TargetDatabase.ServerPassword,
  "connectTimeout" -> "0",
  "queryTimeout"   -> "0"
))

val df = spark.read.sqlDB(queryConfig)
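For comparison, here is a hedged sketch of what a tuned bulk-copy write might look like: use the bulk copy API rather than a row-by-row write, repartition the DataFrame so several tasks write in parallel, and pass the bulk copy options in the Config. The option names mirror those used in other reports in this thread; the specific values (partition count, batch size) are guesses to tune per workload, not recommendations.

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val bulkConfig = Config(Map(
  "url"               -> TargetDatabase.ServerName,
  "databaseName"      -> TargetDatabase.DatabaseName,
  "dbTable"           -> target,
  "user"              -> TargetDatabase.ServerUser,
  "password"          -> TargetDatabase.ServerPassword,
  "bulkCopyBatchSize" -> "100000",
  "bulkCopyTableLock" -> "false",  // allow concurrent writers
  "bulkCopyTimeout"   -> "0"       // assumed: 0 = no limit
))

df.repartition(8)  // roughly one writing task per executor core available for the write
  .bulkCopyToSqlDB(bulkConfig)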

Error: value azurePushdownQuery is not a member of org.apache.spark.sql.SparkSession

I get this error when trying to run a custom query based on the example mentioned here:
https://github.com/Azure/azure-sqldb-spark#pushdown-query-to-azure-sql-database-or-sql-server

This is my code:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.query._

val createTableQueryString = s"""
  IF NOT EXISTS (SELECT * FROM sysobjects WHERE name = 'cohorts_analysis_manifest' and xtype ='U')
      USE testDb;
      CREATE TABLE testTbl
      (
          id bigint,
          name [varchar](255),
          ingestdate [varchar](10)
          CONSTRAINT [PK_testDb_testTbl] PRIMARY KEY CLUSTERED
          (
                    ingestdate DESC,
                    id ASC
          ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
      )
"""
val createTableConfig = Config(Map(
  "url"            -> "myTestDbServer.database.windows.net",
  "databaseName"   -> "testDb",
  "user"           -> username,
  "password"       -> password,
  "queryCustom"    -> createTableQueryString
))
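Note: assuming the 1.0.x API, the pushdown entry point is an extension method on SQLContext (brought in by the com.microsoft.azure.sqldb.spark.query._ import), not on SparkSession, so the snippet above would be executed with:

sqlContext.sqlDBQuery(createTableConfig)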

Allow Bulkcopy to create the table if it doesn't exist

Hi,

Would you consider enhancing the bulk copy so that it can create the table in SQL Server if it doesn't exist already? (Or would you consider merging a patch that did this?)

Thanks!

PS: Any ETA on getting this into Maven?

This does not work for datetime columns

I am receiving the error below no matter what I do:

SqlNativeBufferBufferBulkCopy.WriteTdsDataToServer, error in OdbcDone: SqlState: 42000, NativeError: 4816, 'Error calling: bcp_done(this->GetHdbc()) | SQL Error Info: SrvrMsgState: 1, SrvrSeverity: 16, Error <1>: ErrorMsg: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Invalid column type from bcp client for colid 8. | Error calling: pConn->Done() ...
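One hedged thing to try (the column name, ordinal, and scale below are assumptions, not from the original report) is to declare the datetime column explicitly in the BulkCopyMetadata so the driver sends it with an explicit JDBC type instead of relying on the default mapping:

import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata

val bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(8, "EventTime", java.sql.Types.TIMESTAMP, 0, 0)  // colid 8 per the error; name assumed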

Error "SQLServerException: The connection is closed" on writing Spark Dataframe to Azure SQL

Hi, I am using [com.microsoft.azure:azure-sqldb-spark:1.0.2] to write a Spark Dataframe (50K+ rows, 6 columns) to my Azure SQL database.

I am using the following method: dataDF.write.mode(SaveMode.Append).sqlDB(config), with queryTimeout set to a high value (6000s).

Any ideas why it might be failing? Below is the stack trace.

Exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 242 in stage 84850.0 failed 4 times, most recent failure: Lost task 242.3 in stage 84850.0 (TID 12743887, 10.139.64.24, executor 344): com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:227)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:796)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.rollback(SQLServerConnection.java:2698)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:713)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:839)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:839)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:987)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:987)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2321)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2321)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.doRunTask(Task.scala:140)
at org.apache.spark.scheduler.Task.run(Task.scala:113)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$13.apply(Executor.scala:533)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1541)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:539)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

com.microsoft.sqlserver.jdbc.SQLServerException: Unable to retrieve data from the source.

Running inside a Databricks environment on Azure. The jar is loaded and recognized.

I have followed the example for bulk copy, just changing out the parameters. The target table has an IDENTITY column, but the source file does not.

The DataFrame has data; I have verified this with display(...).

Everything runs except bulkCopyToSqlDB(...). I get the following exception:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 14.0 failed 4 times, most recent failure: Lost task 3.3 in stage 14.0 (TID 308, 10.139.64.8, executor 1): com.microsoft.sqlserver.jdbc.SQLServerException: Unable to retrieve data from the source.
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeBatchData(SQLServerBulkCopy.java:3268)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.doInsertBulk(SQLServerBulkCopy.java:1570)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.access$200(SQLServerBulkCopy.java:58)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy$1InsertBulk.doExecute(SQLServerBulkCopy.java:709)
	at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7151)
	at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2478)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.sendBulkLoadBCP(SQLServerBulkCopy.java:739)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:1684)
	at com.microsoft.sqlserver.jdbc.SQLServerBulkCopy.writeToServer(SQLServerBulkCopy.java:669)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:100)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:47)
	at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:47)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:941)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:941)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2180)
	at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2180)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:112)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:384)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
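One hedged thing to try when the target table has an IDENTITY column the DataFrame does not supply: declare BulkCopyMetadata only for the columns the DataFrame actually provides, so the column mapping lines up and the IDENTITY value is generated by SQL Server. Column names and types below are placeholders, and this assumes the metadata overload of bulkCopyToSqlDB.

import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Name",      java.sql.Types.NVARCHAR, 128, 0)  // placeholder
bulkCopyMetadata.addColumnMetadata(2, "CreatedOn", java.sql.Types.NVARCHAR, 30, 0)   // placeholder

df.bulkCopyToSqlDB(config, bulkCopyMetadata)  // config as in the bulk copy example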

Exception while using bulk save

java.lang.NoSuchMethodError: com.microsoft.sqlserver.jdbc.SQLServerBulkCopyOptions.setAllowEncryptedValueModifications(Z)V
at com.microsoft.azure.sqldb.spark.bulk.BulkCopyUtils$.getBulkCopyOptions(BulkCopyUtils.scala:109)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:99)
at

I am getting the above exception while trying to bulk save a data frame as described in the documentation.
I double checked that the SQL Server driver jar is in the classpath of the Spark driver and executors. I am using the uber jar attached in this repo.
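A NoSuchMethodError like this usually means an older sqljdbc/mssql-jdbc on the cluster classpath is shadowing the newer driver the connector expects; setAllowEncryptedValueModifications only exists in relatively recent mssql-jdbc releases (the exact minimum version is an assumption here). A hedged build.sbt sketch pinning a newer driver alongside the connector:

// build.sbt sketch - versions are assumptions; align the mssql-jdbc version with the connector's own dependency
libraryDependencies ++= Seq(
  "com.microsoft.azure"     % "azure-sqldb-spark" % "1.0.2",
  "com.microsoft.sqlserver" % "mssql-jdbc"        % "6.4.0.jre8"
)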

Performance considerations

I am using the connector with the following code snippet:

Writing to Azure SQL Database or SQL Server
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
 
// Aquire a DataFrame collection (val collection)

val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.Clients"
  "user"         -> "username",
  "password"     -> "*********"
))

import org.apache.spark.sql.SaveMode
collection.write.mode(SaveMode.Append).sqlDB(config)

Using exactly this code I get very slow performance. I am running an S0 Azure SQL DB and trying to insert a 3M-row table, and it takes several hours. Of course this database is only S0, but it should not take several hours to insert 3M rows.

The only thing I am doing before this snippet is reading a Spark table into the collection variable.

So now I am wondering what are the performance considerations of this connector?

Obviously I am new to Databricks and Spark, but this performance does not look good to me, so I could use some help getting it to perform better.

Question: How to KEEPIDENTITY in bulkinsert?

Is there a way to keep identity values from the source data during a bulk insert?
At the moment, with bulk insert, SQL Server assigns the identity values rather than taking them from the source data, but I would like to insert the source values. Is this possible?

My source data identity values are like 1,2,3...150,151,2000
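The SQL Server bulk copy API itself has a keep-identity option, and the connector surfaces bulk copy options as Config keys; assuming the key name bulkCopyKeepIdentity is supported (an assumption, not confirmed in this thread), a minimal sketch with placeholder connection values:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"                  -> "myserver.database.windows.net",
  "databaseName"         -> "MyDatabase",
  "dbTable"              -> "dbo.MyTable",
  "user"                 -> "username",
  "password"             -> "*********",
  "bulkCopyKeepIdentity" -> "true"  // assumed to map to SQLServerBulkCopyOptions.setKeepIdentity
))

df.bulkCopyToSqlDB(config)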

No Such Method Error => setAllowEncryptedValueModifications

I am trying to bulk copy from Spark to the database, but I get an error on Azure HDInsight.
Please help!

Using:
azure-sqldb-spark-1.0.0.jar
or azure-sqldb-spark-1.0.0-uber.jar
or azure-sqldb-spark-1.0.1-jar-with-dependencies.jar

Env:
Azure HDInsight: Spark 2.1 on Linux (HDI 3.6)
SQL Server: SQL Server 2016 on VM

Error:
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, wn4-y17hsm.npb0fipud0iuhixtnxb1j2miud.ax.internal.cloudapp.net, executor 1): java.lang.NoSuchMethodError: com.microsoft.sqlserver.jdbc.SQLServerBulkCopyOptions.setAllowEncryptedValueModifications(Z)V
at com.microsoft.azure.sqldb.spark.bulk.BulkCopyUtils$.getBulkCopyOptions(BulkCopyUtils.scala:109)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:121)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:67)
at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:67)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1954)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1954)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

expected but string literal found

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
"url" -> "10.12.14.15",
"databaseName" -> "AdventureWorks",
"dbTable" -> "dbo.Employee"
"user" -> "nnn",
"password" -> "nn",
"connectTimeout" -> "5", //seconds
"queryTimeout" -> "5" //seconds
))
val collection = sqlContext.read.sqlDB(config)
collection.show()

I got the error below:
:12: error: ')' expected but string literal found.
"user" -> "nnn",
^
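The parse error comes from the Map literal in the snippet above: there is no comma after the "dbTable" -> "dbo.Employee" entry, so the compiler chokes on the next string literal. The corrected config would be:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"            -> "10.12.14.15",
  "databaseName"   -> "AdventureWorks",
  "dbTable"        -> "dbo.Employee",  // the missing comma here caused the error
  "user"           -> "nnn",
  "password"       -> "nn",
  "connectTimeout" -> "5",  // seconds
  "queryTimeout"   -> "5"   // seconds
))

val collection = sqlContext.read.sqlDB(config)
collection.show()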


Hosting in Maven Central

At the moment it is difficult to use this project as a dependency. Is there a timeframe for when this library will be available in Maven?

value sqlDb is not a member of org.apache.spark.sql.DataFrameReader

When I tried to run the below code, I got the error: value sqlDb is not a member of org.apache.spark.sql.DataFrameReader.

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
val config = Config(Map(
  "url"          -> "***",
  "databaseName" -> "***",
  "queryCustom"  -> "SELECT * FROM dbo.Table_1", //Sql query
  "user"         -> "***",
  "password"     -> "***"
))

//Read all data in table dbo.Table_1
val collection = sqlContext.read.sqlDb(config)
collection.show()

In fact, when I typed sqlContext.read. and hit Tab to show the available options, there was none for "sqlDB" or "sqlDb". I'm using azure-sqldb-spark-1.0.2-jar-with-dependencies.jar from this release: https://github.com/Azure/azure-sqldb-spark/releases/tag/1.0.2
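Worth noting: the extension method name is case-sensitive. Assuming the 1.0.x API, the reader method is sqlDB (capital B), so the read line would be:

val collection = sqlContext.read.sqlDB(config)  // sqlDB, not sqlDb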

Dynamic Columns in BulkCopyMetaData

In my scenario I have 100+ columns, so it is very tedious to define all the columns manually in the metadata like this. Please let me know if there is any way I can read the columns from the table and create the BulkCopyMetadata dynamically.

var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Title", java.sql.Types.NVARCHAR, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "FirstName", java.sql.Types.NVARCHAR, 50, 0)
bulkCopyMetadata.addColumnMetadata(3, "LastName", java.sql.Types.NVARCHAR, 50, 0)
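A minimal sketch of deriving the metadata from the DataFrame's schema instead of writing it by hand; the Spark-type-to-java.sql.Types mapping and the default string length here are assumptions to adjust to the actual table definition.

import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import org.apache.spark.sql.types._

def metadataFromSchema(schema: StructType): BulkCopyMetadata = {
  val metadata = new BulkCopyMetadata
  schema.fields.zipWithIndex.foreach { case (field, i) =>
    val (jdbcType, precision, scale) = field.dataType match {
      case StringType     => (java.sql.Types.NVARCHAR, 128, 0)  // assumed default length
      case IntegerType    => (java.sql.Types.INTEGER, 0, 0)
      case LongType       => (java.sql.Types.BIGINT, 0, 0)
      case DoubleType     => (java.sql.Types.DOUBLE, 0, 0)
      case TimestampType  => (java.sql.Types.TIMESTAMP, 0, 0)
      case d: DecimalType => (java.sql.Types.DECIMAL, d.precision, d.scale)
      case _              => (java.sql.Types.NVARCHAR, 128, 0)  // fallback
    }
    metadata.addColumnMetadata(i + 1, field.name, jdbcType, precision, scale)
  }
  metadata
}

val bulkCopyMetadata = metadataFromSchema(df.schema)  // df: the DataFrame being bulk copied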

Object sqldb is not a member of package com.microsoft.azure

When I try to run the examples, I get the following errors:

notebook:2: error: object sqldb is not a member of package com.microsoft.azure
import com.microsoft.azure.sqldb.spark.connect._
                           ^
notebook:1: error: object sqldb is not a member of package com.microsoft.azure
import com.microsoft.azure.sqldb.spark.config.Config
                           ^
notebook:4: error: not found: value Config
val config = Config(Map(
             ^
notebook:14: error: value sqlDB is not a member of org.apache.spark.sql.DataFrameReader
val collection = sqlContext.read.sqlDB(config)

I'm using azure-sqldb-spark-1.0.1-jar-with-dependencies.jar.
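This compile error usually means the connector jar is not actually on the driver classpath when the notebook or shell compiles the imports. When launching from the command line, something like spark-shell --jars /path/to/azure-sqldb-spark-1.0.1-jar-with-dependencies.jar (path is a placeholder) attaches it; in Databricks, install the jar as a cluster library and restart the cluster before running the notebook.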
