Spark 2.3.1 insertInto table (S3) with partitions runs many queries before the actual write
I have a very simple Spark job that writes to S3.
The table has 3 different partition keys and many partition values (some partitions keep growing every hour).
I am using the following code:
dataframe.select(reorderFields:_*).write.mode(SaveMode.Overwrite).insertInto(tableName)
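For context, here is a minimal sketch of the whole job (the session setup, table, and column names below are hypothetical placeholders, not the real ones):

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.functions.col

// Hedged sketch; names are placeholders.
val spark = SparkSession.builder()
  .appName("s3-partitioned-insert")
  .enableHiveSupport() // table metadata lives in the Hive metastore
  .getOrCreate()

val tableName = "my_db.events" // placeholder; a table with 3 partition keys
// insertInto matches columns by position, so the select reorders the
// DataFrame columns to the table's layout, partition columns last.
val reorderFields = Seq("id", "payload", "year", "month", "day").map(col)

val dataframe = spark.table("my_db.events_staging") // placeholder source

dataframe.select(reorderFields: _*)
  .write
  .mode(SaveMode.Overwrite)
  .insertInto(tableName)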
At first this code was quite efficient, but as the table grew it became slower and slower.
When I enabled debug logging I saw many reads from the Hive metastore before the DataFrame computation even started.
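(The DataNucleus output below came from turning the metastore client's logging up to DEBUG; a minimal log4j.properties line for that, assuming the stock log4j 1.x setup that Spark 2.3 ships with:)

# Surface the Hive metastore / DataNucleus chatter
log4j.logger.DataNucleus=DEBUG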
LOGS:
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "com.jolbox.bonecp.PreparedStatementHandle@5470ec7e"
2019-01-03 16:50:58 [main] DataNucleus.Datastore.Native:58 [DEBUG]: SELECT `A0`.`COLUMN_NAME`,`A0`.`ORDER`,`A0`.`INTEGER_IDX` AS NUCORDER0 FROM `SORT_COLS` `A0` WHERE `A0`.`SD_ID` = <297323> AND `A0`.`INTEGER_IDX` >= 0 ORDER BY NUCORDER0
2019-01-03 16:50:58 [main] DataNucleus.Datastore.Retrieve:58 [DEBUG]: Execution Time = 1 ms
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "org.datanucleus.store.rdbms.ParamLoggingPreparedStatement@325b1c61"
2019-01-03 16:50:58 [main] DataNucleus.Persistence:58 [DEBUG]: Object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@6328ec75" field "parameters" is replaced by a SCO wrapper of type "org.datanucleus.store.types.backed.Map" [cache-values=true, lazy-loading=true, queued-operations=false, allow-nulls=true]
2019-01-03 16:50:58 [main] DataNucleus.Persistence:58 [DEBUG]: Object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@6328ec75" field "parameters" loading contents to SCO wrapper from the datastore
2019-01-03 16:50:58 [main] DataNucleus.Connection:58 [DEBUG]: Connection found in the pool : org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@236ec794 [conn=com.jolbox.bonecp.ConnectionHandle@712106b5, commitOnRelease=false, closeOnRelease=false, closeOnTxnEnd=true] for key=org.datanucleus.ExecutionContextThreadedImpl@132e3594 in factory=ConnectionFactory:tx[org.datanucleus.store.rdbms.ConnectionFactoryImpl@72c9ebfa]
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "com.jolbox.bonecp.PreparedStatementHandle@250ebae4"
2019-01-03 16:50:58 [main] DataNucleus.Datastore.Native:58 [DEBUG]: SELECT `A0`.`PARAM_KEY`,`A0`.`PARAM_VALUE` FROM `SD_PARAMS` `A0` WHERE `A0`.`SD_ID` = <297323> AND `A0`.`PARAM_KEY` IS NOT NULL
2019-01-03 16:50:58 [main] DataNucleus.Datastore.Retrieve:58 [DEBUG]: Execution Time = 1 ms
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "org.datanucleus.store.rdbms.ParamLoggingPreparedStatement@798a320"
2019-01-03 16:50:58 [main] DataNucleus.Persistence:58 [DEBUG]: Object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@6328ec75" field "skewedColNames" is replaced by a SCO wrapper of type "org.datanucleus.store.types.backed.List" [cache-values=true, lazy-loading=true, queued-operations=false, allow-nulls=true]
2019-01-03 16:50:58 [main] DataNucleus.Persistence:58 [DEBUG]: Object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@6328ec75" field "skewedColNames" loading contents to SCO wrapper from the datastore
2019-01-03 16:50:58 [main] DataNucleus.Connection:58 [DEBUG]: Connection found in the pool : org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@236ec794 [conn=com.jolbox.bonecp.ConnectionHandle@712106b5, commitOnRelease=false, closeOnRelease=false, closeOnTxnEnd=true] for key=org.datanucleus.ExecutionContextThreadedImpl@132e3594 in factory=ConnectionFactory:tx[org.datanucleus.store.rdbms.ConnectionFactoryImpl@72c9ebfa]
2019-01-03 16:50:58 [main] DataNucleus.Datastore:58 [DEBUG]: Closing PreparedStatement "com.jolbox.bonecp.PreparedStatementHandle@540637b0"
I tried reconfiguring Hive with the following parameters:
sparkConf.set("hive.auto.convert.join.noconditionaltask.size","200M")
sparkConf.set("hive.auto.convert.join.noconditionaltask","true")
sparkConf.set("hive.optimize.sort.dynamic.partition","false")
sparkConf.set("spark.sql.hive.convertMetastoreParquet.mergeSchema","false")
sparkConf.set("parquet.enable.summary-metadata","false")
I also added the following to hive.xml:
<property>
<name>hive.stats.autogather</name>
<value>false</value>
</property>
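(If the hive.* keys set on SparkConf never reach the embedded Hive client, the same properties can usually also be set per-session through Spark SQL; a sketch, assuming the Hive-enabled session sketched above:)

// Hedged alternative: per-session Hive settings via Spark SQL.
// Spark forwards SQL conf entries to the Hive client it embeds.
spark.sql("SET hive.stats.autogather=false")
spark.sql("SET hive.optimize.sort.dynamic.partition=false")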
But it still behaves the same.
I am not working with HDFS.
I would appreciate any suggestions.
scala apache-spark hive
Is the problem with the read or with the S3 write?
– Kaushal
Jan 3 at 18:42
@Kaushal With the write, using insertInto.
– Ehud Lev
Jan 3 at 18:43