pyspark Py4J error using Canopy: PythonAccumulatorV2([class java.lang.String, class java.lang.Integer, class...
I installed the Canopy IDE on Windows, along with Python and PySpark. When I run my program, creating the SparkContext fails:



import findspark
findspark.init()

from pyspark import SparkConf, SparkContext

# listsGraph, Loc, Ext, MatchExt, pos_debut, opcode and f are defined
# earlier in the file (this snippet starts around line 818).
conf = SparkConf().setMaster('local').setAppName('MonEssai')
sc = SparkContext.getOrCreate()
lines = sc.textFile("file:///PremiéreEssai/ file9.txt")
fun = lines.flatMap(listsGraph)
results = fun.collect()

for result1 in results:
    if result1:
        if result1[0].strip().startswith("sub_") or result1[0].strip().startswith("start"):
            for k in range(len(result1)):
                if result1[k] not in Loc:
                    Loc.append(result1[k])
        else:
            for j in range(len(result1)):
                if result1[j] not in Ext:
                    Ext.append(result1[j])

result3 = sc.parallelize(Ext)
ExtSimilarity = result3.map(MatchExt).filter(lambda x: x is not None).collect()
#print(ExtSimilarity)

#print(Loc)
result3 = sc.parallelize(Loc)
result9 = result3.map(pos_debut)
result11 = result9.map(opcode)
VectOpcode = result11.flatMapValues(f).flatMap(lambda X: [(X[0], len(X[1]))]).groupByKey().mapValues(list)
VectOpcode2 = VectOpcode.collect()

And I got the following error:




Py4JError: An error occurred while calling
None.org.apache.spark.api.python.PythonAccumulatorV2. Trace:
py4j.Py4JException: Constructor
org.apache.spark.api.python.PythonAccumulatorV2([class
java.lang.String, class java.lang.Integer, class java.lang.String])
does not exist




Py4JErrorTraceback (most recent call last)
C:\PremiéreEssai\maman.py in <module>()
    818 findspark.init()
    819 conf = SparkConf().setMaster('local').setAppName('MonEssai')
--> 820 sc = SparkContext.getOrCreate();
    821 lines = sc.textFile("file:///PremiéreEssai/ file9.txt")
    822 fun = lines.flatMap(listsGraph)
C:\Users\hene\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\pyspark\context.pyc in getOrCreate(cls, conf)
    347 with SparkContext._lock:
    348     if SparkContext._active_spark_context is None:
--> 349         SparkContext(conf=conf or SparkConf())
    350     return SparkContext._active_spark_context
    351
C:\Users\hene\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\pyspark\context.pyc in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
    116 try:
    117     self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,
--> 118         conf, jsc, profiler_cls)
    119 except:
    120     # If an error occurs, clean up in order to allow future SparkContext creation:
C:\Users\hene\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\pyspark\context.pyc in _do_init(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, jsc, profiler_cls)
    187 self._accumulatorServer = accumulators._start_update_server(auth_token)
    188 (host, port) = self._accumulatorServer.server_address
--> 189 self._javaAccumulator = self._jvm.PythonAccumulatorV2(host, port, auth_token)
    190 self._jsc.sc().register(self._javaAccumulator)
    191
C:\Users\hene\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\py4j\java_gateway.pyc in __call__(self, *args)
   1523 answer = self._gateway_client.send_command(command)
   1524 return_value = get_return_value(
-> 1525     answer, self._gateway_client, None, self._fqn)
   1526
   1527 for temp_arg in temp_args:
C:\Users\hene\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\py4j\protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
    330 raise Py4JError(
    331     "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
--> 332     format(target_id, ".", name, value))
    333 else:
    334 raise Py4JError(
Py4JError: An error occurred while calling None.org.apache.spark.api.python.PythonAccumulatorV2. Trace:
py4j.Py4JException: Constructor org.apache.spark.api.python.PythonAccumulatorV2([class java.lang.String, class java.lang.Integer, class java.lang.String]) does not exist
    at py4j.reflection.ReflectionEngine.getConstructor(ReflectionEngine.java:179)
    at py4j.reflection.ReflectionEngine.getConstructor(ReflectionEngine.java:196)
    at py4j.Gateway.invoke(Gateway.java:237)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)


So I'm stuck here. What should I do?
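An editorial aside for readers hitting the same trace: the traceback itself shows the Python side calling `PythonAccumulatorV2(host, port, auth_token)` with three arguments, while the JVM reports that no three-argument `(String, Integer, String)` constructor exists. That pattern usually means the `pyspark` package being imported and the Spark jars that `findspark` located come from different releases. A small stdlib-only check (the helper name `spark_python_version` is ours, not from the question) prints the two things to compare:

```python
# Report which pyspark the Python side will import, so it can be compared
# against the Spark build that SPARK_HOME / findspark points at; the two
# version numbers should match exactly.
import importlib.util
import os

def spark_python_version():
    """Return the importable pyspark version string, or None if absent."""
    spec = importlib.util.find_spec("pyspark")
    if spec is None:
        return None
    import pyspark
    return getattr(pyspark, "__version__", None)

print("pyspark (Python side):", spark_python_version())
print("SPARK_HOME (JVM side):", os.environ.get("SPARK_HOME"))
```

If the printed version and the Spark installation under `SPARK_HOME` disagree, aligning them (upgrading one or downgrading the other) is the usual remedy for this constructor mismatch.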










python apache-spark pyspark canopy

edited Dec 27 at 14:42
asked Dec 27 at 14:19 by Nour Hene
1 Answer
Setting an environment variable PYTHONPATH = {hadoop_path}/python should help.
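The answer is terse, so here is a minimal sketch of what it suggests. The `{hadoop_path}` placeholder is the answer's own; `r"C:\spark"` below is our stand-in for it, and the variables must be set before `findspark.init()` or `import pyspark` runs, so that the Python bindings and the JVM jars come from the same installation tree:

```python
# Minimal sketch of the suggested fix (assumption: r"C:\spark" stands in for
# the answer's {hadoop_path} placeholder; substitute your real install root).
import os

spark_home = os.environ.get("SPARK_HOME", r"C:\spark")
os.environ["SPARK_HOME"] = spark_home
# Point PYTHONPATH at the Python bindings shipped *with that build*, so that
# `import pyspark` resolves to the same release as the JVM-side classes.
os.environ["PYTHONPATH"] = os.path.join(spark_home, "python")

print(os.environ["PYTHONPATH"])
```

Note that PYTHONPATH set from inside a running interpreter only affects child processes; to fix the current session's imports, set it in the OS environment (or in Canopy's run configuration) before launching Python.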






answered 2 days ago by Neo