Some guy's blog
Spark Submit is great and you should use it.
I was recently answering a Stack Overflow question which got me thinking a bit about locking and some assumptions made in distributed systems. In this case we had what I find is a pretty common error in distributed systems, and particularly with Cassandra.
Are the Spark Cassandra Connector APIs Async?
I was looking through a pile of Jiras and I noticed an interesting complaint that DataFrame pruning was broken for the Spark Cassandra Connector. The ticket noted that even when very specific columns were selected, it seemed like the Connector was pulling entire rows from the source Cassandra table. This is surprising, since that particular part of the connector code has rather heavy test coverage and no one else had reported this feature not working. Compared to predicate pushdown, pruning is easy, so what went wrong?
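As a quick illustration of what working pruning looks like (keyspace and table names here are placeholders, and this assumes a spark-shell with the connector on the classpath):

```scala
// Assumes a spark-shell with the Spark Cassandra Connector available;
// "test" / "kv" are placeholder keyspace and table names.
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "test", "table" -> "kv"))
  .load()

// With pruning working, the Cassandra scan in the physical plan should
// list only `key` in its output, not every column of the table.
df.select("key").explain()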
Making sure your code is actually pushing down predicates to C* can be slightly confusing. In this post we’ll go over the basics of setting up debugging and how to work around a few common issues.
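The quickest sanity check is reading the physical plan. A minimal sketch, with placeholder keyspace/table/column names:

```scala
// Placeholder names; assumes the connector is on the classpath.
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "test", "table" -> "words"))
  .load()

// A predicate that actually reached Cassandra shows up in the scan's
// PushedFilters; one Spark had to apply itself appears as a separate
// Filter step above the scan instead.
df.filter(df("count") > 10).explain()
```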
Talking to Multiple Clusters, Now with Spark SQL
Classpath problems are almost always the first errors folks run into when writing custom applications for Spark. The difficulty usually centers around the fact that there are many Spark processes, each with its own special class-loaders. Most folks get around these issues by building fat jars with sbt-assembly, but not everyone needs to do this.
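For those who do go the fat-jar route, the key detail is marking Spark itself as `provided` so it isn’t bundled into the jar. A sketch of a `build.sbt` fragment (versions are examples, not recommendations):

```scala
// build.sbt — Spark is "provided" because the cluster supplies it at
// runtime; only the connector (and your own deps) go into the fat jar.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.0" % "provided",
  "org.apache.spark" %% "spark-sql"  % "2.4.0" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.4.0"
)
```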
Most folks don’t know that the Spark Cassandra Connector is actually able to connect to multiple Cassandra clusters at the same time. This allows us to move data between Cassandra clusters, or even manage multiple clusters from the same application (or the Spark shell).
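With the RDD API this looks roughly like the following (hostnames, keyspace, and table names are placeholders; the post covers the Spark SQL equivalent):

```scala
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector

// Two connectors pointed at two different clusters.
val clusterOne = CassandraConnector(
  sc.getConf.set("spark.cassandra.connection.host", "10.0.0.1"))
val clusterTwo = CassandraConnector(
  sc.getConf.set("spark.cassandra.connection.host", "10.0.0.2"))

// Read from cluster one...
val rows = {
  implicit val c = clusterOne
  sc.cassandraTable("ks", "source_table")
}

// ...and write the same rows out to cluster two.
{
  implicit val c = clusterTwo
  rows.saveToCassandra("ks", "dest_table")
}
```

The implicit `CassandraConnector` in scope is what decides which cluster a given read or write talks to.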
Spark loves distributed filesystems, but sometimes you just want to write to wherever the driver is running. You may try to use a file:// path or something of that nature and run into a lot of strange errors, or files located in random places. Never fear, there is a simple solution.
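The reason file:// paths scatter output is that each executor writes to its own local disk. One simple fix, sketched below, is to collect a small result to the driver and write it with plain JVM IO (the `rdd` here is hypothetical):

```scala
import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._

// Collect the (small!) result to the driver, then write it locally so the
// file lands on the driver's disk rather than on executor machines.
// val lines = rdd.collect().toSeq   // hypothetical small result set
val lines = Seq("a,1", "b,2")
Files.write(Paths.get("/tmp/output.csv"), lines.asJava)
```

This only makes sense when the result comfortably fits in driver memory; for anything large you want a real distributed filesystem.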
We just fixed a bug which was stopping DataFrames from being able to write into Cassandra UDTs, but I noticed there aren’t a lot of great documents around how this works. Here is a quick example of how you can make a DataFrame which can insert into a C* UDT.
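A minimal sketch, assuming a hypothetical table `people` in keyspace `test` with a UDT column `addr` of type `address(street text, city text)`:

```scala
import org.apache.spark.sql.functions.struct
import spark.implicits._

// Build a struct column whose field names match the UDT's fields.
val df = Seq(("bob", "1 Main St", "Oakland"))
  .toDF("name", "street", "city")
  .select($"name", struct($"street", $"city").as("addr"))

// The matching struct can then be written into the UDT column.
df.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "test", "table" -> "people"))
  .mode("append")
  .save()
```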