Install Latest Apache Spark on Mac OS
Following is a detailed step-by-step process to install the latest Apache Spark on Mac OS. We shall first install the dependencies: Java and Scala. To install these, we take the help of Homebrew and xcode-select.
Step 1 : Install Homebrew
Open Terminal. Run the following command in Terminal :
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Enter the password if asked and continue.
The installer prints progress messages such as "Cleaning up /Library/Caches/Homebrew", "Migrating /Library/Caches/Homebrew to /Users/apple/Library/Caches/Homebrew" and "Already up-to-date", notes that Homebrew has enabled anonymous aggregate user behaviour analytics (see the analytics documentation and how to opt out at http://docs.brew.sh), and suggests running brew help to get started.
Step 2 : Install xcode-select
To install Java, Scala and Apache Spark through command line interface in Terminal, we shall install xcode-select. Enter and run the following command in Terminal :
xcode-select --install
Step 3 : Install Java
To install Java through command line, enter and run the following command in the Terminal :
brew cask install java
Note that installing this Cask means you have agreed to the Oracle Binary Code License Agreement (https://www.oracle.com/technetwork/java/javase/terms/license/index.html). Homebrew downloads the JDK from download.oracle.com (JDK 8 Update 144 at the time of writing), runs the installer for java (your password may be necessary) and reports that the upgrade was successful. It may also print a caveat that additional steps are needed if your Java application still asks for a JRE installation.
Step 4 : Install Scala
To install Scala through command line, enter and run the following command in Terminal :
brew install scala
Homebrew downloads scala-2.12.3.tgz from downloads.lightbend.com (along with shell completion support) and finishes with a summary like: /usr/local/Cellar/scala/2.12.3: 44 files, 20.1MB, built in 1 minute 1 second. The output also mentions how to set the Scala home for use with IntelliJ.
Step 5 : Install Spark
To install Apache Spark through command line, enter and run the following command in the Terminal :
brew install apache-spark
Homebrew downloads spark-2.2.0-bin from the closest Apache mirror and finishes with a summary like: /usr/local/Cellar/apache-spark/2.2.0: 1,318 files, 221.5MB, built in 12 minutes 8 seconds.
Step 6 : Verifying installation
To verify that the installation is successful, launch the Spark shell using the following command in Terminal :
spark-shell
The shell starts with Spark's default log4j profile (org/apache/spark/log4j-defaults.properties) and prints a few warnings, such as "Unable to load native-hadoop library for your platform... using builtin-java classes where applicable". It then reports that the Spark context Web UI is available at http://192.168.1.101:4040 and that the Spark context is available as 'sc' (master = local[*]), shows the welcome banner for version 2.2.0 using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144), and leaves you at the scala> prompt. Type :help for more information.
We have successfully installed Apache Spark on Mac OS.
The installation directory would be /usr/local/Cellar/apache-spark/.
Conclusion :
In this Apache Spark Tutorial, we have learnt to install the latest Apache Spark on Mac OS.
Introduction
This tutorial will teach you how to set up a full development environment for developing and debugging Spark applications. For this tutorial we'll be using Java, but Spark also supports development with Scala, Python and R.
We'll be using IntelliJ as our IDE, and since we're using Java we'll use Maven as our build manager. By the end of the tutorial, you'll know how to set up IntelliJ, how to use Maven to manage dependencies, how to package and deploy your Spark application to a cluster, and how to connect your live program to a debugger.
Prerequisites
- Downloaded and deployed the Hortonworks Data Platform (HDP) Sandbox
- Installed IntelliJ IDEA
- Installed Maven dependency manager
- Installed Java
Outline
Creating a new IntelliJ Project
NOTE: Instructions may vary based on operating system. In this tutorial we will be using IntelliJ version 2018.2.1 on macOS High Sierra.
Create a new project by selecting File > New > Project:
select Maven > click Next
Name your project as follows:
- GroupId: Hortonworks
- ArtifactId: SparkTutorial
- Version: 1.0-SNAPSHOT
then click Next to continue.
Finally, confirm your project name and location. These fields should be auto-populated, so let's keep the defaults:
IntelliJ should make a new project with a default directory structure. It may take a minute or two to generate all the folders.
Let's break down the project structure.
- .idea: These are IntelliJ configuration files.
- src: Source Code. Most of your code should go into the main directory. The test folder should be reserved for test scripts.
- target: Compiled output is placed here when you build your project.
- pom.xml: The Maven configuration file. We'll show you how to use this file to import third party libraries and documentation.
Before we continue, let's verify a few IntelliJ settings:
1. Verify that import Maven projects automatically is ON.
- Preferences > Build, Execution, Deployment > Build Tools > Maven > Importing
2. Verify Project SDK and Project language level are set to your Java version:
- File > Project Structure > Project
3. Verify Language level is set to your Java version:
- File > Project Structure > Modules
Maven Configuration
Before we start writing a Spark application, we'll want to import the Spark libraries and documentation into IntelliJ so that it recognizes Spark code. To do this we'll use Maven, our dependency manager. Add the following lines to the file pom.xml:
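The exact lines aren't preserved here, so the snippet below is only a minimal sketch of what the pom.xml additions might look like, assuming Spark 2.2.0 (the version installed above), Scala 2.11 artifacts and Java 8; the maven-shade-plugin block is what later lets mvn package produce the assembly jar. Adjust the versions to match your cluster.

```xml
<properties>
    <!-- Compile for Java 8 -->
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>

<dependencies>
    <!-- Spark core APIs (SparkConf, JavaSparkContext, RDDs) used in this tutorial -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.2.0</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- Bundle our classes and their dependencies into a single assembly jar -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.1.0</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```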
After you save the file, IntelliJ will automatically import the libraries and documentation needed to run Spark.
Create a Spark Application
For our first application we're going to build a simple program that performs a word count on the collected works of Shakespeare. Download the file and save it as shakespeare.txt.
Later we'll want Spark to retrieve this file from HDFS (Hadoop Distributed File System), so let's place it there now.
To upload to HDFS, first make sure the sandbox is up and running.
- Navigate to sandbox-hdp.hortonworks.com:8080
- Login using username/password as maria_dev / maria_dev
- Once you've logged into Ambari Manager, mouse over the drop-down menu on the upper-right hand corner and click on Files View.
- Open the tmp folder and click the upload button in the upper-right corner to upload the file. Make sure it's named shakespeare.txt.
Now we're ready to create our application. In your IDE open the folder src/main/resources, which should have been generated automatically for you. Place shakespeare.txt there.
Next, select folder src/main/java:
- right-click on folder and select New > Java Class
- name the class: Main.java
Copy this into your new file:
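The original listing isn't reproduced here; a minimal placeholder, assuming the package name Hortonworks.SparkTutorial used by the spark-submit command later in this tutorial, is:

```java
package Hortonworks.SparkTutorial;

public class Main {
    public static void main(String[] args) {
        // A trivial program used only to confirm that the project builds and runs.
        System.out.println("Hello World");
    }
}
```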
Now go to the Run drop-down menu at the top of your IDE and select Run, then select Main. If everything is set up correctly, the IDE should print 'Hello World'.
Now that we know the environment is set up correctly, replace the file with this code:
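The original listing isn't reproduced here either, so the following is a sketch of a word count using Spark's Java API that matches the behaviour described below: it reads shakespeare.txt from src/main/resources, counts how often each word occurs, saves the result to /tmp/shakespeareWordCount and prints each count. Variable names are illustrative.

```java
package Hortonworks.SparkTutorial;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class Main {
    public static void main(String[] args) {
        // Run Spark locally on this machine; we will switch this to YARN later.
        SparkConf conf = new SparkConf()
                .setAppName("SparkWordCount")
                .setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read the text file we placed in src/main/resources.
        JavaRDD<String> lines = sc.textFile("src/main/resources/shakespeare.txt");

        // Split lines into words, pair each word with a count of 1, then sum the counts per word.
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split(" ")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey((a, b) -> a + b);

        // Save the result to the output directory and print each (word, count) pair.
        counts.saveAsTextFile("/tmp/shakespeareWordCount");
        counts.collect().forEach(pair -> System.out.println(pair._1() + " : " + pair._2()));

        sc.stop();
    }
}
```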
As before, click Run > Run to run the file. This should run the Spark job and print the frequency of each word that appears in Shakespeare.
The files created by your program are found in the directory specified in the code above; in our case we used /tmp/shakespeareWordCount.
Notice we've set this line:
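In the word-count sketch above, this corresponds to the master setting on the SparkConf:

```java
SparkConf conf = new SparkConf().setAppName("SparkWordCount").setMaster("local");
```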
This tells Spark to run locally using this computer, rather than in distributed mode. To run Spark against multiple machines, we would need to change this value to YARN. We'll see how to do this later.
We've now seen how to deploy an application directly from an IDE. This is a good way to quickly build and test an application, but it is somewhat unrealistic since Spark is only running on a single machine. In production, Spark will usually process data stored on a distributed file system like HDFS (or perhaps S3 or Azure Blob Storage if running in the cloud). Spark is also usually run in clustered mode (i.e., distributed across many machines).
In the next two sections we'll learn how to deploy distributed Spark applications. First we'll learn how to deploy Spark against the Hortonworks sandbox, which is a single-node Hadoop environment, and then we'll learn how to deploy Spark in the cloud.
Deploying to the Sandbox
In this section we'll be deploying against the Hortonworks sandbox. Although we're still running Spark on a single machine, we'll be using HDFS and YARN (a cluster resource manager). This will be a closer approximation of running on a full distributed cluster than what we've done previously.
The first thing we want to do is change the input path and the output path in our code so that they point at HDFS rather than the local filesystem. This tells Spark to read and write to HDFS instead of locally. Make sure to save the file after making the changes; a sketch of the changes follows.
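The exact lines aren't preserved here, but assuming the word-count sketch above, the two changes would look something like this (the input file was uploaded to /tmp on HDFS, and the results go to /tmp/shakespeareWordCount on HDFS):

```java
// Read the input from HDFS instead of the local resources folder.
JavaRDD<String> lines = sc.textFile("hdfs:///tmp/shakespeare.txt");

// Write the output to HDFS instead of the local filesystem.
counts.saveAsTextFile("hdfs:///tmp/shakespeareWordCount");
```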
Next, we're going to package this code into a compiled jar file that can be deployed on the sandbox. To make our lives easier, we're going to create an assembly jar: a single jar file that contains both our code and all jars our code depends on. By packaging our code as an assembly we guarantee that all dependency jars (as defined in pom.xml) will be present when our code runs.
Open up a terminal and cd to the directory that contains pom.xml. Run mvn package. This will create a compiled jar called 'SparkTutorial-1.0-SNAPSHOT.jar' in the folder target.
NOTE: if the mvn command does not work, make sure you installed Maven successfully.
Copy the assembly over to the sandbox:
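The original command isn't preserved here; a typical one, assuming the sandbox's default SSH port (2222) and root user, would be:

```
scp -P 2222 ./target/SparkTutorial-1.0-SNAPSHOT.jar root@sandbox-hdp.hortonworks.com:/root
```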
Open a second terminal window and ssh into the sandbox:
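Again assuming the sandbox's default SSH settings:

```
ssh -p 2222 root@sandbox-hdp.hortonworks.com
```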
Use spark-submit to run our code. We need to specify the main class, the jar to run, and the run mode (local or cluster):
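The full command is repeated in the Live Debugging section below; it looks like this:

```
spark-submit --class 'Hortonworks.SparkTutorial.Main' --master yarn --deploy-mode client ./SparkTutorial-1.0-SNAPSHOT.jar
```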
Your console should print the frequency of each word that appears in Shakespeare, like this:
Additionally, if you open the File View in Ambari you should see results under /tmp/shakespeareWordCount. This shows the results have also been stored in HDFS.
Deploying to the Cloud
In this section we'll learn how to deploy our code to a real cluster. If you don't have a cluster available you can quickly set one up using Hortonworks Cloud Solutions.
These services are designed to let you quickly spin up a cluster for a few hours (perhaps on cheaper spot instances), run a series of jobs, then spin the cluster back down to save money. If you want a permanent installation of Hadoop that will run for months without being shut down, you should download Hortonworks Data Platform and install it on your servers.
After setting up a cluster the process of deploying our code is similar to deploying to the sandbox. We need to scp the jar to the cluster:
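The hostname and user depend on your cluster, so the command below uses placeholders:

```
scp ./target/SparkTutorial-1.0-SNAPSHOT.jar <your-user>@<master-node-hostname>:~
```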
Then open a second terminal window and ssh into the master node:
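For example, with the same placeholders:

```
ssh <your-user>@<master-node-hostname>
```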
Then use spark-submit to run our code:
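This is the same command used on the sandbox (and repeated in the Live Debugging section below):

```
spark-submit --class 'Hortonworks.SparkTutorial.Main' --master yarn --deploy-mode client ./SparkTutorial-1.0-SNAPSHOT.jar
```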
Notice that we specified the parameter --master yarn instead of --master local. --master yarn means we want Spark to run in distributed mode, relying on YARN (a cluster resource manager) to allocate available machines to run the job. If you aren't familiar with YARN, it is especially important when you want to run several jobs simultaneously on the same cluster. When configured properly, a YARN queue gives different users or processes a quota of cluster resources they're allowed to use. It also provides mechanisms for letting a job make full use of the cluster when resources are available, and for scaling existing jobs down when additional users or jobs begin to submit work.
The parameter --deploy-mode client indicates we want to use the current machine as the driver machine for Spark. The driver machine is a single machine that initiates a Spark job, and is also where summary results are collected when the job is finished. Alternatively, we could have specified --deploy-mode cluster, which would have allowed YARN to choose the driver machine.
It's important to note that a poorly written Spark program can accidentally try to bring back many terabytes of data to the driver machine, causing it to crash. For this reason you shouldn't use the master node of your cluster as your driver machine. Many organizations submit Spark jobs from what's called an edge node, which is a separate machine that isn't used to store data or perform computation. Since the edge node is separate from the cluster, it can go down without affecting the rest of the cluster. Edge nodes are also used for data science work on aggregate data that has been retrieved from the cluster. For example, a data scientist might submit a Spark job from an edge node to transform a 10 TB dataset into a 1 GB aggregated dataset, and then do analytics on the edge node using tools like R and Python. If you plan on setting up an edge node, make sure that machine doesn't have the DataNode or NodeManager components installed, since these are the data storage and compute components of the cluster. You can check this on the Hosts tab in Ambari.
Live Debugging
In this section we'll learn how to connect a running Spark program to a debugger, which will allow us to set breakpoints and step through the code line by line. Debugging Spark is done like any other program when running directly from an IDE, but debugging a remote cluster requires some configuration.
On the machine where you plan on submitting your Spark job, run this line from the terminal:
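The exact line isn't preserved here; one common approach is to set SPARK_SUBMIT_OPTS so that the JVM launched by spark-submit opens a JDWP debug port, matching port 8086 used below:

```
# suspend=n lets the job start immediately; use suspend=y to make it wait for the debugger
export SPARK_SUBMIT_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8086
```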
This will let you attach a debugger at port 8086. You'll need to make sure port 8086 is able to receive inbound connections. Then in IntelliJ go to Run > Edit Configurations:
Then click the + button at the upper-left and add a new remote configuration. Fill the host and port fields with your host IP address and use port 8086.
If you run this debug configuration from your IDE immediately after submitting your Spark job, the debugger will attach and Spark will stop at breakpoints.
NOTE: To resubmit the word count code we must first remove the directory created earlier. Use the command hdfs dfs -rm -r /tmp/shakespeareWordCount
on the sandbox shell to remove the old directory.
Submit the spark job on the sandbox shell again and debug on your IDE immediately after executing the spark-submit command.
Remember that to run your code we used the following command:
spark-submit --class 'Hortonworks.SparkTutorial.Main' --master yarn --deploy-mode client ./SparkTutorial-1.0-SNAPSHOT.jar
You can also inspect the values of live variables within your program. This is invaluable when trying to pin down bugs in your code.
Summary
We have installed the dependencies and the Integrated Development Environment needed to develop and deploy code to the sandbox, and we learned how easy it can be to deploy your code to the cloud or an external cluster. Additionally, we enabled live code debugging to fix bugs and improve your code in a realistic setting.