- Linux environment: Ubuntu 18.04 LTS
- Java JDK 8
- ssh and pdsh
- Hadoop distribution
References: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
Goal: successfully start the following daemons:
- ResourceManager
- NodeManager
- NameNode
- SecondaryNameNode
- DataNode
$ sudo apt-get install openjdk-8-jdk
$ sudo apt-get install ssh
$ sudo apt-get install pdsh
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
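Check that you can now ssh to localhost without a passphrase:
$ ssh localhost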
Note: pdsh uses rsh by default, not ssh. So, add a line to the end of the file ~/.bashrc:
export PDSH_RCMD_TYPE=ssh
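Then reload ~/.bashrc and verify the variable took effect:
$ source ~/.bashrc
$ echo $PDSH_RCMD_TYPE
# should print: ssh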
- Set variables in ~/.bashrc:
export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/path/to/hadoop
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${PATH}
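For example, assuming Hadoop was extracted to /home/username/hadoop-3.2.1 (a hypothetical path; substitute your own):
export HADOOP_HOME=/home/username/hadoop-3.2.1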
- Set variable in ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh: uncomment JAVA_HOME and set:
export JAVA_HOME=/usr/java/latest
Note:
- /usr/java/latest may instead be /usr/lib/jvm/java-1.8.0-openjdk-amd64 on Ubuntu.
- All terminal commands below are executed in the $HADOOP_HOME directory.
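If you are unsure where the JDK is installed, you can locate it with standard Ubuntu tooling:
$ readlink -f $(which java)
# prints something like /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java;
# JAVA_HOME is that path without the trailing /jre/bin/java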
Edit files:
${HADOOP_HOME}/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/username/tmp</value>
</property>
</configuration>
Note: hadoop.tmp.dir is where the NameNode, DataNode and SecondaryNameNode store their data; by default it is under the /tmp/ directory, which is deleted after a restart.
${HADOOP_HOME}/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
${HADOOP_HOME}/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
</configuration>
${HADOOP_HOME}/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
- Format the filesystem for the first time:
$ hdfs namenode -format
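If the format succeeds, a NameNode storage directory appears under the hadoop.tmp.dir set above (the path below assumes the example value from core-site.xml):
$ ls /home/username/tmp/dfs/name/current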
- Start ResourceManager daemon and NodeManager daemon:
$ start-yarn.sh
- Start NameNode daemon and DataNode daemon:
$ start-dfs.sh
Note: you can use the start-all.sh command to start all daemons at once.
Check that everything is working:
jps
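If all five daemons started correctly, jps lists them by name (the PIDs below are examples):
12001 NameNode
12002 DataNode
12003 SecondaryNameNode
12004 ResourceManager
12005 NodeManager
12006 Jps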
Create a WordCount.java file:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class WordCount {
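// Mapper: split each input line into tokens and emit (word, 1) for every token.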
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable>{
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
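// Reducer (also used as combiner): sum the counts emitted for each word.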
public static class IntSumReducer
extends Reducer<Text,IntWritable,Text,IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
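// Driver: configure the job, wire up the mapper/combiner/reducer, and submit it.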
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
To compile the WordCount.java file above, we need to set the classpath to the Hadoop classpath, with the following steps:
- Get all hadoop classpath:
hadoop classpath
- Copy the output of hadoop classpath and set it in the ~/.bashrc file:
export CLASSPATH='content of hadoop classpath'
- Apply the change:
source ~/.bashrc
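Alternatively, you can let the shell expand the classpath for you, which avoids copy-paste mistakes:
export CLASSPATH=$(hadoop classpath)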
Compile WordCount.java:
javac WordCount.java
After compiling, it generates three class files: WordCount.class, WordCount$TokenizerMapper.class, and WordCount$IntSumReducer.class.
Create a jar file from these classes:
jar cf wc.jar WordCount*.class
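You can verify that all three class files were packaged:
jar tf wc.jar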
Note: the commands are executed in the working directory $HADOOP_HOME.
- Create HDFS directories:
$ bin/hdfs dfs -mkdir -p /user/
- Copy files or directories from your local file system to HDFS:
bin/hdfs dfs -put from_local to_hdfs
Create a txt file locally and add some content:
touch wc_data.txt
nano wc_data.txt
# Add some content and close file
Upload data from local to HDFS:
hadoop fs -mkdir -p /user/username/
hadoop fs -put wc_data.txt /user/username/wc_data.txt
Run the WordCount job (either of the following works):
hadoop jar wc.jar WordCount /user/username/wc_data.txt /user/username/output
yarn jar wc.jar WordCount /user/username/wc_data.txt /user/username/output
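When the job finishes, inspect the output in HDFS (a single reducer writes its result to part-r-00000 by default):
hadoop fs -cat /user/username/output/*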
When you’re done, stop the daemons with:
stop-yarn.sh
stop-dfs.sh
You can see the NameNode status and MapReduce job status in the web interfaces:
- ResourceManager - http://localhost:8088/
- NameNode - http://localhost:9870/