
A Simple Hadoop Application Example

August 4, 2024 · Data Analysis
Hadoop is a distributed-system infrastructure used primarily for storing and processing big data. It allows large data sets to be processed and generated across a cluster using a simple programming model. Hadoop consists mainly of two parts: HDFS (Hadoop Distributed File System) and the MapReduce programming model.


Preparation

First, inspect the dataset (a small sample of the data and an example record).

Configure the POM file and create the package

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>stock_daily</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>3.1.2</version>
            <scope>test</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.1.2</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>3.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-configuration2</artifactId>
            <version>2.7</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.30</version>
        </dependency>
    </dependencies>
</project>

Code

Create a class that extends Configured and implements the Tool interface. The Configured base class lets Hadoop command-line tools manage configuration files such as yarn-site.xml and mapred-site.xml. ToolRunner serves as the program's entry point: it invokes the run method and passes the command-line arguments through to it.

Override the run method to create the Job and configure the MapReduce classes. The Configuration class is what manages Hadoop's configuration files; args holds the command-line arguments, which the JVM reads in.

The Mapper class and the map method

The Mapper splits the file into lines and hands each line to the map method, using the line's byte offset as the key and the line's content as the value. The map method extracts the fields and computes the down index: downIndex = (close − open) / (close − low + 1). It then writes the stock code as the key and the line's down index as the value into the Context (context.write emits the output data), which becomes the input of the subsequent reduce.
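As a standalone sanity check of the formula, the computation can be applied to a single record. The field layout (stock code at index 0, open at 2, close at 3, low at 5) is inferred from the map method below; the sample values themselves are made up for illustration.

```java
public class DownIndexDemo {
    // downIndex = (close - open) / (close - low + 1), per the formula above
    static double downIndex(double open, double close, double low) {
        return (close - open) / (close - low + 1);
    }

    public static void main(String[] args) {
        // hypothetical record in the tab-separated layout the mapper assumes:
        // code, date, open, close, high, low
        String line = "000001\t2024-01-02\t10.0\t9.5\t10.2\t9.0";
        String[] fields = line.split("\t");
        double open = Double.parseDouble(fields[2]);
        double close = Double.parseDouble(fields[3]);
        double low = Double.parseDouble(fields[5]);
        // (9.5 - 10.0) / (9.5 - 9.0 + 1) = -0.5 / 1.5 ≈ -0.333
        System.out.println(fields[0] + "\t" + downIndex(open, close, low));
    }
}
```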

The Reducer class and the reduce method

The shuffle phase fetches the map output files and automatically groups the values by key into one container; the reduce method then iterates over them, sums them, and computes the average down index.
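One subtlety worth noting: a reducer that computes an average cannot safely double as a combiner, because an average of partial averages generally differs from the overall average. A minimal sketch with hypothetical values:

```java
import java.util.Arrays;
import java.util.List;

public class AvgCombinerDemo {
    // plain arithmetic mean, as the reduce method computes per stock code
    static double avg(List<Double> xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.size();
    }

    public static void main(String[] args) {
        // one stock's down-index values, hypothetically split across two map tasks
        List<Double> split1 = Arrays.asList(1.0, 2.0, 3.0);
        List<Double> split2 = Arrays.asList(10.0);

        double trueAvg = avg(Arrays.asList(1.0, 2.0, 3.0, 10.0)); // 4.0
        // if each split were pre-averaged by a combiner, the reducer
        // would end up averaging the averages instead:
        double avgOfAvgs = avg(Arrays.asList(avg(split1), avg(split2))); // 6.0
        System.out.println(trueAvg + " != " + avgOfAvgs);
    }
}
```

For a sum or a max the two would agree, which is why combiners are usually reserved for associative, commutative operations.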

Complete code

package com.zlh;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import java.io.IOException;

/**
 * Calculate and output the code and down-index values of each stock.
 */
public class stock_daily extends Configured implements Tool {
    /**
     * The entrance of the program.
     * @param args passed through as the parameters of the run method.
     */
    public static void main(String[] args) throws Exception {
        //run stock_daily as a MapReduce job
        int res = ToolRunner.run(new stock_daily(), args);
        //exit the JVM with the job's status code
        System.exit(res);
    }//of main

    /**
     * Construct the job and execute it.
     * @param args the given string args.
     */
    @Override
    public int run(String[] args) throws Exception {
        //set Hadoop configuration parameter information
        Configuration conf = new Configuration();

        //construct the job
        System.out.println("Creating and configuring the job");
        Job job = Job.getInstance(conf, "stock_daily");

        //indicate the jar that contains the job classes
        job.setJarByClass(stock_daily.class);

        //indicate the mapper and reducer classes; the reducer is not reused as a
        //combiner here, because pre-averaging partial groups would distort the result
        job.setMapperClass(map.class);
        job.setReducerClass(reduce.class);

        //indicate the format of the input: a text file
        job.setInputFormatClass(TextInputFormat.class);
        TextInputFormat.addInputPath(job, new Path(args[0]));

        //indicate the format of the output: key is Text, value is DoubleWritable
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        TextOutputFormat.setOutputPath(job, new Path(args[1]));

        //execute the MapReduce job
        boolean res = job.waitForCompletion(true);
        if (res) {
            return 0;
        }//of if
        else {
            return -1;
        }//of else
    }//of run

    /**
     * The map class splits the data into lines that become the input of the map method.
     */
    public static class map extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        //define the map output key and value
        private final static DoubleWritable downIndex = new DoubleWritable();
        private Text stock = new Text();

        /**
         * Use each line's stock code as the key and its down index as the value.
         * @param key the byte offset of the line.
         * @param value the text of the line.
         * @param context the program context.
         */
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            //split the line and compute the down index:
            //downIndex = (close - open) / (close - low + 1)
            String[] fields = value.toString().split("\t");
            stock.set(fields[0]);
            double openPrice = Double.parseDouble(fields[2]);
            double closePrice = Double.parseDouble(fields[3]);
            double lowPrice = Double.parseDouble(fields[5]);
            downIndex.set((closePrice - openPrice) / (closePrice - lowPrice + 1));
            context.write(stock, downIndex);
        }//of map
    }//of class map

    /**
     * The reduce class computes the final output.
     */
    public static class reduce extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        /**
         * Output the average down index of every stock code.
         * @param key the output key of the mapper.
         * @param values the output values of the mapper.
         * @param context the context of MapReduce.
         */
        @Override
        public void reduce(Text key, Iterable<DoubleWritable> values, Context context)
                throws IOException, InterruptedException {
            double sum = 0;
            int nums = 0;
            //traverse the iterable values and sum them
            for (DoubleWritable value : values) {
                sum += value.get();
                nums++;
            }//of for
            context.write(key, new DoubleWritable(sum / nums));
        }//of reduce
    }//of class reduce
}//of class stock_daily

Upload to the cluster and execute

Package the project as a jar and upload it to the Hadoop cluster (see Hadoop Application 1 for the packaging steps).

Use the pscp command in the Windows Command Prompt to upload the jar (this requires PuTTY to be installed).

Directories can be transferred the same way by adding -r after pscp.

After starting the cluster, run hadoop jar with the input file location (which must be in HDFS, not on the local Linux filesystem) and the output directory. If this reports a class-not-found error, two configuration files need to be modified.

1. mapred-site.xml
Add two properties:
<property>
   <name>mapreduce.admin.user.env</name>
   <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
</property>
<property>
   <name>yarn.app.mapreduce.am.env</name>
   <value>HADOOP_MAPRED_HOME=$HADOOP_COMMON_HOME</value>
</property>

2. yarn-site.xml
Add configuration for viewing container logs locally:
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>hadoop-installation-directory/logs/userlogs</value>
</property>
<property>
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>108000</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>	<!-- values below 1536 cause MapReduce jobs to fail -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>   <!-- prevents the top-level scheduler from requesting too much memory -->
</property>

Set the ratio of virtual memory to physical memory, to prevent containers from being killed for running out of virtual memory:
<property>  
  <name>yarn.nodemanager.vmem-pmem-ratio</name>  
  <value>3</value>  
</property>




If the configuration above is confirmed correct but you still hit memory errors, ApplicationMaster errors, or jobs stuck at 0% with no identifiable cause, try the following (these property settings are for HA mode):

(1) mapred-site.xml
Change mapreduce.framework.name to:
------------------------------------
vix.mapreduce.framework.name
yarn
------------------------------------

(2) yarn-site.xml
Change yarn.resourcemanager.address to:
------------------------------------
vix.yarn.resourcemanager.address
master-node-address:18040
------------------------------------

Change yarn.resourcemanager.scheduler.address to:
------------------------------------
vix.yarn.resourcemanager.scheduler.address
master-node-address:18030
------------------------------------

The file locations and paths are shown in the figure below.

After making these changes, copy the files to the other two nodes and restart the cluster.

Then execute the jar (first upload the data to the Hadoop cluster with the hdfs dfs -put command).

Test run and results

PS: if running the jar with Hadoop fails, look for the error in the log files, under the resourcemanager directory inside logs.
