
Hadoop Basics 12: Hive

孙中明 2023-03-28
Source code: https://github.com/hiszm/hadoop-train

Hive Overview

http://hive.apache.org/

  • What is Hive?
The Apache Hive ™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.

Hive is a data warehouse built on top of Hadoop. It maps structured data files to tables and provides a SQL-like query language; the SQL statements are translated into MapReduce jobs, which are then submitted to Hadoop to run.

  • Why use Hive?
    • Writing MapReduce programs by hand is inconvenient
    • Analysts are already used to SQL on relational databases (RDBMS)
    • HDFS itself has no notion of a schema
(A schema is the collection of database objects: tables, indexes, views, stored procedures, and so on.)

  • Hive characteristics
  1. Simple and easy to pick up (it provides HQL, a SQL-like query language), so people fluent in SQL but not in Java can still do big-data analysis; see the word-count sketch after this list.
  2. Flexible: the underlying execution engine can be MR, Tez, or Spark.
  3. Compute and storage designed for very large datasets; the cluster scales out easily.
  4. Unified metadata management; the metadata can be shared with Presto, Impala, and Spark SQL.
  5. High query latency: not suited to real-time processing, but a good fit for offline processing of massive datasets.
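
The first point deserves a tiny illustration: the classic word count, which takes dozens of lines of Java as a MapReduce program, is a few lines of HQL. A minimal sketch, with hypothetical table and file names:

-- 'docs' and /home/hadoop/data/docs.txt are hypothetical
create table docs(line string);
load data local inpath '/home/hadoop/data/docs.txt' overwrite into table docs;
-- split each line on whitespace, explode the words into rows, count per word
select word, count(1) as cnt
from (select explode(split(line, '\\s+')) as word from docs) t
group by word;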

Hive Architecture

  • client: shell, JDBC, Web UI (Zeppelin)
  • metastore: the metadata about databases and tables, kept in a relational database (MySQL in this setup)

Hive Deployment Architecture

Differences Between Hive and RDBMS

                  Hive                       RDBMS
Query language    HQL (SQL-like)             SQL
Data storage      HDFS                       Raw device or local FS
Indexes           None (weak support only)   Yes
Execution engine  MapReduce / Tez            Executor
Query latency     High (offline)             Low (online)
Data volume       Very large                 Large

Hive Deployment

  • Download: wget hive-1.1.0-cdh5.15.1.tar.gz (from the CDH archive URL)
  • Unpack: tar -zxvf hive-1.1.0-cdh5.15.1.tar.gz -C ~/app/
  • Set the environment variables:

export HIVE_HOME=/home/hadoop/app/hive-1.1.0-cdh5.15.1
export PATH=$HIVE_HOME/bin:$PATH

  • Apply them with source ~/.bash_profile:

[hadoop@hadoop000 app]$ source ~/.bash_profile
[hadoop@hadoop000 app]$ echo $HIVE_HOME
/home/hadoop/app/hive-1.1.0-cdh5.15.1
  • Edit the configuration

$HIVE_HOME/conf/hive-env.sh: add one line pointing Hive at the Hadoop installation.
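
The exact line isn't shown here; presumably it sets HADOOP_HOME to the Hadoop install path that appears elsewhere in this article:

# assumption: the same Hadoop path that appears in the job kill commands below
HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.15.1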

$HIVE_HOME/conf/hive-site.xml: create this file with the metastore connection settings.

[hadoop@hadoop000 conf]$ cat hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop000:3306/hadoop_hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
</configuration>
[hadoop@hadoop000 conf]$
  • MySQL driver: mysql-connector-java-5.1.27-bin.jar
Copy it into /home/hadoop/app/hive-1.1.0-cdh5.15.1/lib.
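
A minimal sketch of that copy, assuming the jar was downloaded to the current directory:

# put the MySQL JDBC driver on Hive's classpath
cp mysql-connector-java-5.1.27-bin.jar $HIVE_HOME/lib/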

  • Install the MySQL database via yum
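The install command itself isn't shown; on a CentOS box it would presumably be along these lines:

# assumption: CentOS with a MySQL 5.6 community repo available
sudo yum install -y mysql-server
sudo service mysqld start

The session below confirms the server is up and reachable: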
[hadoop@hadoop000 lib]$ mysql -uroot -proot
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.42 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Hive Quick Start

  • Start Hive (HDFS and YARN should already be running, as the jps output shows)

[hadoop@hadoop000 sbin]$ jps
3218 SecondaryNameNode
3048 DataNode
3560 NodeManager
3451 ResourceManager
2940 NameNode
3599 Jps

hive> create database test
    > ;
OK

In MySQL, the metastore's DBS table records one row per Hive database:

select * from DBS\G;

mysql> select * from DBS\G;
*************************** 1. row ***************************
          DB_ID: 1
           DESC: Default Hive database
DB_LOCATION_URI: hdfs://hadoop000:8020/user/hive/warehouse
           NAME: default
     OWNER_NAME: public
     OWNER_TYPE: ROLE
*************************** 2. row ***************************
          DB_ID: 3
           DESC: NULL
DB_LOCATION_URI: hdfs://hadoop000:8020/user/hive/warehouse/hive.db
           NAME: hive
     OWNER_NAME: hadoop
     OWNER_TYPE: USER
*************************** 3. row ***************************
          DB_ID: 4
           DESC: NULL
DB_LOCATION_URI: hdfs://hadoop000:8020/test/location
           NAME: hive2
     OWNER_NAME: hadoop
     OWNER_TYPE: USER
*************************** 4. row ***************************
          DB_ID: 6
           DESC: NULL
DB_LOCATION_URI: hdfs://hadoop000:8020/user/hive/warehouse/test.db
           NAME: test
     OWNER_NAME: hadoop
     OWNER_TYPE: USER
4 rows in set (0.00 sec)

ERROR: No query specified

(The trailing error is harmless: it comes from the extra ';' after '\G'.)

Hive DDL

Hive DDL = Hive Data Definition Language

Database Operations

  • Create Database
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
  [COMMENT database_comment]
  [LOCATION hdfs_path]
  [MANAGEDLOCATION hdfs_path]
  [WITH DBPROPERTIES (property_name=property_value, ...)];

hive> create DATABASE hive_test;
OK
Time taken: 0.154 seconds
hive>

On HDFS the new database lands under the default path, /user/hive/warehouse/hive_test.db.
The built-in default database has no default.db directory of its own; its tables live directly under /user/hive/warehouse/.
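
A quick sanity check from the shell, as a sketch (the path matches the DBS output above):

# hive_test.db should now be listed under the warehouse directory
hadoop fs -ls /user/hive/warehouse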

Custom location

hive> create DATABASE hive_test2 LOCATION '/test/hive';
OK
Time taken: 0.119 seconds
hive>

[hadoop@hadoop000 network-scripts]$ hadoop fs -ls /test/
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2020-09-09 06:29 /test/hive
Custom properties

DESC DATABASE [EXTENDED] db_name;   -- EXTENDED also prints the DBPROPERTIES

hive> create DATABASE hive_test3 LOCATION '/test/hive'
    > with DBPROPERTIES('creator'='jack');
OK
Time taken: 0.078 seconds
hive> desc database hive_test3;
OK
hive_test3	hdfs://hadoop000:8020/test/hive	hadoop	USER
Time taken: 0.048 seconds, Fetched: 1 row(s)
hive> desc database extended hive_test3;
OK
hive_test3	hdfs://hadoop000:8020/test/hive	hadoop	USER	{creator=jack}
Time taken: 0.018 seconds, Fetched: 1 row(s)
hive>
Showing the current database in the prompt

hive> set hive.cli.print.current.db;
hive.cli.print.current.db=false
hive> set hive.cli.print.current.db=true;
hive (default)>
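
A set issued this way lasts only for the current session. To make it stick, the setting can presumably go into the CLI's startup file:

-- ~/.hiverc is read by the Hive CLI at startup
set hive.cli.print.current.db=true;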
  • Drop Database
DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];

hive (default)> show databases;
OK
default
hive
hive2
hive_test
hive_test3
test
Time taken: 0.02 seconds, Fetched: 6 row(s)
hive (default)> drop database hive_test3;
OK
Time taken: 0.099 seconds
hive (default)> show databases;
OK
default
hive
hive2
hive_test
test
Time taken: 0.019 seconds, Fetched: 5 row(s)
hive (default)>
  • Search for databases

hive (default)> show databases like 'hive*';
OK
hive
hive2
hive_test
Time taken: 0.024 seconds, Fetched: 3 row(s)
hive (default)>
  • Use a database
USE database_name;

Table Operations

  • Create a table

CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name  -- table name
  [(col_name data_type [COMMENT col_comment], ... [constraint_specification])]  -- columns and their types
  [COMMENT table_comment]  -- table comment
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]  -- partition columns
  [CLUSTERED BY (col_name, col_name, ...)
   [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]  -- bucketing rules
  [SKEWED BY (col_name, col_name, ...)
   ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
   [STORED AS DIRECTORIES]]  -- skewed columns and values
  [[ROW FORMAT row_format] [STORED AS file_format]
   | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)]]  -- row delimiter, file format, or custom storage handler
  [LOCATION hdfs_path]  -- where the table's data is stored
  [TBLPROPERTIES (property_name=property_value, ...)]  -- table properties
  [AS select_statement];  -- create from a query result

hive> CREATE TABLE emp(
    >   empno int,
    >   ename string,
    >   job string,
    >   mgr int,
    >   hiredate string,
    >   sal double,
    >   comm double,
    >   deptno int
    > ) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
OK
Time taken: 0.115 seconds
hive> desc formatted emp;
OK
# col_name              data_type       comment
empno                   int
ename                   string
job                     string
mgr                     int
hiredate                string
sal                     double
comm                    double
deptno                  int

# Detailed Table Information
Database:               hive
Owner:                  hadoop
CreateTime:             Wed Sep 09 09:34:57 CST 2020
LastAccessTime:         UNKNOWN
Protect Mode:           None
Retention:              0
Location:               hdfs://hadoop000:8020/user/hive/warehouse/hive.db/emp
Table Type:             MANAGED_TABLE
Table Parameters:
        transient_lastDdlTime   1599615297

# Storage Information
SerDe Library:          org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat:            org.apache.hadoop.mapred.TextInputFormat
OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed:             No
Num Buckets:            -1
Bucket Columns:         []
Sort Columns:           []
Storage Desc Params:
        field.delim     \t
        serialization.format    \t
Time taken: 0.131 seconds, Fetched: 34 row(s)
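
The EXTERNAL keyword in the syntax above changes the drop semantics: a managed table such as emp owns its data (DROP TABLE also removes the HDFS files), while dropping an external table removes only the metadata. A hedged sketch with a hypothetical location:

-- assumption: /ext/emp already holds tab-delimited emp-style files
CREATE EXTERNAL TABLE emp_ext(
  empno int, ename string, job string, mgr int,
  hiredate string, sal double, comm double, deptno int
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/ext/emp';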
  • Load data

Data is loaded with the DML LOAD statement:

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
  [PARTITION (partcol1=val1, partcol2=val2 ...)]

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
  [PARTITION (partcol1=val1, partcol2=val2 ...)]
  [INPUTFORMAT 'inputformat' SERDE 'serde']   -- 3.0 or later

LOAD DATA LOCAL INPATH '/home/hadoop/data/emp.txt' OVERWRITE INTO TABLE emp;

hive> LOAD DATA LOCAL INPATH '/home/hadoop/data/emp.txt' OVERWRITE INTO TABLE emp;
Loading data to table hive.emp
Table hive.emp stats: [numFiles=1, totalSize=700]
OK
Time taken: 2.482 seconds
hive> select * from emp;
OK
7369	SMITH	CLERK	7902	1980-12-17	800.0	NULL	20
7499	ALLEN	SALESMAN	7698	1981-2-20	1600.0	300.0	30
7521	WARD	SALESMAN	7698	1981-2-22	1250.0	500.0	30
7566	JONES	MANAGER	7839	1981-4-2	2975.0	NULL	20
7654	MARTIN	SALESMAN	7698	1981-9-28	1250.0	1400.0	30
7698	BLAKE	MANAGER	7839	1981-5-1	2850.0	NULL	30
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7788	SCOTT	ANALYST	7566	1987-4-19	3000.0	NULL	20
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7844	TURNER	SALESMAN	7698	1981-9-8	1500.0	0.0	30
7876	ADAMS	CLERK	7788	1987-5-23	1100.0	NULL	20
7900	JAMES	CLERK	7698	1981-12-3	950.0	NULL	30
7902	FORD	ANALYST	7566	1981-12-3	3000.0	NULL	20
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
8888	HIVE	PROGRAM	7839	1988-1-23	10300.0	NULL	NULL
Time taken: 0.363 seconds, Fetched: 15 row(s)
hive>
  • Rename a table
ALTER TABLE table_name RENAME TO new_table_name;
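
For instance, a hedged one-liner (both names are hypothetical):

-- only metadata changes; for a managed table the HDFS directory is moved as well
ALTER TABLE emp_old RENAME TO emp_new;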

Hive DML

Hive DML = Hive Data Manipulation Language

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
  [PARTITION (partcol1=val1, partcol2=val2 ...)]

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
  [PARTITION (partcol1=val1, partcol2=val2 ...)]
  [INPUTFORMAT 'inputformat' SERDE 'serde']   -- 3.0 or later
  • LOCAL: when present, filepath refers to the local file system of the machine running the client; when absent, it refers to HDFS

  • OVERWRITE: when present, existing data in the table is replaced; when absent, the new data is appended

  • INPATH can be:

    • a relative path, such as project/data1
    • an absolute path, such as /user/hive/project/data1
    • a full URI with scheme and (optionally) an authority, such as hdfs://namenode:9000/user/hive/project/data1
Creating a table from a query (CTAS): create table emp_1 as select * from emp;
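
A related form copies only the schema, not the data; a small sketch (emp_2 is a hypothetical name):

-- LIKE copies emp's column definitions but no rows
create table emp_2 like emp;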

  • Export data

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/hive'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
select empno, ename, sal, deptno from emp;

hive> INSERT OVERWRITE LOCAL DIRECTORY '/tmp/hive'
    > ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    > select empno , ename ,sal,deptno from emp;
Query ID = hadoop_20200909102020_aeb2ef7d-cf18-4bcb-b903-8c6ea1719626
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1599583423179_0001, Tracking URL = http://hadoop000:8088/proxy/application_1599583423179_0001/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/bin/hadoop job -kill job_1599583423179_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2020-09-09 10:21:18,074 Stage-1 map = 0%, reduce = 0%
2020-09-09 10:21:29,109 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 5.64 sec
MapReduce Total cumulative CPU time: 5 seconds 640 msec
Ended Job = job_1599583423179_0001
Copying data to local directory /tmp/hive
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1   Cumulative CPU: 5.64 sec   HDFS Read: 4483 HDFS Write: 313 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 640 msec
OK
Time taken: 35.958 seconds
hive>

[hadoop@hadoop000 hive]$ cat 000000_0
7369	SMITH	800.0	20
7499	ALLEN	1600.0	30
7521	WARD	1250.0	30
7566	JONES	2975.0	20
7654	MARTIN	1250.0	30
7698	BLAKE	2850.0	30
7782	CLARK	2450.0	10
7788	SCOTT	3000.0	20
7839	KING	5000.0	10
7844	TURNER	1500.0	30
7876	ADAMS	1100.0	20
7900	JAMES	950.0	30
7902	FORD	3000.0	20
7934	MILLER	1300.0	10
8888	HIVE	10300.0	\N
[hadoop@hadoop000 hive]$
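
Dropping the LOCAL keyword writes to HDFS instead of the local disk; a sketch with a hypothetical target directory:

-- without LOCAL, /tmp/hive_export is an HDFS path
INSERT OVERWRITE DIRECTORY '/tmp/hive_export'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
select empno, ename, sal, deptno from emp;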

Hive QL

  • Basic queries
These work just like ordinary SQL:

select * from emp where deptno=10;

  • Aggregate functions
Aggregates such as max, min, avg, sum, and count launch a MapReduce job:

hive> select count(1) from emp where deptno=10;
Query ID = hadoop_20200909104949_1ce185de-2025-4633-9324-3e47f30fb157
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1599583423179_0002, Tracking URL = http://hadoop000:8088/proxy/application_1599583423179_0002/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/bin/hadoop job -kill job_1599583423179_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-09-09 10:50:00,361 Stage-1 map = 0%, reduce = 0%
2020-09-09 10:50:10,092 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 6.52 sec
2020-09-09 10:50:25,233 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 11.72 sec
MapReduce Total cumulative CPU time: 11 seconds 720 msec
Ended Job = job_1599583423179_0002
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 11.72 sec   HDFS Read: 9708 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 11 seconds 720 msec
OK
3
Time taken: 38.666 seconds, Fetched: 1 row(s)
hive> select * from emp where deptno=10;
OK
7782	CLARK	MANAGER	7839	1981-6-9	2450.0	NULL	10
7839	KING	PRESIDENT	NULL	1981-11-17	5000.0	NULL	10
7934	MILLER	CLERK	7782	1982-1-23	1300.0	NULL	10
Time taken: 0.209 seconds, Fetched: 3 row(s)
hive>
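
A sketch combining the other aggregates named above in one statement:

-- a single MapReduce job computes all four aggregates over emp
select max(sal), min(sal), avg(sal), sum(sal) from emp;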
  • Grouping
select deptno, avg(sal) from emp group by deptno;
Note: every column in the SELECT list that is not wrapped in an aggregate function must also appear in the GROUP BY clause.
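
To filter on the aggregated value itself, a HAVING clause can be added; a hedged sketch:

-- keep only departments whose average salary exceeds 2000
select deptno, avg(sal) as avg_sal
from emp
group by deptno
having avg(sal) > 2000;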

  • Joins
Used when a query involves more than one table:

CREATE TABLE dept(
  deptno int,
  dname string,
  loc string
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

LOAD DATA LOCAL INPATH '/home/hadoop/data/dept.txt' OVERWRITE INTO TABLE dept;

select e.empno,e.ename,e.sal,e.deptno,d.dname
from emp e join dept d
on e.deptno=d.deptno;

hive> select e.empno,e.ename,e.sal,e.deptno,d.dname
    > from emp e join dept d
    > on e.deptno=d.deptno;
Query ID = hadoop_20200909140808_8635204d-8e8a-4267-8503-ef242f022ebc
Total jobs = 1
2020-09-09 02:08:51	Starting to launch local task to process map join;	maximum memory = 477626368
2020-09-09 02:08:54	End of local task; Time Taken: 3.023 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1599583423179_0004, Tracking URL = http://hadoop000:8088/proxy/application_1599583423179_0004/
Kill Command = /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/bin/hadoop job -kill job_1599583423179_0004
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2020-09-09 14:09:06,852 Stage-3 map = 0%, reduce = 0%
2020-09-09 14:09:18,823 Stage-3 map = 100%, reduce = 0%, Cumulative CPU 6.7 sec
MapReduce Total cumulative CPU time: 6 seconds 700 msec
Ended Job = job_1599583423179_0004
MapReduce Jobs Launched:
Stage-Stage-3: Map: 1   Cumulative CPU: 6.7 sec   HDFS Read: 7649 HDFS Write: 406 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 700 msec
OK
7369	SMITH	800.0	20	RESEARCH
7499	ALLEN	1600.0	30	SALES
7521	WARD	1250.0	30	SALES
7566	JONES	2975.0	20	RESEARCH
7654	MARTIN	1250.0	30	SALES
7698	BLAKE	2850.0	30	SALES
7782	CLARK	2450.0	10	ACCOUNTING
7788	SCOTT	3000.0	20	RESEARCH
7839	KING	5000.0	10	ACCOUNTING
7844	TURNER	1500.0	30	SALES
7876	ADAMS	1100.0	20	RESEARCH
7900	JAMES	950.0	30	SALES
7902	FORD	3000.0	20	RESEARCH
7934	MILLER	1300.0	10	ACCOUNTING
Time taken: 46.765 seconds, Fetched: 14 row(s)
hive>
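
Note that the inner join drops employee 8888 (HIVE), whose deptno is NULL. A left outer join would keep that row; a sketch:

-- keeps every emp row; dname is NULL where no dept matches
select e.empno, e.ename, e.sal, e.deptno, d.dname
from emp e left outer join dept d
on e.deptno = d.deptno;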
  • Execution plan

explain
select e.empno,e.ename,e.sal,e.deptno,d.dname
from emp e join dept d
on e.deptno=d.deptno;

hive> explain
    > select e.empno,e.ename,e.sal,e.deptno,d.dname
    > from emp e join dept d
    > on e.deptno=d.deptno;
OK
STAGE DEPENDENCIES:
  Stage-4 is a root stage
  Stage-3 depends on stages: Stage-4
  Stage-0 depends on stages: Stage-3

STAGE PLANS:
  Stage: Stage-4
    Map Reduce Local Work
      Alias -> Map Local Tables:
        d
          Fetch Operator
            limit: -1
      Alias -> Map Local Operator Tree:
        d
          TableScan
            alias: d
            Statistics: Num rows: 1 Data size: 79 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: deptno is not null (type: boolean)
              Statistics: Num rows: 1 Data size: 79 Basic stats: COMPLETE Column stats: NONE
              HashTable Sink Operator
                keys:
                  0 deptno (type: int)
                  1 deptno (type: int)

  Stage: Stage-3
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: e
            Statistics: Num rows: 6 Data size: 700 Basic stats: COMPLETE Column stats: NONE
            Filter Operator
              predicate: deptno is not null (type: boolean)
              Statistics: Num rows: 3 Data size: 350 Basic stats: COMPLETE Column stats: NONE
              Map Join Operator
                condition map:
                     Inner Join 0 to 1
                keys:
                  0 deptno (type: int)
                  1 deptno (type: int)
                outputColumnNames: _col0, _col1, _col5, _col7, _col12
                Statistics: Num rows: 3 Data size: 385 Basic stats: COMPLETE Column stats: NONE
                Select Operator
                  expressions: _col0 (type: int), _col1 (type: string), _col5 (type: double), _col7 (type: int), _col12 (type: string)
                  outputColumnNames: _col0, _col1, _col2, _col3, _col4
                  Statistics: Num rows: 3 Data size: 385 Basic stats: COMPLETE Column stats: NONE
                  File Output Operator
                    compressed: false
                    Statistics: Num rows: 3 Data size: 385 Basic stats: COMPLETE Column stats: NONE
                    table:
                        input format: org.apache.hadoop.mapred.TextInputFormat
                        output format: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
                        serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
      Local Work:
        Map Reduce Local Work

  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        ListSink
