
Integrating Ranger 2.1 with CDH 6.3.2


Ranger Introduction

To integrate Ranger with the CDH platform, Ranger has to be built from source so that the compatibility issues can be resolved. Pre-built tar packages are available online, but those are better suited to the community (Apache) component versions. Example download address: https://mirrors.tuna.tsinghua.edu.cn/apache/ranger/2.4.0/apache-ranger-2.4.0.tar.gz
The latest Ranger release on GitHub is currently 2.4; this build uses 2.1, because a version that is too new brings too many pitfalls. The repository address is given in the clone step below.

Pre-build Configuration

2.1 Build Dependencies

1) JDK version: 1.8
2) Maven version: 3.6.2

  • Download Maven
[root@hadoop1 packages]# wget https://archive.apache.org/dist/maven/maven-3/3.6.2/binaries/apache-maven-3.6.2-bin.tar.gz
  • Extract and install (environment variable setup is sketched after this list)
[root@hadoop1 packages]# tar zxvf apache-maven-3.6.2-bin.tar.gz -C /data
[root@hadoop1 data]# mv apache-maven-3.6.2/ maven

3) Install Git

[root@hadoop1 ~]# yum install git -y
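
To make this Maven the default on the build host, it is common to export MAVEN_HOME and extend PATH. The snippet below is a minimal sketch assuming the /data/maven install path used above; the /etc/profile.d/maven.sh file name is just one conventional choice.

[root@hadoop1 data]# cat >> /etc/profile.d/maven.sh << 'EOF'
export MAVEN_HOME=/data/maven
export PATH=$MAVEN_HOME/bin:$PATH
EOF
[root@hadoop1 data]# source /etc/profile.d/maven.sh
[root@hadoop1 data]# mvn -version     # should report Apache Maven 3.6.2
[root@hadoop1 data]# java -version    # should report 1.8.x
[root@hadoop1 data]# git --version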

2.2 Downloading the Source and Modifying the POM

2.2.1 Clone the source from GitHub

[root@hadoop1 ranger]# git clone --branch release-ranger-2.1.0 https://github.com/apache/ranger.git

If the clone is slow, you can download the zip archive locally instead and then upload it to the server.
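
For reference, the tag can also be fetched as an archive; the URL below follows GitHub's standard tag-archive pattern and is only a sketch of that route:

[root@hadoop1 ranger]# wget https://github.com/apache/ranger/archive/refs/tags/release-ranger-2.1.0.zip -O ranger-2.1.0.zip
[root@hadoop1 ranger]# unzip ranger-2.1.0.zip    # extracts to ranger-release-ranger-2.1.0/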

2.2.2 Modify the POM file

1) Add the following to the repositories section so that the Cloudera (CDH) artifacts can be resolved and dependency downloads go faster.

<repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
    <releases>
        <enabled>true</enabled>
    </releases>
    <snapshots>
        <enabled>false</enabled>
    </snapshots>
</repository>

2) Change the component versions to the corresponding CDH versions

<hadoop.version>3.0.0-cdh6.3.2</hadoop.version>
<hbase.version>2.1.0-cdh6.3.2</hbase.version>
<hive.version>2.1.1-cdh6.3.2</hive.version>
<kafka.version>2.2.1-cdh6.3.2</kafka.version>
<solr.version>7.4.0-cdh6.3.2</solr.version>
<zookeeper.version>3.4.5-cdh6.3.2</zookeeper.version>

The main changes cover hadoop, kafka, hbase, and so on; change only the components you actually need (a sed sketch for applying these edits in one go follows at the end of this subsection).
3) Change the Elasticsearch version

<elasticsearch.version>7.13.0</elasticsearch.version>

Just keep it consistent with the version in your environment.
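
If you prefer not to edit the properties by hand, the same version changes can be applied to the root pom.xml with sed, as in the illustrative sketch below (run from the source root; adjust the versions to your own CDH/Elasticsearch release):

[root@hadoop1 ranger-2.1]# sed -i \
    -e 's|<hadoop.version>.*</hadoop.version>|<hadoop.version>3.0.0-cdh6.3.2</hadoop.version>|' \
    -e 's|<hbase.version>.*</hbase.version>|<hbase.version>2.1.0-cdh6.3.2</hbase.version>|' \
    -e 's|<hive.version>.*</hive.version>|<hive.version>2.1.1-cdh6.3.2</hive.version>|' \
    -e 's|<kafka.version>.*</kafka.version>|<kafka.version>2.2.1-cdh6.3.2</kafka.version>|' \
    -e 's|<solr.version>.*</solr.version>|<solr.version>7.4.0-cdh6.3.2</solr.version>|' \
    -e 's|<zookeeper.version>.*</zookeeper.version>|<zookeeper.version>3.4.5-cdh6.3.2</zookeeper.version>|' \
    -e 's|<elasticsearch.version>.*</elasticsearch.version>|<elasticsearch.version>7.13.0</elasticsearch.version>|' \
    pom.xml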

2.2.3 Hive Version Compatibility

Apache Ranger 2.1.0 targets Hive 3.1.2, while CDH 6.3.2 ships Hive 2.1.1. The two are incompatible, and HiveServer2 reports errors at startup with the stock plugin.

  1. Download Apache Ranger 1.2.0
[root@hadoop1 ranger-1.2]# git clone --branch release-ranger-1.2.0 https://github.com/apache/ranger.git
  2. Delete the hive-agent plugin from the Apache Ranger 2.1.0 tree
[root@hadoop1 ranger-1.2]# rm -rf /data/packages/ranger/ranger-2.1/hive-agent/
  3. Copy the hive-agent plugin from Apache Ranger 1.2.0 into the Apache Ranger 2.1.0 directory.
[root@hadoop1 ranger-1.2]# cp -r ranger/hive-agent/ /data/packages/ranger/ranger-2.1
  4. Replace the pom under hive-agent with the pom below (a replacement sketch follows after it).
<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <artifactId>ranger-hive-plugin</artifactId>
    <name>Hive Security Plugin</name>
    <description>Hive Security Plugins</description>
    <packaging>jar</packaging>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <parent>
        <groupId>org.apache.ranger</groupId>
        <artifactId>ranger</artifactId>
        <version>2.1.0</version>
        <relativePath>..</relativePath>
    </parent>
    <dependencies>
        <dependency>
            <groupId>commons-lang</groupId>
            <artifactId>commons-lang</artifactId>
            <version>${commons.lang.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-common</artifactId>
            <version>2.1.1-cdh6.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-service</artifactId>
            <version>2.1.1-cdh6.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>2.1.1-cdh6.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-metastore</artifactId>
            <version>2.1.1-cdh6.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>2.1.1-cdh6.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>2.1.1-cdh6.3.2</version>
            <classifier>standalone</classifier>
        </dependency>
        <dependency>
            <groupId>org.apache.ranger</groupId>
            <artifactId>ranger-plugins-common</artifactId>
            <version>${project.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.ranger</groupId>
            <artifactId>ranger-plugins-audit</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpcore</artifactId>
            <version>${httpcomponents.httpcore.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.ranger</groupId>
            <artifactId>ranger-plugins-common</artifactId>
            <version>2.1.1-SNAPSHOT</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>
    <repositories>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
    </repositories>
    <build>
        <testResources>
            <testResource>
                <directory>src/test/resources</directory>
                <includes>
                    <include>**/*</include>
                </includes>
                <filtering>true</filtering>
            </testResource>
        </testResources>
    </build>
</project>
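
In practice, swapping in the pom above can be done like this (paths as used earlier; backing up the copied 1.2.0 pom first is just a precaution):

[root@hadoop1 ranger-2.1]# cd /data/packages/ranger/ranger-2.1/hive-agent
[root@hadoop1 hive-agent]# cp pom.xml pom.xml.bak
[root@hadoop1 hive-agent]# vim pom.xml    # replace the contents with the pom shown above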

The pom above makes three main changes:

  • Replace the Hive-related versions with 2.1.1-cdh6.3.2 (adjust as needed). Without this, the default is Hive 3.1.2 (hive.version is already set to 2.1.1-cdh6.3.2 in the Ranger parent pom, but that setting does not take effect here).
  • Add the repository below after </dependencies>. The same configuration was already added to the Ranger parent pom, but in practice the artifacts were still not found without repeating it here:
    <repositories>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
    </repositories>
  • In the pom.xml of the newly copied hive-agent directory, set the Ranger parent version to 2.1.0:

vim /data/packages/ranger/ranger-2.1/hive-agent/pom.xml

    <parent>
        <groupId>org.apache.ranger</groupId>
        <artifactId>ranger</artifactId>
        <version>2.1.0</version>
        <relativePath>..</relativePath>
    </parent>
  5. Add the ranger-plugins-common dependency (a quick dependency check follows below)

Add the following dependency under <dependencies>; otherwise the classes referenced by the code added in the later source changes cannot be found.

        <dependency>
            <groupId>org.apache.ranger</groupId>
            <artifactId>ranger-plugins-common</artifactId>
            <version>2.1.1-SNAPSHOT</version>
            <scope>compile</scope>
        </dependency>
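
As an optional sanity check (illustrative), render the hive-agent module's effective POM from the source root and confirm the Hive artifacts resolve to the CDH versions:

[root@hadoop1 ranger-2.1]# /data/maven/bin/mvn -q -pl hive-agent help:effective-pom -Doutput=/tmp/hive-agent-pom.xml
[root@hadoop1 ranger-2.1]# grep -c '2.1.1-cdh6.3.2' /tmp/hive-agent-pom.xml    # should be greater than 0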

2.2.4 Kylin Plugin POM Changes

1) Modify the /data/packages/ranger/ranger-2.1/ranger-kylin-plugin-shim/pom.xml file

<dependency>
    <groupId>org.apache.kylin</groupId>
    <artifactId>kylin-server-base</artifactId>
    <version>${kylin.version}</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>org.apache.kylin</groupId>
            <artifactId>kylin-external-htrace</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.calcite</groupId>
            <artifactId>calcite-core</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.calcite</groupId>
            <artifactId>calcite-linq4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

2) Modify the /data/packages/ranger/ranger-2.1/plugin-kylin/pom.xml file

<dependency>
    <groupId>org.apache.kylin</groupId>
    <artifactId>kylin-server-base</artifactId>
    <version>${kylin.version}</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>org.apache.kylin</groupId>
            <artifactId>kylin-external-htrace</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.calcite</groupId>
            <artifactId>calcite-core</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.calcite</groupId>
            <artifactId>calcite-linq4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

2.2.5 Modify the distro POM

vim /data/packages/ranger/ranger-2.1/distro/pom.xml

<artifactId>maven-assembly-plugin</artifactId>
<!--<version>${assembly.plugin.version}</version>-->
<version>3.3.0</version>

This just hard-codes the assembly plugin version; without it the build does not get past this module.
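
A quick way to confirm that the hard-coded version is in place (illustrative):

[root@hadoop1 ranger-2.1]# grep -n -A 2 'maven-assembly-plugin' distro/pom.xml | head -n 12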

2.3 Source Changes for Compatibility

2.3.1 Modify RangerDefaultAuditHandler.java

vim /data/packages/ranger/ranger-2.1/agents-common/src/main/java/org/apache/ranger/plugin/audit/RangerDefaultAuditHandler.java
Add the following to the imports in this file:

import org.apache.ranger.authorization.hadoop.config.RangerConfiguration;

Below the class declaration line starting with public class RangerDefaultAuditHandler implements, add the following code:

protected static final String RangerModuleName =  RangerConfiguration.getInstance().get(RangerHadoopConstants.AUDITLOG_RANGER_MODULE_ACL_NAME_PROP , RangerHadoopConstants.DEFAULT_RANGER_MODULE_ACL_NAME);

2.3.2 Modify RangerConfiguration.java

vim /data/packages/ranger/ranger-2.1/agents-common/src/main/java/org/apache/ranger/authorization/hadoop/config/RangerConfiguration.java
Below the line public class RangerConfiguration extends Configuration, add the following code:

private static volatile RangerConfiguration config;

public static RangerConfiguration getInstance() {
	RangerConfiguration result = config;
	if (result == null) {
		synchronized (RangerConfiguration.class) {
			result = config;
			if (result == null) {
				config = result = new RangerConfiguration();
			}
		}
	}
	return result;
}

2.3.3 Modify RangerElasticsearchPlugin.java

vim /data/packages/ranger/ranger-2.1/ranger-elasticsearch-plugin-shim/src/main/java/org/apache/ranger/authorization/elasticsearch/plugin/RangerElasticsearchPlugin.java
Remove the @Override annotation above the createComponents method, i.e. change:

@Override
public Collection<Object> createComponents

to:

public Collection<Object> createComponents

2.3.5 Modify ServiceKafkaClient.java

vim /data/packages/ranger/ranger-2.1/plugin-kafka/src/main/java/org/apache/ranger/services/kafka/client/ServiceKafkaClient.java
Delete line 38:

 import scala.Option;

At line 87, change:

ZooKeeperClient zookeeperClient = new ZooKeeperClient(zookeeperConnect, sessionTimeout, connectionTimeout,
				1, Time.SYSTEM, "kafka.server", "SessionExpireListener", Option.empty());

to:

ZooKeeperClient zookeeperClient = new ZooKeeperClient(zookeeperConnect, sessionTimeout, connectionTimeout,
				1, Time.SYSTEM, "kafka.server", "SessionExpireListener");

2.3.6 Implement the getHivePolicyProvider Method

vim /data/packages/ranger/ranger-2.1/hive-agent/src/main/java/org/apache/ranger/authorization/hive/authorizer/RangerHiveAuthorizer.java
Below the public boolean needTransform method, add the following code:

	@Override
	public HivePolicyProvider getHivePolicyProvider() throws HiveAuthzPluginException {
		if (hivePlugin == null) {
			throw new HiveAuthzPluginException();
		}
		RangerHivePolicyProvider policyProvider = new RangerHivePolicyProvider(hivePlugin, this);

		return policyProvider;
	}

2.4 CDH Platform Adaptation - Configuration Files

Problem:
When CDH restarts a component service, it launches a separate process for it and generates the runtime configuration directory and configuration files dynamically, so Ranger plugin configuration files deployed into the CDH installation directory cannot be read by the component service.
Solution:
Add a copyConfigFile method to /data/packages/ranger/ranger-2.1/agents-common/src/main/java/org/apache/ranger/authorization/hadoop/config/RangerPluginConfig.java:
1) Replace all of the import statements in this class with the following:

import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.filefilter.IOFileFilter;
import org.apache.commons.io.filefilter.RegexFileFilter;
import org.apache.commons.io.filefilter.TrueFileFilter;
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.log4j.Logger;
import org.apache.ranger.authorization.utils.StringUtil;
import org.apache.ranger.plugin.policyengine.RangerPolicyEngineOptions;

import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.util.*;

2) Below the private Set<String> superGroups field, add the following code:

private void copyConfigFile(String serviceType) {
	// This method adapts the CDH builds of the components; skip it for non-CDH components.
	if (serviceType.equals("presto")) {
		return;
	}
	// Log the environment variables (useful for locating the CDH runtime directories)
	Map map = System.getenv();
	Iterator it = map.entrySet().iterator();
	while (it.hasNext()) {
		Map.Entry entry = (Map.Entry) it.next();
		LOG.info("env key: " + entry.getKey() + ", value: " + entry.getValue());
	}
	// Log the system properties
	Properties properties = System.getProperties();
	Iterator itr = properties.entrySet().iterator();
	while (itr.hasNext()) {
		Map.Entry entry = (Map.Entry) itr.next();
		LOG.info("system key: " + entry.getKey() + ", value: " + entry.getValue());
	}

	String serviceHome = "CDH_" + serviceType.toUpperCase() + "_HOME";
	if ("CDH_HDFS_HOME".equals(serviceHome)) {
		serviceHome = "CDH_HADOOP_HOME";
	}

	serviceHome = System.getenv(serviceHome);
	File serviceHomeDir = new File(serviceHome);
	String userDir = System.getenv("CONF_DIR");
	File destDir = new File(userDir);

	LOG.info("-----Service Home: " + serviceHome);
	LOG.info("-----User dir: " + userDir);
	LOG.info("-----Dest dir: " + destDir);

	IOFileFilter regexFileFilter = new RegexFileFilter("ranger-.+xml");
	Collection<File> configFileList = FileUtils.listFiles(serviceHomeDir, regexFileFilter, TrueFileFilter.INSTANCE);
	boolean flag = true;
	for (File rangerConfigFile : configFileList) {
		try {
			if (serviceType.toUpperCase().equals("HIVE") && flag) {
				File file = new File(rangerConfigFile.getParentFile().getPath() + "/xasecure-audit.xml");
				FileUtils.copyFileToDirectory(file, destDir);
				flag = false;
				LOG.info("-----Source dir: " + file.getPath());
			}
			FileUtils.copyFileToDirectory(rangerConfigFile, destDir);
		} catch (IOException e) {
			LOG.error("Copy ranger config file failed.", e);
		}
	}
}				

3) Add a call to copyConfigFile as the first statement of the addResourcesForServiceType method:

private void addResourcesForServiceType(String serviceType) {

    copyConfigFile(serviceType);

    String auditCfg    = "ranger-" + serviceType + "-audit.xml";
    String securityCfg = "ranger-" + serviceType + "-security.xml";
    String sslCfg 	   = "ranger-policymgr-ssl.xml";

    if (!addResourceIfReadable(auditCfg)) {
        addAuditResource(serviceType);
    }

    if (!addResourceIfReadable(securityCfg)) {
        addSecurityResource(serviceType);
    }

    if (!addResourceIfReadable(sslCfg)) {
        addSslConfigResource(serviceType);
    }
}

2.5 CDH Platform Adaptation - enable-agent.sh

Problem

  • After the hdfs and yarn plugins are installed, the plugin jars end up under the component installation directory's share/hadoop/hdfs/lib subdirectory. HDFS and YARN cannot load these jars at startup and fail with ClassNotFoundException: Class org.apache.ranger.authorization.yarn.authorizer.RangerYarnAuthorizer not found.
  • After the kafka plugin is installed, at startup it loads the Ranger plugin configuration files from the directory that contains the plugin jars; since the configuration files cannot be read there, it fails with addResourceIfReadable(ranger-kafka-audit.xml): couldn't find resource file location.

Solution
Modify the enable-agent.sh script in the agents-common module:
vim /data/packages/ranger/ranger-2.1/agents-common/scripts/enable-agent.sh

HCOMPONENT_LIB_DIR=${HCOMPONENT_INSTALL_DIR}/share/hadoop/hdfs/lib

Change it to:

HCOMPONENT_LIB_DIR=${HCOMPONENT_INSTALL_DIR}

Then change:

elif [ "${HCOMPONENT_NAME}" = "kafka" ]; then
 
    HCOMPONENT_CONF_DIR=${HCOMPONENT_INSTALL_DIR}/config

to:

elif [ "${HCOMPONENT_NAME}" = "kafka" ]; then
 
    HCOMPONENT_CONF_DIR=${PROJ_LIB_DIR}/ranger-kafka-plugin-impl
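
Before building, the two edits can be double-checked with grep; the patterns are quoted so the shell does not expand the variables (illustrative):

[root@hadoop1 ranger-2.1]# grep -n 'HCOMPONENT_LIB_DIR=${HCOMPONENT_INSTALL_DIR}' agents-common/scripts/enable-agent.sh
[root@hadoop1 ranger-2.1]# grep -n 'ranger-kafka-plugin-impl' agents-common/scripts/enable-agent.sh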

Building the Source

Build

[root@hadoop1 ranger-2.1]# /data/maven/bin/mvn clean package install -Dmaven.test.skip=true -X
or
[root@hadoop1 ranger-2.1]# /data/maven/bin/mvn clean compile package assembly:assembly install -DskipTests -Drat.skip=true
or
[root@hadoop1 ranger-2.1]# /data/maven/bin/mvn clean package install  -Dpmd.skip=true -Dcheckstyle.skip=true -Dmaven.test.skip=true

Notes:
The first command skips both compiling and running the tests.
The second command skips running the tests but still compiles them.
The third command additionally skips the style checks (PMD, Checkstyle); if the source edits are not fully style-compliant, the build can fail with errors such as "You have 1 PMD violation", and this command works around that.
The last command is the one that finally built successfully here. The build involved a lot of trial and error and took roughly two days to sort out, noticeably more effort than building Ranger against the community component versions.

Checking the Build Artifacts

By default the packages are produced under the project's target directory:
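
For example (illustrative; the exact file names depend on which modules were built):

[root@hadoop1 ranger-2.1]# ls -lh target/*.tar.gz

Typical names are ranger-2.1.0-admin.tar.gz, ranger-2.1.0-hdfs-plugin.tar.gz, ranger-2.1.0-hive-plugin.tar.gz, ranger-2.1.0-kafka-plugin.tar.gz, and so on.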

Reference:
https://www.freesion.com/article/72991387429/

Source: https://www.cnblogs.com/yjt1993/p/17885930.html
