
Study Guide v2.1



1. Data Structures and Algorithms

1.1. Learning Suggestions: How to learn algorithms?

It’s quite funny that when I threw this question to ChatGPT, its answer was all about machine learning algorithms.

I learned a bit about algorithms during my high school years and attended a few algorithm contests (the Olympiad in Informatics, inc. CSP-J/S and NOIP). I also asked my honored OI coach, Ms. Shaohong Li, and one of our classmates, Neutral, who once even won a prize in NOI, for advice. I think the suggestions below could be helpful, though many of them may be OI-oriented.

1.1.1. Find out the background of an algorithm.

An algorithm is a method designed for solving a specific sort of problem. To understand an algorithm, you first need to understand the problem it aims to solve.

1.1.2. Make use of visualization.

Puzzled by those abstract math formulas? There are a lot of websites offering visualizations of algorithm processes. Animations are also easy to find on YouTube and Bilibili.

1.1.3. Try to understand the algorithms’ principles after learning them.

As the Chinese idiom goes, “知其然,知其所以然。(Know how, and know why.)” Not only do you need to learn the steps of an algorithm, but you are also expected to find out why those mathematicians and computer scientists thought this way. On a deeper level, it’s about philosophical ideas. For example, dynamic programming is a reflection of the subproblem idea.

1.1.4. Practice as much as you can.

“Practice makes perfect.” Work on some specific problems. It’s also a good choice to participate in some contests. Try your best to become familiar with the typical applications of those algorithms, so that you’ll know when to use which one.

1.1.5. Summarize the main ideas and useful tricks.

After finishing a problem (no matter whether you solved it on your own or referred to others’ solutions), summarize the process of thinking (why did you think like this?), the main idea of the solution, and any useful tricks.

1.2. Learning Material List

Of course reading books can be boring. Nevertheless, to learn something systematically, reading some books is unavoidable. Here are some books I recommend for newcomers to algorithms and connectionist AI.

1.2.1. Books

1.2.1.1. 啊哈磊. 啊哈C语言!逻辑的挑战(修订版)[M]. 北京: 电子工业出版社, 2017.

I highly recommend this book to new Chinese coders as an introduction to the C programming language, for its approachable language and comic style, which make it easy to read even for a pupil. Some readers may ask, “Hey! Isn’t this guide about algorithms? Why do you refer to a book about C?” As a response, I have to point out that, without a basic understanding of programming, it would be much harder to understand how an algorithm works. Of course you can use other languages like Java or Python, or even pseudo-code, yet learning C does help a programmer comprehend how computers work without having to think entirely like them. Most of the code later in this guide will be written in C.

1.2.1.2. 啊哈磊. 啊哈!算法[M]. 北京: 人民邮电出版社, 2014.

Again, this one is for Chinese readers. This book introduces several basic algorithms, inc. sorting, enumeration, searching, shortest paths, etc., as well as some simple data structures like the stack, queue, linked list, graph, and tree. It inherits the style of 啊哈C语言!逻辑的挑战, which makes it easy to understand.

1.2.1.3. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein. Introduction to Algorithms, fourth edition[M]. Cambridge: The MIT Press, 2022.

Finally an English book. Unlike those mentioned above, this is an academic book that clearly and precisely analyzes a wide range of algorithms and data structures, their complexity, and the ideas lying behind them.

1.2.2. Websites

1.2.2.1. LeetCode

This website is designed for those practicing basic algorithms and hunting for jobs. You can expose yourself to various algorithms, take part in weekly and biweekly contests, and seek jobs. The problems range from easy to somewhat hard ---- but most are much easier than those in professional contests.

1.2.2.2. 洛谷(Luogu)

This one is for OIers (participants in the Olympiad in Informatics) and ACMers. If you want to win a nice prize in ICPC, go there and solve those problems. However, before that you are supposed to prepare yourself with sufficient math knowledge, all kinds of algorithms, advanced data structures (e.g. segment tree, binary indexed tree, red-black tree), enough time, a persistent mind, and a little talent.

1.2.2.3. AtCoder

This Japanese website is designed for holding online contests, almost weekly. There are 3 levels of difficulty: beginner, regular, and grand. As for me, the easiest one is hard enough ... I cannot imagine how formidable those world-class contestants are.

1.2.2.4. VisuAlgo

This website provides visualization of common data structures and algorithms. It will be a lot easier to understand them through instances and animation.

1.3. Subject Matter Details

1.3.1. About Algorithms

1.3.1.1. What are algorithms?

In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. That is the conventional definition of algorithms.

In contrast, a heuristic is an approach to solving problems that do not have well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.

1.3.1.2. How do we analyze an algorithm?

There are usually two aspects of an algorithm that we need to pay attention to: time complexity and space complexity.

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform.

The space complexity of an algorithm or a data structure is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
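
As a tiny illustration (an example of my own, not taken from any particular source), consider summing an array: the number of elementary operations grows linearly with $n$, while the extra memory stays constant.

#include <stddef.h>
#include <stdint.h>

/* The loop body runs n times, so the time complexity is Theta(n).
 * Only a constant number of extra variables (s, i) are used,
 * so the auxiliary space complexity is Theta(1). */
int64_t sum(const int32_t arr[], size_t n) {
	int64_t s = 0;
	for (size_t i = 0; i != n; ++i) {
		s += arr[i];
	}
	return s;
}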

A set of notations is adopted to describe the estimated complexity of an algorithm. In layman’s terms, the big O notation (“$O(...)$”) marks an upper bound, the big Omega notation (“$\Omega(...)$”) marks a lower bound, and the Theta notation (“$\Theta(...)$”) describes both at once. The small o notation (“$o(...)$”) marks a strict upper bound (the expression inside grows much faster than the function itself), and the small omega notation (“$\omega(...)$”) a strict lower bound. The explanation here is not rigorous, but I do not want to copy a heap of formulas here ---- nobody would be interested. Usually we only need to care about big O and Theta. What fills the ellipsis is an expression that shows the overall scale and ignores details, including all constant coefficients and insignificant terms (e.g. $O(n^2\log n)$). Single elementary operations and single variables are considered to be on a constant scale, written as $O(n^0)$ or $O(1)$.
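
That said, for readers who do want the formulas, the standard definitions of the two notations we care about most fit in two lines:

$$
\begin{aligned}
f(n)=O(g(n))&\iff\exists c>0,\ \exists n_0\in\mathbb{N},\ \forall n\geq n_0:\ f(n)\leq c\cdot g(n)\\
f(n)=\Theta(g(n))&\iff f(n)=O(g(n))\ \text{and}\ g(n)=O(f(n))
\end{aligned}
$$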

1.3.1.3. Can time complexity of solutions to a particular problem be reduced infinitely?

In other words, is there a lower bound on the time complexity of solutions to a specific problem?

The answer is yes. Algorithms cannot be optimized infinitely. Each problem has its own lower bound on time complexity, depending on what the problem actually is. For instance, any comparison-based sorting algorithm needs $\Omega(n\log n)$ time in the worst case. That cannot be made any lower, no matter how smart you are. It’s the limit of our world.
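
For the curious, here is the standard sketch of the sorting bound: a comparison-based sort must distinguish all $n!$ possible orderings of its input, and each comparison has only two outcomes, so in the worst case it needs at least

$$
\log_2(n!)=\sum_{k=1}^{n}\log_2 k=\Theta(n\log n)
$$

comparisons.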

1.3.2. About Data Structures

1.4. Case Study

1.4.1. Data Structures

1.4.1.1. Stack

1.4.1.2. Queue

1.4.2. Algorithms

1.4.2.1. Prime Algorithms

Question: Given an integer $n\in[2,10^6]$, print out all primes less than it.

1.4.2.1.1. The Naive Algorithm

A prime number is a natural number greater than $1$ that is not a product of two smaller natural numbers. In other words, $x\in\mathbb{N},\ x>1$ is a prime if it satisfies that for all $p,q\in\mathbb{N}$ with $1<p,q<x$, $x\neq p\cdot q$.

According to this definition we can directly come up with a naive algorithm: enumerate all integers $x\in[2,n)$, and for each $x$ we enumerate $p,q<x$ and check whether $x=p\cdot q$.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

bool is_prime(uint32_t x) {
	for (uint32_t p = 2; p < x; ++p) {
		for (uint32_t q = 2; q < x; ++q) {
			if (x == p * q) {
				return false;
			}
		}
	}
	return true;
}

int main(void) {
	uint32_t n;
	scanf("%u", &n);
	for (uint32_t x = 2; x < n; ++x) {
		if (is_prime(x)) {
			printf("%u ", x);
		}
	}
	putchar('\n');
	return 0;
}

Its time complexity is $O(n^3)$ (each level of nested for-loop contributes a factor of $n$) and its space complexity is $\Theta(1)$ (no arrays or recursion).

Thinking a little about the division and modulo operations, we can immediately optimize it to $O(n^2)$ time. Just modify the is_prime function a bit, using “trial division” (divide $x$ by each $p$ and check whether the division is exact).

bool is_prime(uint32_t x) {
	for (uint32_t p = 2; p < x; ++p) {
		if (x % p == 0) {
			return false;
		}
	}
	return true;
}

Furthermore, if $x$ does have a factor $p\neq 1,\ p\neq x$, then definitely $\min(p, \frac{x}{p})\leq\sqrt{x}$, so the upper bound of $p$ can be lowered to $\sqrt{x}$. As a result the time complexity is lowered to $O(n\sqrt{n})$.

bool is_prime(uint32_t x) {
	for (uint32_t p = 2; p * p <= x; ++p) {
		if (x % p == 0) {
			return false;
		}
	}
	return true;
}

We can also apply a constant-factor optimization: all even numbers except $2$ are composite, so we only test odd candidates. The time complexity is still $O(n\sqrt{n})$.

int main(void) {
	uint32_t n;
	scanf("%u", &n);
	printf("2 ");
	for (uint32_t x = 3; x < n; x += 2) {
		if (is_prime(x)) {
			printf("%u ", x);
		}
	}
	putchar('\n');
	return 0;
}

1.4.2.1.2. Sieve of Eratosthenes

Eratosthenes of Cyrene (Greek: Ἐρατοσθένης) was an Ancient Greek polymath: a mathematician, geographer, poet, astronomer, and music theorist. More than 2000 years ago he introduced the sieve of Eratosthenes, an efficient method of identifying prime numbers and composite numbers.

So let’s see what he was thinking: given any number $x\in\mathbb{N},\ x>1$, we know that for any $i\in\mathbb{N},\ i>1$, $i\cdot x$ is a composite number rather than a prime, so we can mark it as “not prime”. If we enumerate $x$ in increasing order, then by the time we reach a composite $x$, its multiples have already been marked via its prime factors. Therefore, we only need to mark the multiples of primes as composite. This process is figuratively called sieving.

I’ve got a .gif animation from Wikipedia that visualizes the Sieve of Eratosthenes. However, it probably cannot be played in this kind of document. If you want to take a look, open the file with an appropriate viewer.

Optimization: let $i$ start from $x$ because for $i<x,\ i\cdot x$ has already been marked.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
#define N 1000000
	uint32_t n;
	bool not_prime[N] = {true, true};
	scanf("%u", &n);
	printf("2 ");
	for (uint32_t x = 2; x < n; x += 2) {
		not_prime[x] = true;
	}
	for (uint32_t x = 3; x * x < n; x += 2) {
		if (!not_prime[x]) {
			for (uint32_t i = x; i * x < n; ++i) {
				not_prime[i * x] = true;
			}
		}
	}
	for (uint32_t x = 3; x < n; x += 2) {
		if (!not_prime[x]) {
			printf("%u ", x);
		}
	}
	putchar('\n');
	return 0;
}

Time complexity: $O(n\log\log n)$ (the logarithms are a consequence of the fact that the partial sums of the prime harmonic series $\sum_{p\leq n}\frac{1}{p}$ grow as $\Theta(\log\log n)$), which is kind of complicated to derive; ignore it if you do not understand. Space complexity: $\Theta(n)$.

1.4.2.1.3. Euler’s/Linear Sieve

Having failed to find any corresponding information, I’m not sure whether it was the famous Euler who came up with this solution.

If you noticed that in the Sieve of Eratosthenes each composite number may be marked several times, you must have some super talent! We can genuinely optimize our algorithm by fixing this (theoretically, at least: in practice, with some tricks, the Sieve of Eratosthenes can be even faster despite its larger time complexity).

Look at the code below. Each composite number is marked only once, by its minimum prime factor, which is a by-product of this algorithm. That is to say, for each composite number $x\in\mathbb{N},\ x\in[2,n)$, let $p$ be its minimum prime factor; then $x$ will be marked only as the product $p\cdot\frac{x}{p}$. Here $p$ is primes[j] in the code, and the cofactor $\frac{x}{p}$ is $i$.

As a result, we lower the time complexity to $\Theta(n)$ because each composite number is marked exactly once. Space complexity is still $\Theta(n)$. Nevertheless, both now carry larger constant coefficients.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
#define N 1000000
	uint32_t n;
	bool not_prime[N] = {true, true};
	size_t m = 1;			  // number of primes
	uint32_t primes[N] = {2}; // a list of primes identified
	scanf("%u", &n);
	printf("2 ");
	for (uint32_t i = 3; i < n; i += 2) {
		if (!not_prime[i]) {
			primes[m++] = i; // add i to the list of primes
		}
		for (size_t j = 0; j != m && primes[j] * i < n; ++j) {
			not_prime[primes[j] * i] = true;
			// primes[j] is the minimum prime factor of (i * primes[j]) -- a useful by-product
			if (i % primes[j] == 0) {
				break;
			}
		}
	}
	for (uint32_t x = 3; x < n; x += 2) {
		if (!not_prime[x]) {
			printf("%u ", x);
		}
	}
	putchar('\n');
	return 0;
}

1.4.2.2. Sorting Algorithms

Question: Given an integer $n\in[1,10^6]$ and an integer array $a[]$ of $n$ elements that satisfies $\forall i\in\mathbb{N},\ i<n:a[i]\in[-2^{31},2^{31})$, sort $a[]$ in ascending order. (A more abstract formulation: given a comparison function $comp(x,y)\in\{\text{false},\text{true}\}$, sort $a[]$ into an order such that $\forall i\in\mathbb{N}^*,\ i<n:comp(a[i], a[i-1])=\text{false}$. However, to make things easier, here we just let $comp$ be less-than ($<$).)

1.4.2.2.1. Selection Sort

I think selection sort is the most natural way of sorting for a human. When working on small-scale mathematical problems, I sort this way myself.

The idea is very simple. Each time, find the minimum of the still-disordered elements and move it to the end of the sorted sequence. Repeat $n$ times, and you get the sorted array.

Here I show a trick for swapping two variables that takes advantage of the exclusive-or operation ($\oplus$). However, it doesn’t matter at all; you can just declare a temporary variable instead.

#include <stdint.h>
#include <stdio.h>

void swap_int32(int32_t *p, int32_t *q) {
	if (p != q) {
		*p ^= *q;
		*q ^= *p;
		*p ^= *q;
	}
}

int main(void) {
#define N 1000000
	size_t n;
	int32_t arr[N];
	scanf("%zu", &n);
	for (size_t i = 0; i != n; ++i) {
		scanf("%d", arr + i);
	}
	for (size_t i = 0, j, i_min; i != n; ++i) {
		i_min = i;
		for (j = i + 1; j != n; ++j) {
			if (arr[j] < arr[i_min]) {
				i_min = j;
			}
		}
		swap_int32(arr + i, arr + i_min);
	}
	printf("%d", arr[0]);
	for (size_t i = 1; i != n; ++i) {
		printf(" %d", arr[i]);
	}
	putchar('\n');
	return 0;
}

Time complexity: $O(n^2)$. More precisely it’s $T(n)=\sum_{i=0}^{n-1}(n-i)=\frac{1}{2}n(n+1)$. Space complexity is $\Theta(1)$ if the essential space to store the array is ignored.

1.4.2.2.2. Bubble Sort

This method is also among the simplest. Check whether there exists an integer $i\in[1,n)$ such that $a[i-1]>a[i]$ (we do not use $\geq$ here because adjacent elements are allowed to be equal). If so, swap $a[i-1]$ and $a[i]$. Repeat this process until no such $i$ is found.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

void swap_int32(int32_t *p, int32_t *q) {
	if (p != q) {
		*p ^= *q;
		*q ^= *p;
		*p ^= *q;
	}
}

int main(void) {
#define N 1000000
	size_t n;
	int32_t arr[N];
	scanf("%zu", &n);
	for (size_t i = 0; i != n; ++i) {
		scanf("%d", arr + i);
	}
	for (bool flag = true; flag;) {
		flag = false;
		for (size_t i = 1; i != n; ++i) {
			if (arr[i - 1] > arr[i]) {
				flag = true;
				swap_int32(arr + (i - 1), arr + i);
			}
		}
	}
	printf("%d", arr[0]);
	for (size_t i = 1; i != n; ++i) {
		printf(" %d", arr[i]);
	}
	putchar('\n');
	return 0;
}

Time complexity: $O(n^2)$. The best case is that all the given numbers are already in order, which still costs one full pass ($\Theta(n)$). The worst case is that they are in descending order, which needs about $\frac{1}{2}n(n-1)$ swaps (one per inversion). Space is $\Theta(1)$ if the essential space for storing the array is ignored.

1.4.2.2.3. Insertion Sort

Let’s first consider how to insert a number into an array. We first shift the elements at and after the target position one step toward the end; then we can put the new element into the emptied slot.
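
Here is a minimal sketch of that single step (the helper name insert_at is my own; the full sorting code below inlines this logic instead):

#include <stddef.h>
#include <stdint.h>

/* Insert value at position pos of arr (current length len),
 * shifting arr[pos..len-1] one step toward the end.
 * The caller must guarantee room for len + 1 elements. */
void insert_at(int32_t arr[], size_t len, size_t pos, int32_t value) {
	for (size_t k = len; k > pos; --k) {
		arr[k] = arr[k - 1];
	}
	arr[pos] = value;
}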

Insertion sort inserts the disordered elements one by one into their appropriate positions in the ordered sequence. A simple way to find such an “appropriate position” for each unsorted element $a[i]$ ($i$ is at the same time the size of the sorted portion) is to scan for the first index $j\in\mathbb{N},\ j\leq i$ at which $a[i]$ can be inserted without breaking the ordering.

If you have learned a little more, you’ll be aware that there is a faster way to find that position ---- binary search ---- since the sorted part is monotone.

#include <stdint.h>
#include <stdio.h>

int main(void) {
#define N 1000000
	size_t n;
	int32_t arr[N];
	scanf("%zu", &n);
	for (size_t i = 0; i != n; ++i) {
		scanf("%d", arr + i);
	}
	for (size_t i = 0, j, k; i != n; ++i) {
		if (arr[i] <= arr[0]) {
			int32_t t = arr[i];
			for (j = i; j; --j) {
				arr[j] = arr[j - 1];
			}
			arr[0] = t;
		} else {
			for (j = 1; j != i; ++j) {
				if (arr[j - 1] < arr[i] && arr[i] <= arr[j]) {
					break;
				}
			}
			int32_t t = arr[i];
			for (k = i; k > j; --k) {
				arr[k] = arr[k - 1];
			}
			arr[j] = t;
		}
	}
	printf("%d", arr[0]);
	for (size_t i = 1; i != n; ++i) {
		printf(" %d", arr[i]);
	}
	putchar('\n');
	return 0;
}
#include <bits/stdc++.h>

int main() {
	size_t n;
	std::cin >> n;
	std::vector<int32_t> arr(n);
	for (size_t i = 0; i != n; ++i) {
		std::cin >> arr[i];
	}
	for (size_t i = 0; i != n; ++i) {
		auto iter = std::lower_bound(arr.begin(), arr.begin() + i, arr[i]);
		arr.insert(iter, arr[i]);
	}
	std::cout << arr[0];
	for (size_t i = 1; i != n; ++i) {
		std::cout << ' ' << arr[i];
	}
	std::cout << std::endl;
	return 0;
}

No matter whether binary search is adopted, the time complexity is $O(n^2)$, because each insertion still takes $O(n)$ for the shifting. Space complexity is $\Theta(1)$ if the essential space for storing the array is ignored.

1.4.2.2.4. Bucket Sort

Suppose that $a[i]\in[0,10^6)$. In that case we have a non-comparison-based sorting idea, counting sort: count how many times each number appears. Afterwards, enumerate every possible number in increasing order and print it as many times as it appears in the original array ($0$ times if it never appears).

#include <stdint.h>
#include <stdio.h>

int main(void) {
#define M 1000000
	size_t n;
	static size_t cnt[M] = {0}; /* static: an 8 MB array would risk overflowing the stack */
	scanf("%zu", &n);
	for (size_t i = 0; i != n; ++i) {
		int32_t t;
		scanf("%d", &t);
		++cnt[t];
	}
	for (int32_t i = 0; i != M; ++i) {
		for (size_t j = 0; j != cnt[i]; ++j) {
			printf("%d ", i);
		}
	}
	putchar('\n');
	return 0;
}

Time complexity is $O(M+n)$, in which $M$ denotes the length of the range the elements lie in. Space complexity is $O(M)$. Counting sort is suitable for arrays whose elements are distributed relatively evenly within a small range.

However, array elements can usually lie in such a wide range (e.g. $[-2^{31},2^{31})$) that it’s infeasible to keep a counter for the entire interval. An alternative is to divide it into several subintervals: sort each subinterval, and the whole array follows.

To be more specific, we prepare a set of “buckets”, one per subinterval; for every $a[i]$, put it into the proper bucket, then sort the elements within each bucket.

#include <bits/stdc++.h>

int main() {
	constexpr size_t M = (1ull << 32), W = (1ull << 12);
	size_t n;
	std::vector<int32_t> buckets[M / W];
	std::cin >> n;
	for (size_t i = 0; i != n; ++i) {
		int32_t t;
		std::cin >> t;
		buckets[(t + M / 2) / W].emplace_back(t);
	}
	for (size_t i = 0; i != M / W; ++i) {
		std::sort(buckets[i].begin(), buckets[i].end());
		for (int32_t &x : buckets[i]) {
			std::cout << x << ' ';
		}
	}
	std::cout << std::endl;
	return 0;
}

I used std::sort in my code. If the elements are distributed evenly, each bucket holds about $\frac{n}{M/W}$ elements, so the theoretical time complexity is $O\left(\frac{M}{W}+n\log\frac{n}{M/W}\right)$. Space complexity can be regarded as $O(\frac{M}{W}+n)$ because std::vector allocates memory on demand.

Coding this algorithm in pure C is somewhat cumbersome, which is why I used C++.

1.4.2.2.5. Radix Sort

First, let’s learn something about dictionary order.

Narrow dictionary order is just for strings. You may have looked up a word in a dictionary before. We define the less-than operation on strings in this way: for two strings, if their first letters differ, the inequality is true when the first letter of one comes before that of the other in the alphabet (usually ASCII in computer science), and false otherwise. If the first letters are the same, we compare the next position, repeating until we find a pair of different letters (if one string ends first, it comes first).

Generalized dictionary order is for tuples. This means comparing the first elements of two tuples; if they are the same, then compare the next, and repeat.
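
As a small sketch of my own covering the narrow (string) case ---- for ASCII strings this is equivalent to strcmp(s, t) < 0 from the C standard library ---- dictionary order can be implemented like this:

#include <stdbool.h>

/* Returns true if s comes strictly before t in dictionary (ASCII) order. */
bool str_less(const char *s, const char *t) {
	while (*s != '\0' && *s == *t) { /* skip the common prefix */
		++s;
		++t;
	}
	return *s < *t; /* '\0' compares less than any letter, so a prefix comes first */
}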

You may need these two concepts to fully grasp how this sorting algorithm works. A sorting algorithm is stable if the relative order of equal elements does not change after sorting; otherwise, it is unstable. You might think, “Since they are equal, why does their order matter?” That thought is due to the simplicity of our sorting requirement here: if the elements of our array are $(key, value)$ pairs and we sort them by $key$, stability starts to make sense.

To make things easy, here we assume $a[i]\in[0,2^{32})$.

Now let’s look at the idea of radix sort. Each time we focus on one digit position of the elements (blanks considered as $0$). Through a stable bucket sort, we can sort the array by the digit at that position. Traverse the digit positions from the lowest to the highest. (Think about why this order works; the hint is the stability of the bucket sort. Still do not understand? Create some samples, run the process by hand, and see what it is doing.)

Decimal is not a must. As programmers, we tend to use binary, octal and hexadecimal. Here I recommend base-$256$ (the number of values a byte can represent).

#include <bits/stdc++.h>

int main() {
	constexpr size_t B = 256, LOG = 4;
	size_t n;
	std::cin >> n;
	std::vector<uint32_t> arr(n), buckets[B];
	for (size_t i = 0; i != n; ++i) {
		std::cin >> arr[i];
	}
	for (size_t i = 0, j; i != LOG; ++i) {
		for (j = 0; j != n; ++j) {
			buckets[(arr[j] >> i) & (B - 1)].emplace_back(arr[j]);
		}
		j = 0;
		for (auto &bucket : buckets) {
			for (auto &t : bucket) {
				arr[j++] = t;
			}
			bucket.clear();
		}
	}
	for (auto &t : arr) {
		std::cout << t << ' ';
	}
	std::cout << std::endl;
	return 0;
}

I use C++ here again because we need dynamic arrays; otherwise, the space needed might crash the program.

Time complexity is $O\left((B+n)\left\lceil\log_B M\right\rceil\right)$, in which $B$ denotes the base you use and $M$ denotes the range size of the elements: each of the $\lceil\log_B M\rceil$ passes touches all $n$ elements and all $B$ buckets. Space complexity is $O(B+n)$.

1.4.2.2.6. Heap Sort

First, let’s learn about the heap data structure (note: do not confuse it with the heap in memory management).

A heap is a data structure that supports insertion, deletion, and querying the maximum or minimum. The simplest form of a heap is the binary heap, a complete binary tree. Specifically, a heap that maintains the minimum is called a min-heap, while one maintaining the maximum is a max-heap.

For information on trees and binary trees, please refer to Wikipedia. I’m not going to explain them here, as otherwise this section would be too long to read.

Take min-heap for example.

To save space and reduce time wasted on space allocation, we suggest storing this binary tree in an array.

#include <stdint.h>

#define M 1000000
size_t m;
int32_t heap[M];

For the $i$-th node, its left child is node $(2\cdot i+1)$, its right child is node $(2\cdot i+2)$, and its parent is node $\begin{cases}i/2-1&(i\bmod 2=0)\\(i-1)/2&(i\bmod 2=1)\end{cases}=\left\lfloor\frac{i-1}{2}\right\rfloor$.

size_t i_left_child(size_t index) { return (2 * index + 1); }
size_t i_right_child(size_t index) { return (2 * index + 2); }
size_t i_parent(size_t index) { return ((index - 1) / 2); }

We need to maintain the tree so that the value of each node is less than or equal to those of its children, i.e. $\forall i\in\mathbb{N}:\ heap[i]\leq heap[2\cdot i+1]$ whenever $2\cdot i+1<m$, and $heap[i]\leq heap[2\cdot i+2]$ whenever $2\cdot i+2<m$, in which $m$ denotes the size of the heap. This rule ensures that the root node always contains the minimum.

int32_t heap_top() { return heap[0]; }

Each time we push an element, we append it to the end of the heap ($heap[m]$). Afterwards, repeatedly swap this node with its parent while the parent’s value is greater than its own.

void swap_int32(int32_t *p, int32_t *q) {
	if (p != q) {
		*p ^= *q;
		*q ^= *p;
		*p ^= *q;
	}
}
void heap_push(int32_t x) {
	heap[m] = x;
	for (size_t i = (m++); i && heap[i_parent(i)] > heap[i]; i = i_parent(i)) {
		swap_int32(heap + i_parent(i), heap + i);
	}
}

And when we pop an element, we swap the root node with the last one, then remove it by decreasing $m$ by one. Then we sift the new root down according to the rule.

void heap_pop(void) {
	swap_int32(heap, heap + (--m));
	for (size_t i = 0, j = 1; j < m;) {
		if (j + 1 < m && heap[j] > heap[j + 1]) {
			++j;
		}
		if (heap[i] > heap[j]) {
			swap_int32(heap + i, heap + j);
			i = j;
			j = 2 * i + 1;
		} else {
			break;
		}
	}
}

Above is all about the binary heap. Heaps provide an efficient way to dynamically get the minimum or maximum of their elements. Taking advantage of this feature, we can design the heap sort algorithm: push all elements into the heap and take them out one by one; then they are naturally arranged in ascending order. Each push or pop costs $O(\log n)$, so heap sort takes $O(n\log n)$ time, plus the $\Theta(n)$ space of the heap array.

#include <stdio.h>

int main(void) {
	size_t n;
	scanf("%zu", &n);
	for (size_t i = 0; i != n; ++i) {
		int32_t x;
		scanf("%d", &x);
		heap_push(x);
	}
	while (m) {
		printf("%d ", heap_top());
		heap_pop();
	}
	putchar('\n');
	return 0;
}

1.4.2.2.7. Merge Sort

Merge sort involves one of the most important ideas in programming: the divide-and-conquer strategy, a technique that splits a problem into independent subproblems and solves each recursively. It divides the entire array into two contiguous sub-arrays, sorts each recursively, and then merges the two ordered parts.

Then how do we merge two sorted arrays $a[]$ and $b[]$? Easy. Let two pointers point into $a[]$ and $b[]$ respectively. If $a[i]<b[j]$, append $a[i]$ to the result sequence (stored in an auxiliary array) and move $i$ forward one position; otherwise, append $b[j]$ and move $j$.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

void merge_sort(int32_t arr[], size_t size) {
	if (size == 0 || size == 1) {
		return;
	}
	size_t mid = size / 2;
	merge_sort(arr, mid);
	merge_sort(arr + mid, size - mid);
	int32_t *temp = (int32_t *)malloc(size * sizeof(int32_t));
	size_t i = 0, j = 0;
	while (i != mid && mid + j != size) {
		if (arr[i] < arr[mid + j]) {
			temp[i + j] = arr[i];
			++i;
		} else {
			temp[i + j] = arr[mid + j];
			++j;
		}
	}
	while (i != mid) {
		temp[i + j] = arr[i];
		++i;
	}
	while (mid + j != size) {
		temp[i + j] = arr[mid + j];
		++j;
	}
	for (i = 0; i != size; ++i) {
		arr[i] = temp[i];
	}
	free(temp);
}

int main(void) {
#define N 1000000 /* n can be as large as 10^6 */
	size_t n;
	int32_t arr[N];
	scanf("%zu", &n);
	for (size_t i = 0; i != n; ++i) {
		scanf("%d", arr + i);
	}
	merge_sort(arr, n);
	for (size_t i = 0; i != n; ++i) {
		printf("%d ", arr[i]);
	}
	putchar('\n');
	return 0;
}

If we analyze the calls to merge_sort, it’s not hard to notice that they form a tree-shaped structure: there are $O(\log n)$ levels, and the merging on each level costs $O(n)$ in total, so the time complexity is $O(n\log n)$. Due to the auxiliary array temp needed to temporarily store the merged result, the space complexity is $O(n)$.
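
The same reasoning can be stated as the standard divide-and-conquer recurrence:

$$
T(n)=2\,T\left(\frac{n}{2}\right)+\Theta(n)\implies T(n)=\Theta(n\log n)
$$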

1.4.2.2.8. Quick Sort

Quick sort is also an algorithm using the divide-and-conquer strategy. What differentiates it from merge sort is that it deals with the whole range first, before dividing it into 2 parts.

The basic idea is to arbitrarily (usually at random, though directly picking the first or last element is also acceptable for random data) pick one element in the given range as a criterion (called the pivot). Move all elements less than this special number to its left side, and the others to its right side ---- that’s why it’s called the criterion. After this process, called “partition”, we can sort both sides of the pivot in the same way (recursion). There are many ways to implement this; here I’d like to take one as an example.

First, randomly (to reduce the impact of extreme data, to a certain extent) pick an element as the pivot. At first the entire array is the “unknown zone”. Then check the elements one by one and move each into its proper part of the “known zone”. We maintain the known zone in two parts: $\forall k\in[0,i):arr[k]\leq pivot$ and $\forall k\in[i,j):pivot<arr[k]$, in which $j$ denotes the number of known elements (matching the code below). When we meet a new element $arr[j]$: if it is less than or equal to $pivot$, swap it with the first known element greater than $pivot$ (that is, $arr[i]$) and increment both $i$ and $j$, so the new element joins the “$\leq pivot$” part; otherwise, just increment $j$ so that it naturally enters the second part.

#include <stdint.h>
#include <stdlib.h>
#include <time.h>

void swap_int32(int32_t *p, int32_t *q) {
	if (p != q) {
		*p ^= *q;
		*q ^= *p;
		*p ^= *q;
	}
}
void quick_sort(int32_t arr[], size_t size) {
	/* seed the RNG once before sorting (e.g. srand(time(NULL)) in main) --
	 * reseeding here on every recursive call would defeat the randomization */
	if (size == 0 || size == 1) {
		return;
	}
	swap_int32(arr + rand() % size, arr + size - 1);
	/**
	 * To reach an $O(1)$ space complexity,
	 * we store $pivot$ at the last position of the array.
	 * However, that's not necessary.
	 */
	size_t i = 0, j = 0;
	// arr[0, i) <= pivot < arr[i, j)
	for (; j != size; ++j) {
		if (arr[j] <= arr[size - 1]) {
			swap_int32(arr + (i++), arr + j);
		}
	}
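	// after the loop, the pivot copy sits at arr[i - 1]:
	// arr[0, i - 1) <= pivot and arr[i, size) > pivot,
	// so recurse on each side only when it holds at least 2 elements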
	if (i > 2) {
		quick_sort(arr, i - 1);
	}
	if (i + 1 < size) {
		quick_sort(arr + i, size - i);
	}
}

The expected time complexity is $O(n\log n)$: each element is visited once per level of recursion, and there are about $\log n$ levels in expectation, since the array keeps being divided into 2 parts, which makes the calls form a binary tree. However, data like the array below (featuring reverse-ordered and repeated elements) can cause the complexity to degenerate to $O(n^2)$ (with many equal elements, the “greater than” part can stay empty, and each step shrinks the problem by only 1 instead of halving it), which is too slow.

$$
a[100000]=(100000,99999,...,78002,78001,50000,50000,50000,...,50000,22001,22000,...,2,1)
$$

To solve this problem, we adopt a bit of optimization: divide the known zone into 3 parts, respectively “less than”, “equal to”, and “greater than” the pivot. This is the so-called “3-way quick sort”. Nevertheless, piling all three parts together at the front would make it hard to add new elements (no longer a simple swap of 2 elements). The solution is to put the “greater than” part at the end of the array and let it grow in the reverse direction.

void quick_sort(int32_t arr[], size_t size) {
	/* as before, seed the RNG once in main rather than on every recursive call */
	if (size == 0 || size == 1) {
		return;
	}
	swap_int32(arr + rand() % size, arr);
	size_t i = 1, j = 0, k = size;
	// arr[0, j) < arr[j, i) == pivot < arr[k, n)
	for (; i != k; ++i) {
		if (arr[i] < arr[j]) {
			swap_int32(arr + i, arr + (j++));
		} else if (arr[i] > arr[j]) {
			swap_int32(arr + (i--), arr + (--k));
			// i decrements because we need to re-check the element that was at index k
		}
	}
	quick_sort(arr, j);
	quick_sort(arr + k, size - k);
}

This way, the expected time complexity is still $O(n\log n)$, and it no longer degenerates on repeated elements. The auxiliary space is $O(1)$, apart from the recursion stack, which is $O(\log n)$ in expectation.
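
Neither version of quick_sort above comes with a driver, so here is a minimal main in the same style as the earlier examples (the array size N and the I/O format are assumptions carried over from them). Note that it seeds the random number generator exactly once:

#include <stdio.h>

int main(void) {
#define N 1000000
	size_t n;
	static int32_t arr[N]; /* static: keeps the large array off the stack */
	srand(time(NULL));	   /* seed once for the random pivot selection */
	scanf("%zu", &n);
	for (size_t i = 0; i != n; ++i) {
		scanf("%d", arr + i);
	}
	quick_sort(arr, n);
	for (size_t i = 0; i != n; ++i) {
		printf("%d ", arr[i]);
	}
	putchar('\n');
	return 0;
}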

2. Connectionist AI

2.1. Learning Suggestions

2.2. Learning Material List

2.3. Subject Matter Details
